Archive for the 'Talks' Category

Generalize clinicaltrials.gov and register research hypotheses before analysis

Stanley Young is Director of Bioinformatics at the National Institute of Statistical Sciences, and gave a talk in 2009 on problems in modern scientific research. For example: only 1 in 20 NIH-funded studies actually replicates; data are closed and methods opaque; models are selected for significance; multiple comparisons go uncorrected. Here is the link to his talk: Everything Is Dangerous: A Controversy. There are a number of good examples in the talk, and Young anticipates, and is more intellectually coherent than, the New Yorker article The Truth Wears Off, if you were interested in that.
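
Young's multiple-comparisons point is easy to demonstrate. Below is a minimal simulation sketch (invented data; numpy and scipy assumed): every null hypothesis is true by construction, yet testing 100 of them at the 0.05 level still yields a handful of "significant" results.

```python
# Minimal sketch of the multiple-comparisons problem: all nulls are true,
# yet roughly 5% of tests come out "significant" at p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_obs = 100, 50

false_positives = 0
for _ in range(n_tests):
    # Two groups drawn from the SAME distribution, so every null is true.
    a = rng.normal(size=n_obs)
    b = rng.normal(size=n_obs)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < 0.05:
        false_positives += 1

# Expect roughly 5 spurious "discoveries" out of 100 true nulls.
print(f"{false_positives} of {n_tests} tests significant at p < 0.05")
```

Run enough uncorrected tests on one data set and a publishable p-value is nearly guaranteed, which is Young's point.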

Idea: Generalize clinicaltrials.gov, where scientists register their hypotheses prior to carrying out their experiments. Why not do this for all hypothesis tests? Have a site where hypotheses are logged and time stamped before researchers gather the data or carry out the actual hypothesis testing for the project. I've heard this idea mentioned occasionally, and both Young and Lehrer mention it as well.
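
To make the time-stamping idea concrete, here is a minimal sketch of what a registry entry might look like. Everything here (the function name, the record fields) is illustrative; a real registry would also need public, tamper-evident storage.

```python
# Minimal sketch of a hypothesis registration record: hash the hypothesis
# text and attach a UTC timestamp before any data are gathered, so the
# registration can later be checked against the published analysis.
import hashlib
import json
from datetime import datetime, timezone

def register_hypothesis(text: str) -> dict:
    """Return a timestamped, hash-stamped registration record."""
    return {
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "registered_at": datetime.now(timezone.utc).isoformat(),
        "hypothesis": text,
    }

record = register_hypothesis(
    "H1: Treatment X reduces symptom score relative to placebo."
)
print(json.dumps(record, indent=2))
```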

Science and Video: a roadmap

Once again I find myself in the position of having collected slides from talks, and having audio from the sessions. I need a simple way to pin these together so they form a coherent narrative, and I need a common sharing platform. We don't really have to see the speaker to understand the message, but we need the slides and the audio to play in tandem, with the slides changing at the correct points. Some of the files are quite large: slide decks can be over 100MB, and right now the audio file I have is 139MB (SlideShare has size limits that don't accommodate this).

I'm writing because I feel the messages are important and need to be available to a wider audience. This is often our culture, our heritage, our technology, our scientific knowledge and our shared understanding. These presentations need to be available not just on principled open access grounds; it is imperative that other scientists hear these messages as well, amplifying scientific communication.

At a bar the other night a friend and I came up with the idea of S-SPAN: a C-SPAN for science. Talks and conferences could be filmed and shared widely on an internet platform. Of course these platforms exist, and some even target scientific talks, but the content also needs to be marshalled and directed onto the website. Some of the best stuff I've ever seen has floated into the ether.

So, I make an open call for these two tasks: a simple tool to pin together slides and audio (and slides and video) – a sketch of what such a tool might record follows – and an effort to collate video from scientific conference talks, filming them where no video exists, all onto a common distribution platform. S-SPAN could start as raw and underproduced as C-SPAN, but I am sure it would develop from there.
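
As a sketch of how simple the pinning tool could be, consider a manifest that maps moments in the audio to slide numbers; a player would just advance the deck as playback passes each cue. The file names and times below are made up.

```python
# Minimal sketch of a slide/audio sync manifest and lookup. A player
# advances the deck whenever playback passes the next cue time.
import bisect
import json

manifest = {
    "audio": "session1.mp3",          # illustrative file names
    "slides": "session1_deck.pdf",
    "cues": [                          # [seconds into audio, slide number]
        [0, 1],
        [95, 2],
        [210, 3],
    ],
}

def slide_at(t_seconds: float) -> int:
    """Return which slide should be showing at a given audio time."""
    times = [t for t, _ in manifest["cues"]]
    i = bisect.bisect_right(times, t_seconds) - 1
    return manifest["cues"][max(i, 0)][1]

print(slide_at(120))         # -> 2
print(json.dumps(manifest))  # what a sharing platform would store
```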

I’m looking at you, YouTube.

My Symposium at the AAAS Annual Meeting: The Digitization of Science

Yesterday I held a symposium at the AAAS Annual Meeting in Washington DC, called “The Digitization of Science: Reproducibility and Interdisciplinary Knowledge Transfer,” that was intended to bring attention to how massive computation is changing the practice of science, particularly the lack of reproducibility of published computational scientific results. The fact is, most computational scientific results published today are unverified and unverifiable. I’ve created a page for the event here, with links to slide decks and abstracts. I couldn’t have asked for a better symposium, thanks to the wonderful speakers.

The first speaker was Keith A. Baggerly, who (now famously) tried to verify published results in Nature Medicine and uncovered a series of errors that led to the termination of clinical trials at Duke that were based on the original findings, and to the resignation of one of the investigators (his slides). I then spoke about policies for realigning the IP framework scientists work under with their longstanding norms, to permit sharing of code and data (my slides). Fernando Perez described how computational scientists can learn not only about code sharing, quality control, and project management from the open source software community, but how that community has in fact developed what is in effect a deeply successful system of peer review for code. Code is verified line by line before being incorporated into the project, and there are software tools to enable communication between reviewer and submitter, down to the line of code (his slides).

Michael Reich then presented GenePattern, an OS-independent tool developed with Microsoft for creating data analysis pipelines and incorporating them into a Word document. Once in the document, tools exist to click and recreate the figure from the pipeline and examine what's been done to the data. Robert Gentleman advocated the entire research paper as the unit of reproducibility, and David Donoho presented a method for assigning a unique identifier to figures within the paper, which creates a link for each figure and permits its independent reproduction (the slides). The final speaker was Mark Liberman, who showed how the human language technology community had developed a system of open data and code in their efforts to reduce errors in machine understanding of language (his slides). All the talks pushed on delineations of science from non-science, perhaps best encapsulated by a quote Mark introduced from John Pierce, a Bell Labs executive, in 1969: "To sell suckers, one uses deceit and offers glamor."

There was some informal feedback, with a prominent person saying that this session was “one of the most amazing set of presentations I have attended in recent memory.” Have a look at all the slides and abstracts, including links and extended abstracts.

Update: Here are some other blog posts on the symposium: Mark Liberman’s blog and Fernando Perez’s blog.

Video from "The Great Climategate Debate" held at MIT December 10, 2009

This is an excellent panel discussion regarding the leaked East Anglia docs as well as standards in science and the meaning of the scientific method. It was recorded on Dec 10, 2009, and here’s the description from the MIT World website: “The hacking of emails from the University of East Anglia’s Climate Research Unit in November rocked the world of climate change science, energized global warming skeptics, and threatened to derail policy negotiations at Copenhagen. These panelists, who differ on the scientific implications of the released emails, generally agree that the episode will have long-term consequences for the larger scientific community.”

Moderator: Henry D. Jacoby, Professor of Management, MIT Sloan School of Management, and Co-Director, Joint Program on the Science and Policy of Global Change, MIT.

Panelists:
Kerry Emanuel, Breene M. Kerr Professor of Atmospheric Science, Department of Earth, Atmospheric and Planetary Sciences, MIT;
Judith Layzer, Edward and Joyce Linde Career Development Associate Professor of Environmental Policy, Department of Urban Studies and Planning, MIT;
Stephen Ansolabehere, Professor of Political Science, MIT, and
Professor of Government, Harvard University;
Ronald G. Prinn, TEPCO Professor of Atmospheric Science, Department of Earth, Atmospheric and Planetary Sciences, MIT; Director, Center for Global Change Science; Co-Director of the MIT Joint Program on the Science and Policy of Global Change;
Richard Lindzen, Alfred P. Sloan Professor of Meteorology, Department of Earth, Atmospheric and Planetary Sciences, MIT.

Video, running at nearly 2 hours, is available at http://mitworld.mit.edu/video/730.

What's New at Science Foo Camp 2009

SciFoo is a wonderful annual gathering of thinkers about science. It's an unconference: the sessions happen because people choose to give them. Here are my reactions to a couple of the talks.

In Pete Worden's discussion of modeling future climate change, I wondered about the reliability of simulation results. Worden conceded that several models make the same kinds of predictions he showed, and that they can give wildly opposing results. We need to develop the machinery to quantify error in simulation models just as we routinely do for conventional statistical modeling: simulation is often the only empirical tool we have for guiding policy responses to some of our most pressing issues.
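
One standard piece of such machinery is ensemble analysis: run the simulation across the plausible range of inputs and report the spread of outputs rather than a single run. A minimal sketch follows; the "model" is a stand-in for illustration, not a real climate model.

```python
# Minimal sketch of ensemble-based uncertainty quantification: vary an
# uncertain parameter over its plausible range and report the spread of
# predictions instead of a single point estimate.
import numpy as np

rng = np.random.default_rng(1)

def model(sensitivity: float) -> float:
    # Stand-in for one simulation run: predicted outcome under one
    # parameter setting, plus some internal variability.
    return 1.5 * sensitivity + rng.normal(scale=0.2)

# An ensemble over plausible parameter values plays the role of the
# "several models" that can disagree.
predictions = np.array([model(s) for s in np.linspace(1.5, 4.5, 200)])

print(f"mean prediction: {predictions.mean():.2f}")
print(f"90% interval: {np.percentile(predictions, [5, 95]).round(2)}")
```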

But the newest idea I heard was Bob Metcalfe's call for us to imagine what to do with the coming overabundance of energy. Metcalfe likened solving energy scarcity to the early days of Internet development: because of the generative design of Internet technology, we now have things that were unimagined in the early discussions, such as YouTube and online video. According to Metcalfe, we need to envision our future as including a "squanderable abundance" of energy, and use Internet lessons such as standardization and distribution of power sources to get there, rather than building for energy conservation.

Cross posted on The Edge.

Bill Gates to Development Researchers: Create and Share Statistics

I was recently in Doha, Qatar, presenting my research on global communication technology use and democratic tendency at ICTD09. I spoke right before the keynote speaker, Bill Gates, whose main point was that when you engage in a goal-oriented activity, such as development, progress can only be made when you measure the impact of your efforts.

Gates paints a positive picture, measured by deaths before age 5. In the 1880s, he says, about 30% of children died before their fifth birthday in most countries; annual deaths of children under 5 gradually fell to 20 million by 1960 and 10 million by 2006. Gates attributes the decrease to rising income levels (40% of the decrease) and medical innovation such as vaccines (60% of the decrease).

This is an example of Gates' mantra: you can only improve what you can measure. An outbreak of measles, for example, tells you your vaccine system isn't functioning. In the case of childhood deaths, he says we are getting somewhere because we are measuring the value for the money spent on the problem.

Gates thinks the wealthy in the world need to be exposed to these problems, ideally through intermingling or, since that is unlikely to happen, through statistics and data visualization. Collect data, then communicate it. In short, Gates advocates creating statistics by measuring development efforts, and changing the world by exposing people to these data.

Wolfram|Alpha Demoed at Harvard: Limits on Human Understanding?

Yesterday Stephen Wolfram gave the first demo of Wolfram|Alpha, launching in May, which he modestly describes as a system to make our stock of human knowledge computable. It includes not just facts, but also our algorithmic knowledge. He says, "Given all the methods, models, and equations that have been created from science and analysis – take all that stuff and package it so that we can walk up to a website and ask it a question and have it generate the knowledge that we want. … like interacting with an expert."

It's ambitious, but so are Wolfram's previous projects: Mathematica and MathWorld. I relied on MathWorld as a grad student – it was excellent, so I remember when it suddenly disappeared while the content was being published as a book. In 2002 he published A New Kind of Science, arguing that all processes, including thought, can be viewed as computations, and that a simple set of rules can describe a complex system. This thinking is clearly evident in Wolfram|Alpha, and here are some key examples.
Continue reading ‘Wolfram|Alpha Demoed at Harvard: Limits on Human Understanding?’

Stuart Shieber and the Future of Open Access Publishing

Back in February Harvard adopted a mandate requiring its faculty members to make their research papers available within a year of publication. Stuart Shieber is a computer science professor at Harvard and was responsible for proposing the policy. He has since been named director of Harvard's new Office for Scholarly Communication.

On November 12 Shieber gave a talk entitled "The Future of Open Access — and How to Stop It" to give an update on where things stand after the adoption of the open access mandate. Open access isn't just something that makes sense from an ethical standpoint: Shieber points out that (for-profit) journal subscription costs have risen out of proportion with inflation and with the costs of nonprofit journals. He notes that the cost per published page in a commercial journal is six times that of the nonprofits. With the current library budget cuts, open access – meaning both access to articles directly on the web and shifting subscriptions away from for-profit journals – is something that appears financially unavoidable.

Here’s the business model for an Open Access (OA) journal: authors pay a fee upfront in order for their paper to be published. Then the issue of the journal appears on the web (possibly also in print) without an access fee. Conversely, traditional for-profit publishing doesn’t charge the author to publish, but keeps the journal closed and charges subscription fees for access.

Shieber recaps Harvard’s policy:

1. The faculty member grants permission to the University to make the article available through an OA repository.

2. There is a waiver for articles: a faculty member can opt out of the OA mandate at his or her sole discretion. For example, if you have a prior agreement with a publisher you can abide by it.

3. The author themselves deposits the article in the repository.

Shieber notes that the policy is also valuable because it allows Harvard to make a collective statement of principle, to systematically provide metadata about articles, to clarify the rights accruing to each article, to facilitate the article deposit process, and to negotiate collectively; in addition, having the mandate be opt-out rather than opt-in might increase rights retention at the author level.

So the concern Shieber set up in his talk is whether standards for research quality and peer review will be weakened. Here’s how the dystopian argument runs:

1. all universities enact OA policies
2. all articles become OA
3. libraries cancel subscriptions
4. prices go up on remaining journals
5. these remaining journals can’t recoup their costs
6. publishers can’t adapt their business model
7. so the journals, and the logistics of peer review they provide, disappear

Shieber counters this argument: 1 through 5 are good because journals will start to feel some competitive pressure. What would be bad is if publishers cannot change their way of doing business. Shieber thinks that even if this is so it will have the effect of pushing us towards OA journals, which provide the same services, including peer review, as the traditional commercial journals.

But does the process of getting there cause a race to the bottom? The argument goes like this: since OA journals are paid by the number of articles published, they will just publish everything, thereby destroying standards. Shieber argues this won't happen because there is price discrimination among journals – authors will pay more to publish in the more prestigious journals. For example, PLoS charges about $3,000 per article, BioMed Central about $1,000, and Scientific Publishers International $96. Shieber also argues that Harvard should have a fund to support faculty who wish to publish in an OA journal and have no other way to pay the fee.

This seems to imply that researchers with sufficient grant funding, or those covered by his proposed Harvard publication-fee subsidy, would be immune to the fee pressure: they could simply submit to the most prestigious journal and work their way down the chain until the paper is accepted. It also means that editors and reviewers decide what constitutes the best scientific articles by determining acceptance.

But is democratic representation in science a goal of OA? Missing from Shieber's described market for scientific publications is any kind of feedback from the readers. The content of these journals, and the determination of prestige, is defined solely by the editors and reviewers. Maybe this is a good thing. But maybe there's an opportunity to open this up by allowing readers a voice in the market. This could be done through ads or a very tiny fee on articles – both would give OA publishers an incentive to respond to the preferences of readers. Perhaps OA journals should be commercial in the sense of profit-maximizing: they might have a reason to listen to readers and might be more effective at maximizing their prestige level.

This vision of OA publishing still effectively excludes researchers who are unable to secure grants or are not affiliated with a university that offers a publication subsidy. The dream behind OA publishing is that everyone can read the articles, but to fully engage in the intellectual debate, quality research must still find its way into print, at the appropriate level of prestige, regardless of the affiliation of the researcher. This is the other side of OA that is very important for researchers from the developing world or thinkers whose research is not mainstream (see, for example, Garrett Lisi, a high-impact researcher who is unaffiliated with any institution).

The OA publishing model Shieber describes is a clear step forward from the current model, where journals are accessible only to affiliates of universities that have paid the subscription fees. It might be worth continuing to move toward an OA system where not only can anyone access publications, but any quality research can be published, regardless of the author's affiliation and wealth. To get around the financial constraints, one approach might be to allow journals to fund themselves through ads, or to provide subsidies to certain researchers. This also opens up the question of who decides what is quality research.

Benkler: We are collaborators, not knaves

Yochai Benkler gave a talk today to mark his appointment as the Jack N. and Lillian R. Berkman Professor of Entrepreneurial Legal Studies at Harvard Law School. Jack Berkman (now deceased) is the father of Myles Berkman, whose family endowed both the Berkman Center (where I am a fellow) and Benkler's professorial chair.

His talk was titled "After Selfishness: Wikipedia 1, Hobbes 0 at Half Time," and he sets out to show that there is a sea change happening in the study of organizational systems, one that far better reflects how we actually interact, organize, and operate. He explains that the collaborative movements we generally characterize as belonging to the new internet age (free and open source software, Wikipedia) are really just instantiations of a wider and pervasive, in fact completely natural and longstanding, phenomenon in human life.

This is due to how we can organize capital in the information and networked society: we own the core physical means of production as well as knowledge, insight, and creativity. Now we're seeing longstanding social practices, such as non-hierarchical norm generation and collaboration, move from the periphery of society to the center of our productive enterprises. Benkler's key point in this talk is that this shift is not limited to Internet-based environments, but part of a broader change happening across society.

So how do we get people to produce good stuff? Money? Prizes? Competition? Benkler notes the example of YouTube.com – contributors are not paid, yet the community thrives. Benkler hypothesizes that the key is that people feel secure in their involvement with the community: not paying them, but creating a context where people feel secure collaborating in a system. Another example is Kaltura.com: Benkler attributes its success to the ways it assures contributors that they will be able to control what they produce. Cash doesn't change hands. The challenge is to learn about human collaboration in general from these web-based examples – in Benkler's words, "replacing Leviathan with a collaborative system."

Examples outside the web-based world include GM's experience with its Fremont plant, which was among the worst-performing in the company. GM shut it down for two years and brought it back 85% staffed by the previous workforce, with the same union, but reorganized collaboratively to align incentives. Process engineers no longer dictate from the shop floor; direct control over experimentation and flow sits at the team level. The plant did so well it forced the Big Three to copy it, although they did so in less purely collaborative ways, such as retaining competitive bidding. Benkler's point is that there is a need for long-term relationships based on trust. An emphasis on norms and trust, along with greater teamwork and autonomy for workers, implies a more complex system with less perfect control than Hobbes' Leviathan vision. The world changes too quickly for the old encumbered hierarchical model of economic production.

Benkler thinks this leads us to study social dynamics, an open field without many answers yet. He also relates this work to evolutionary biology: from the group selection theory of the '50s, to the individualistic conception in Dawkins' theory of the selfish gene in the '70s, and now to multi-level selection, with cooperation as a distinct driving force in evolution rather than the other way around. This opens a vein of research into empirical deviations from selfishness, one pillar of homo economicus, just as Kahneman and Tversky challenged its twin pillar, rationality.

Benkler’s vision is to move away from the simple rigid hierarchical models toward ones that are richer and more complex and can capture more of our actual behavior, while still being tractable enough to produce predictions and a larger understanding of our world.

Justice Scalia: Populist

Justice Scalia (HLS 1960) is speaking at the inaugural Herbert W. Vaughan Lecture today at Harvard Law School. It's packed – I arrived at 4pm for the 4:30 talk and joined the end of a long line… then was immediately told the auditorium was full and was relegated to an overflow room with video. I'm lucky to have been early enough to even see it live.

The topic of the talk hasn’t been announced and we’re all waiting with palpable anticipation in the air. The din is deafening.

Scalia takes the podium. The title of his talk is “Methodology of Originalism.”

His subject is the intersection of constitutional law and history. He notes that the orthodox view of constitutional interpretation, up to the time of the Warren Court, was that the constitution is no different from any other legal text. That is, it bears a static meaning that doesn’t change from generation to generation, although it gets applied to new situations. The application to pre-existing phenomena doesn’t change over time, but these applications do provide the data upon which to decide the cases on the new phenomena.

Things changed when the Warren Court held in New York Times Co. v. Sullivan, 376 U.S. 254 (1964), that tolerating good-faith libel of public figures was good for democracy. Scalia says this might be so, but that such a change should be made by statute and not by the court. He argues this is respectful of the democratic system in that laws are reflections of people's votes. This is the first, and perhaps the best known, of two ways Scalia comes across as populist in this talk. In a question at the end he says that the whole theory of democracy is that a justice is not supposed to be writing a constitution, just reflecting what the American people have decided. If you believe in democracy, he explains, you believe in majority rule. In liberal democracies like ours we have made exceptions and given protection to certain minorities, such as religious or political minorities. But his key point is that the people made these exceptions, i.e., they were adopted in a democratic fashion.

But doesn't originalism require you to know the original meaning of a document? And isn't history a science unto itself, different from law? Scalia responds first that history is central to the law, at the very least because the meanings of words change over time. So inquiry into the past certainly has to do with the law, and vice versa. He notes that the only way to assign meaning to many of the phrases in the constitution is through historical understanding: for example "letters of marque and reprisal," "habeas corpus," etc. Second, he gives a deeply non-elitist argument about the quality of expert vs. nonexpert reasoning. This is the second way Scalia expresses a populist sentiment.

In District of Columbia v. Heller, 554 U.S. ___ (2008), the petitioners contended that the term "bear arms" carried only a military meaning, although previous cases show this isn't true. But this case was about more than the historical usage of words: the 2nd Amendment didn't say "the people shall have the right to keep and bear arms," for example, but that "the right of the people to keep and bear arms shall not be infringed" – as if this were a pre-existing right. So Scalia argues there was a place for historical inquiry here, and it showed there was such a pre-existing right: in the English Bill of Rights of 1689 (as found in Blackstone). In that light it is hard to see the 2nd Amendment as merely the right to join a militia, which is all the prologue about a well regulated militia might suggest. This goes much further than just lexicography.

So what can be expected of judges? Scalia argues, like Churchill's argument for democracy, that all an originalist need show is that originalism beats the alternatives. He says this isn't hard to do, since inquiry into original meaning is not as difficult as opponents suggest. One place to look when the framers' intent is not clear is states' older interpretations. And in the vast majority of cases, including the most controversial ones, the originalist interpretation is clear. His examples of rights with clear original intent are abortion, a right to engage in homosexual sodomy, assisted suicide, and prohibition of the death penalty (the death penalty was historically the only penalty for a felony) – these rights are not found in the constitution. Determining whether there should be (and hence is, for a non-originalist judge) a right to abortion or same-sex marriage or whatnot requires moral philosophy, which Scalia says is harder than historical inquiry.

As further evidence of the symbiotic relationship between law and history, he notes that history departments have legal historians and law schools have historical experts.

Scalia gives the case of Thompson v. Oklahoma, 487 U.S. 815 (1988), as an example of a situation in which historical reasoning played little part, and he uses this as a baseline to argue that the role of historical reasoning in Supreme Court opinions is increasing. The briefs in Thompson were of no help with historical questions since they did not touch on the history of the 8th Amendment, but Scalia says this isn't surprising, since the history of the clause had been written out of the argument by previous thinking. Another case, Morrison v. Olson, 487 U.S. 654 (1988), considered a challenge to the statute creating the independent counsel. These questions could have benefited from historical clarification, and the briefing in Morrison did take up historical questions, such as what the term "inferior officers" meant at the time of the founding. Two briefs authored by HLS faculty (Cox, Fried) provided useful historical material, but the historical referencing was sparse and none of these briefs were written by scholars of legal history.

In contrast, in Heller the parties' briefs again offered little historical context, but this time many amicus briefs focused on historical arguments and material – a very different situation from that of 20 years ago. There were several briefs from legal historians, each containing detailed discussions of the historical right to bear arms in England and here at the time of the founding. Such material was the heart of these briefs, not relegated to a footnote as it likely would have been 20 years ago, and as it was in Morrison. Scalia thinks this reinforces the use of the originalist approach, by showing how easy it is compared to other approaches.

Scalia eschews amicus briefs in general, especially insofar as they repeat the arguments made by the parties, because their pretense to scholarly impartiality may convince judges to sign on to briefs that are anything but impartial. "Disinterested scholarship and advocacy do not mix well."

Scalia takes on a second argument made against the use of history in the courts – that the history used is "law office history," that is, the selection of data favorable to the position being advanced without regard for contradictory data or relevance. Here the charge is not incompetence but tendentiousness: advocates cannot be trusted to present an unbiased view. But of course! says Scalia, since they are advocates. Insofar as the criticism is directed at the court, however, it is essential that the adjudicator be impartial. "Of course a judicial opinion can give a distorted picture of historical truth, but this would be an inadequate historical opinion and not that which is expected" from the Court. Scalia admonishes that one must review the historical evidence in detail rather than raise the "know nothing" cry.

This is Scalia's second populist argument: it is deeply non-elitist, since it implies that nonprofessional historians are capable of coming up with good historical understanding. It dovetails with the notion of opening knowledge and respecting the autonomy of individuals to evaluate reasoning and data and come to their own conclusions (and even be right sometimes). Scalia notes that he sees the role of the Court as drawing conclusions from these facts, which is different from the role of the historian.

But he feels quite differently about the conclusions of experts in other fields. For example, in overruling Dr. Miles Medical Co. v. John D. Park and Sons, 220 U.S. 373 (1911), and holding that resale price maintenance isn't a per se violation of the Sherman Act, he didn't feel uncomfortable, since this is the almost uniform view of professional economists. Scalia seems to be saying that experts are probably right more often than nonexperts, but nonexperts can also contribute. He casts himself as an expert in judicial analysis, and says historical analysis differs from, say, the type of engineering analysis that might be required in patent cases: he distinguishes between types of subject matter that are more and less susceptible to successful nonexpert analysis.

Scalia then advocates submitting analysis to public scrutiny with the data open, thus allowing suspect conclusions to be challenged. The originalist will reach substantive results he doesn't personally favor, and the reasoning process should be open. Scalia notes that this is more honest than judges who reason morally, who will never disagree with their own opinions.

There was a question that got the audience laughing at the end. The questioner claims to have approached a Raytheon manufacturing facility to buy a missile or tank, since in his view the 2nd Amendment is about keeping the government scared of the people, and somehow having a gun when the government has more advanced weaponry misses the point. Scalia thinks this is outside the scope of the 2nd Amendment because “You can’t bear a tank!”

Legal Barriers to Open Science: my SciFoo talk

I had an amazing time participating at Science Foo Camp this year. This is a unique conference: there are 200 invitees comprising some of the most innovative thinkers about science today. Most are scientists but not all – there are publishers, science reporters, scientific entrepreneurs, writers on science, and so on. I met old friends there and found many amazing new ones.

One thing that I was glad to see was the level of interest in Open Science. Some of the top thinkers in this area were there, and I'd guess at least half the participants are highly motivated by this problem. There were sessions on reporting negative results, the future of the scientific method, and reproducibility in science. I organized a session with Michael Nielsen on overcoming barriers in open science; I spoke about the legal barriers, and O'Reilly Media has made the talk available here.

I have papers forthcoming on this topic you can find on my website.

Cass Sunstein and Yochai Benkler at MIT – Our Digitized World: The Good, the Bad, the Ugly.

Last Thursday, April 10, MIT hosted a debate/discussion between Yochai Benkler and Cass Sunstein (audio can be found here). Both are Harvard Law professors (Sunstein coming from Chicago in the fall) and, perhaps unsurprisingly, the discussion became very philosophical. Both have written prolifically on technology and our future, especially Benkler's The Wealth of Networks and Sunstein's Infotopia and Republic.com 2.0. Henry Jenkins, co-director of Comparative Media Studies and Professor of Humanities at MIT, is moderating, using those three books as the basis for his questions.

The first question Jenkins poses asks for metrics on how to measure the quality of online democracy. He quotes from both Sunstein and Benkler’s books to set off the dueling:

Sunstein1: “Any well functioning society depends on relationships of trust and reciprocity, in which people see their fellow citizens as potential allies, willing to help, and deserving of help when help is needed.”

Sunstein2: “A well functioning society of free expression must have two distinct requirements: first, people should be exposed to materials that they would not have chosen in advance, and second, many or most citizens have a range of common experiences.”

Benkler: “The new freedom holds great practical promise: as a dimension of individual freedom; as a platform for better democratic participation; as a medium to foster a more critical and self-reflective culture; and in an increasingly information-dependent global economy, as a mechanism to achieve improvements in human development everywhere.”

Jenkins asks the professors to give the current space a grade. Sunstein ranks it a C-, since there is still babble and chaos and cruelty, even though there is order and brilliance and ingenuity. He likes Benkler's idea of a self-reflective culture willing to appraise itself, but his sense is that the internet is the opposite of self-reflection and provides only for entrenchment of pre-existing views.

Benkler gives a higher grade than C- and ascribes this to the lower degree of constraint on action on the internet – this is determinative of how we evaluate "normative life lived as a practical matter." He agrees that a well-functioning society depends on trust and reciprocity, but finds these in existence on the web through pervasive collaboration. He contrasts this with the authority-driven approach traditionally used by the mainstream media.

Benkler states that Sunstein takes too passive a view of citizenship in his description of the requirements of a system of free expression. He doesn't envision citizens as passively exposed to streams of information and equipped with some pre-existing common frame of reference. Benkler imagines a capacity to act, to take in and filter for accreditation and salience, and ultimately to set the current agenda. He sees freedom of expression manifested in part by participating in production of the agenda, and claims this view makes the networked public sphere more attractive than Sunstein sees it, with the result that mainstream media will appear less attractive by comparison.

At this Sunstein concedes his grade of C- was probably too harsh: he meant it in comparison to a realistic ideal rather than as a historical comparison. We're doing better than in 1975. In response to Benkler's point about passivity, he states that his calls for exposure to new materials and shared experiences are only necessary conditions, acting as a counterweight to the notion that unlimited free choice brings a capacity for self-sorting of internet communication. His sense is that "real internet geeks" come close to being libertarians in the University of Chicago tradition, so this notion of capacity becomes idealized as follows: if you are sovereign over your choices, we have reached the ideal. Sunstein resists this and says we need to judge by outcomes: in a well-functioning system you don't construct a Daily Me; your attention needs to be grabbed or else you'll never realize your interest in other issues. Self-sorting alone is too risky to be a reliable mechanism for people to get a good understanding of issues, so his two conditions become necessary features of the web and preconditions for a well-functioning democratic society.

Sunstein adds that this characterization paints his picture of people's interaction with the web as more passive than what he meant. Active citizenship is fueled by shared experiences and unanticipated exposure to new materials and ideas. He cites national holidays like Martin Luther King Day or the Fourth of July as enabling us to see each other as involved in a common enterprise, which engenders a participatory approach to societal life among citizens.

Benkler responds that the difference between his and Sunstein's positions is power and context, freedom and constraint. He questions whether Sunstein's proposed necessary condition of a common experience would make something closer to traditional mainstream media desirable, where the sharing of experience was often mediated by a government-controlled agency or a newspaper. Benkler defines an elite as someone who can affect the agenda, and observes that today that is a few million people, versus the few thousand it used to be. So power is being diffused in myriad ways. The example he gives is from the netroots of the Democratic party: citizens can now move their donations to marginal seats and away from the war chests of safe seats, rather than this being an internal decision by the party bosses. This freedom, what Benkler calls the "I can affect" freedom, is what he is interested in.

The second question Jenkins poses also starts with quotes, and he asks whether we are in danger of excessive fragmentation on the web:

Continue reading ‘Cass Sunstein and Yochai Benkler at MIT – Our Digitized World: The Good, the Bad, the Ugly.’

Amartya Sen at the Aurora Forum at Stanford University: Global Solidarity, Human Rights, and the End of Poverty

This is a one-day conference to commemorate Martin Luther King's 1967 speech at Stanford, "The Other America," and heed that speech's call to create a more just world.

Mark Gonnerman, director of the Aurora Forum, introduces the event by noting that economic justice is the main theme of King's legacy. He references King's 1948 paper laying out his mission as a minister, in which his goal is to deal with unemployment, slums, and economic insecurity. He doesn't mention civil rights. So the effect of Rosa Parks was to turn him in a different direction from his original mission – the gulf between rich and poor – to which he later returned. Gonnerman reminds us of the interdependence of global trade and how, even before we leave the house for work, we have used products from all parts of the globe, rich and poor. He quotes King that the agony of the poor enriches the rest.

Thomas Nazario, founding director of The Forgotten International, outlines the face of poverty. He lists the 5 problems in the UN Millennium Report as the charge for the coming generation:

1. global warming
2. world health, including basic health and pandemic avoidance
3. war and nuclear proliferation
4. protection of human rights
5. world poverty

He describes world poverty in two ways. The first is by focusing on the gap between rich and poor: he says there are about 1,000 billionaires and claims their money could provide services to half the people on Earth. The second is by focusing on the suffering associated with poverty. Nazario shows us some compelling images of poverty and busts some myths: children do go through garbage and fight rats and other vermin (usually dying before age 5); impoverished people tend to live around rivers, since the riverbank, flooding regularly, is common land; images from Ethiopia's war, conflict, and famine in the 1980s (he notes that where there is extreme poverty, there is extreme fragility of life – any perturbation in the environment will cause death). He says 6 million children die before the age of 5 from hunger and lack of medical care. He also busts the myth that most of the poverty in the world is in Africa – it is in Asia, especially in India. There are 39 million street children in the world, often living in sewers. Of course, poverty is a cause of illiteracy, not only because of the cost of education but because impoverished children usually work to survive.

Amartya Sen is Lamont Professor and Professor of Economics and History at Harvard University, and the 1998 Nobel Prize winner in economics; I wrote a book review here of his book _Development as Freedom_. His talk has two components: he speaks first about global poverty and then about human rights. He begins by noting that hope for humanity, as Martin Luther King emphasized, is essential for these topics. Sen hopes that the easily preventable deaths of millions of children are not an inescapable human condition, and that the fatalism about them in the developed world recedes. He also takes on the anti-globalization viewpoint by noting that globalization can be seen as a great contributor to world wealth. He insists globalization is a key component of reform, as there is an enormous positive impact in bringing people together, but the sharing of the spoils needs to be more equitable. Sen advocates a better understanding of economics to help us reform world development institutions, but with a caveat: "a market is as good as the company it keeps." By this he means that the circumstances governing market outcomes – the current distribution of resources, or people's ability to enter market transactions, for example – depend on things such as the availability of healthcare and the existence of patent and contract laws conducive to trade.

Sen distinguishes short-run and long-run policies. In the long run the goal is to keep unemployment low in all countries (so, for example, he advocates government help with training and job placement for Americans whose jobs have become obsolete due to technological progress). In the short run it is essential to have an adequate system of social safety nets providing a minimum income, healthcare, and children's schooling (which has the long-run effect of improving people's adaptability in the workforce). Sen eschews both economic stagnation and the rejection of economic reform.

Sen is very concerned that the fruits of globalization are not being justly shared and, even though globalization does bring economic benefit for all, he sees this inequality as the root of poverty. He also warns people not to rely on "the market outcome" as a way of washing their hands of the problem, since the outcome of the market depends on a number of factors – resource ownership patterns and various rules of operation (like antitrust and patent laws) – that will give different prices and different income distributions.

Sen, consistent with his hopeful theme, notes important things subject to reform and change:

1. an adequately strong global effort to combat lack of education and healthcare
2. improving existing patent laws and reduction of arms supply

For the first point, there is a need for further worldwide cooperation to combat illiteracy and provide other social services. Sen suggests immediate remedies, such as halting the suppression of exports from poor countries, and longer-term remedies, like reconsidering the 1940s legacy of global institutions such as the UN and reforming patent systems that prevent getting drugs to poor countries. After all, understanding and modifying incentive structures is "what economics is supposed to be about." On the second point, Sen believes the globalized trade in arms causes both regional and global tension. This isn't a problem confined to poor countries; on the contrary, the G8 consistently sells more than 80% of arms exports (with about two-thirds of American arms exports going to developing countries), and the permanent members of the Security Council were responsible for more than 80% of the global arms trade – yet the issue has never been discussed in the Security Council. There is a cascade effect here – warlords can rely on American or Russian support for their subversion of economic order and peace (Sen mentions Mobutu as a case in point; the example of Somalia I have blogged about, with American support for Ethiopia, is another). To change this we need to reform the role of ethics, which Sen generalizes into a discussion of human rights.

The contraposition of opulence and agony makes us question the ethics of the status quo, which is nonetheless hard to change, since under the status quo power goes with wealth. Jeremy Bentham in 1792 called natural rights "nonsense upon stilts," and Sen notes this line of dismissal is still alive today, when people question how a right can exist in the absence of legislation. For Bentham, a right requires the existence of punitive treatment for those who abrogate it. Sen says the correct way of thinking about this is utility-based ethics, not an examination of foundational grounds. For him, this means an ethics that makes room for the significance of human rights and human freedom.

If human rights are a legitimate idea, how are they useful for poverty eradication? Moral rights are often the basis of legislation, as with the inalienable-rights basis of the American Constitution and Bill of Rights. The Universal Declaration of Human Rights (its 60th anniversary falls in 2008) inspired many countries to bring about this kind of legislative change. Quoting Herbert Hart, Sen notes that the concept of a right belongs to morality and is concerned with when one person's action may be limited by another – with what can appropriately be made "the subject of coercive human rules." In this way Sen provides a motivation for legislation. Sen also points out that the ethics of human rights motivates monitoring the behavior of the powerful and of governments, as Human Rights Watch, Médecins Sans Frontières, Amnesty International, and many others do.

Sen relates King and Gandhi in their calls for peaceful protest as a means of enacting social reform. Sen believes religion plays a large part in social reform (Sen is an atheist, but King invoked God frequently), though he says the argument does not rest on its religious components. Following King, Sen discusses the story of Jesus and the Good Samaritan and boils it down to the question of how a neighbor is defined. In the story, Jesus answers a lawyer's limited conception of duty to one's neighbor using strictly secular reasoning: Jesus tells the lawyer of a wounded man in need who was eventually helped by the Good Samaritan, then asks, when this is over and the wounded man reflects on it, who was his neighbor? The lawyer answers that the man who helped him is the neighbor, which is Jesus's point. Using this understanding of the story, Sen concludes that the motivation to treat others as equals is not what matters – what matters is that in the process a new neighborhood has been created. Sen says this understanding of justice is common and pervasive, since we are linked to each other in myriad (and growing) ways. "The boundaries of justice grow ever larger in proportion to the largeness of men's views." Shared problems can unite rather than divide.

Sen concludes that no theory of human rights can ignore a broad understanding of human presence and nearness. We are connected through work, trade, science, literature, sympathy, and commitment. This is an inescapably central engagement of the theory of justice. Poverty is a global challenge, and there are few non-neighbors left in the world today.

To whom do these human rights apply? Obviously everyone. Quoting Martin Luther King's 1963 speech at the Lincoln Memorial, Sen invokes "the fierce urgency of now" to "make good on the promises of democracy" and to make "justice a reality for all of God's children."

Crossposted on I&D Blog

Do you Know Where Your News Is? Predictions for 2013 by Media Experts

Jonathan Zittrain, co-founder of the Berkman Center, is moderating a panel on the future of news at Berkman’s Media Re:public Forum. The panelists were given two minutes and gave us some soundbites.

Paul Steiger is Editor-in-Chief of ProPublica, a nonprofit with 25 journalists created to fill the gap left by the country's shrinking newsrooms. Previously he was a Wall Street Journal managing editor for 16 years. At the WSJ, he remembers 15% of the budget being allocated to news and the rest to operations; now, at ProPublica, more than 60% of the budget goes to news. This is due to the web and how easy operations are now. Asked about his vision for 2013, he doesn't anticipate making money, since ProPublica's mandate is to remain a nonprofit and not to sell advertising.

Jonathan Taplin is a Professor at USC Annenberg and a former producer of films with Bob Dylan and Martin Scorsese. He worries 2013 might bring commercial overload, not just information overload. He agrees with David Weinberger that the struggle will be over metadata. He sees an advance of the commoditizing of freedom – social networks mine information about you even though they seem free. So he sees an eventual Facebook/MySpace-type polarization across the web, where some users are in an ad-free world they pay for and others in a free world full of ads. These become two separate worlds that don't interact.

Jennifer Ferro is Assistant General Manager and Executive Producer of Good Food at KCRW. She sees a convergence of devices and platforms in which devices become less relevant. She doesn't think people are going to carry radios; the internet will become pervasive, with a backbone of media sites people primarily visit.

Jonathan Krim is Director of Strategic Initiatives at Washingtonpost.Newsweek Interactive. He thinks the traditional storytelling model, based on objectivity, will be abandoned, with journalists seeking to attribute all points of view to others. He sees the blogosphere, television, and some print pioneers creating spaces where reporters are free to write what they know – where the quality of the reporters matters and considering the other side matters. This means we will approach something closer to a press whose outlets report along certain identifiable lines. Krim believes this scenario enhances the credibility of journalists and allows for wider sourcing and more public participation.

Lisa Williams, of Placeblogger.com, sees shorter job tenure and a greater number of popular journalists, rather than a cabal of a few. This gives wider breadth to the stories, and more depth: for example, 6,000 amputee soldiers have returned from Iraq – but how many have been fitted with prosthetics? Important questions like this would be tough to answer in a traditional newsroom, but in 2013 the media will be capable of answering them.

David Cohn, of digidave.org and Newstrust.net, has two mantras: 1) the future is open and distributed, and 2) journalism is a process, not a product. Cohn sees these converging in the question: how does the process become more open and distributed? He wants newspapers to be more like a public library, a source of information about your community. Following ideas in Richard Sambrook's talk last night, he wants content to be open and distributed through networked journalism.

Jon Funabiki is a Professor of Journalism at San Francisco State University. He thinks dialog in 2013 will center around our passions. He sees three trends: 1) increasing demographic diversity in the US and increasing globalization; 2) an explosion of ethnic new media from identity-based communities; 3) the increasing practice of community-based organizations using new media tools, like journalistic narrative storytelling, designed to move services to communities. So he wants to couple old media with new community-produced media, since it all contributes to the ongoing civic dialog.

Solana Larsen is managing editor of Global Voices and previously a commissioning editor at Open Democracy. She is worried about journalistic integrity – journalists interviewing journalists who are on the scene, reporting secondhand information with an aura of knowledgeability. She wants journalists to talk to local people and be honest with their audiences about how much they really know about the topic. She thinks that by 2013 there will be no foreign correspondents; news will be reported by people who understand the local context and culture.

Crossposted in I&D Blog

Media Re:public Forum Panel on Participatory Media: Defining Success, Measuring Impact

Margaret Duffy is a Professor at the University of Missouri School of Journalism, speaking at Berkman's Media Re:public Forum. She leads a Citizen Media Participation project to create a taxonomy of news categories and get a sense of the state of citizen media by sampling news across the nation. They are interested in where the funding is coming from, the amount of citizen participation, and getting an idea of what the content is. They are also creating a social network called NewNewsMedia.org, connecting seekers and posters to bring together people interested in the same sorts of things.

She's sampled the country by local region and found, for example, that Richmond, Virginia, is a hotbed of citizen journalism and blogging, with unique methods of bloggers connecting to each other. This suggests that blogging and citizen media remain largely local phenomena. Across the country, they were surprised by how un-participatory the sites were; for example, there often isn't much capability to upload to these sites. She suggests this is because gatekeeping seems very important, and blogs tend to be tightly controlled by their authors. They have also seen a lot more linking outside of sites, and many blogs are trying to sell advertising (with highly varying levels of success).

The driving force behind the project is the idea, from a social capital standpoint, that strong community connections make a difference to how a community survives in a democratic process. Her results on the local nature of citizen media suggest a more traditional notion of what a community is. Ethan Zuckerman responds that a community can define itself by local geography or around subject matter, and suggests (referencing the talk below) that we are developing new metrics for monetizing sites based on reaching the right community, so how we define the community is important for the sustainability of websites.

Duffy is followed by Carol Darr, director of the Institute for Politics, Democracy and the Internet (IPDI) at George Washington University. She discusses the "Media Habits of Poli-fluentials," building on the book The Influentials by Ed Keller and Jon Berry. The idea is that one person in ten tells the other nine how to vote, where to eat, and so on. The interesting thing Darr notes is that poli-fluentials (her term) are not elites in the traditional sense, but local community leaders and ordinary folk who appear knowledgeable to their peers. People who seem to know a lot of people tend to be poli-fluentials.

In a study she published at www.ipdi.com, the internet users whom political campaigners had traditionally not focused on turn out to be the most active and most connected people in their local communities. So now campaigns and news media understand their audiences differently. If you read a newspaper or watch Sunday morning talk shows and PBS, you are more likely to be a poli-fluential (roughly double the odds). Interestingly, purchasing political paraphernalia online increases your odds of being a poli-fluential about five-fold, as does joining political groups and actively emailing representatives. But the kicker: self-declared independents who made a political contribution are 80 times more likely to be poli-fluentials.
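
For readers unfamiliar with the arithmetic, multipliers like these are odds ratios from a 2x2 table of behavior versus poli-fluential status. A minimal sketch with invented counts:

```python
# Minimal sketch of an odds ratio from a 2x2 table. The counts are
# invented for illustration, not taken from Darr's study.
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """a, b: poli-fluential yes/no among people with the behavior;
    c, d: poli-fluential yes/no among people without it."""
    return (a / b) / (c / d)

# Hypothetical counts for "reads a newspaper":
print(odds_ratio(a=120, b=380, c=60, d=440))  # ~2.3, i.e. roughly doubled odds
```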

Can we find sustainable funding models for citizen journalism? She suggests the poli-fluentials are the ones for advertisers to target, since their opinions are the ones that filter out influentially into the community, and that is where you get the most bang for your advertising buck.

In the panel discussion following the talk, Marc Cooper of the Huffington Post, also a USC professor, comments on how much it matters who is reading his site. He wants to maximize this number rather than target the poli-fluentials; impact is whether people are reading the stories, whether they filter into the broader media, and whether they spawn debate. Clint Ivy from Fox Interactive Media suggests that you need to decide whether your goal is to make money or not, and the appropriate metric flows from this. He uses the number of comments per post to measure influence; others might just decide whether or not they get a sense of satisfaction from blogging.

Dan Gillmor, another Berkman fellow and Director of the Walter Cronkite School of Journalism and Mass Communication at ASU, reframes the problem as one of finding the right things to measure: how do you get a handle on the community mailing list that never bubbles out beyond the community? He thinks these things are enormously valuable and get overlooked. Ethan Zuckerman of GlobalVoices, another Berkman fellow, is concerned about agenda setting and whether the right stories are coming up onto the front page; he worries that the numbers tend to reflect raw attention rather than whether the stories are important and underheard. It is easy to get many hits on your blog by picking a sensational story, but having hundreds of the right readers reading the right story is tough to measure. Marc Cooper questions whether any of these questions are new in the digital age or just a rehashing of the same questions journalists have always faced.

Crossposted at I&D Blog

John Kelly: Parsing the Political Blogosphere

John Kelly is a doctoral student at Columbia’s School of Communications and a startup founder (Morningside Analytics), as well as a collaborator with Berkman. He’s speaking at Berkman’s Media Re:public Forum.

Kelly says he takes an ecosystem approach to studying the blogosphere, since he objects to dividing research on society into cases and variables when society is an interconnected whole. (This isn’t quite right: there are basic statistical methods built on variables and cases that are designed specifically to take interconnections into account.) What he presents today is a graphical tool for describing the blogosphere.

Kelly shows a map of the entire blogosphere and a map of the outlinks from the blogosphere. Every dot is a blog, and any blogs that link to each other are pulled together, so the map looks like clusters and neighborhoods of blogs. The plot is somewhat clustered, but there is an enormous amount of interlinking (my apologies for not posting pictures; I don’t think this talk is online). The outlink map shows links from blogs to other sites: the New York Times is the most frequently linked to, and thus the largest dot on that map.
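Kelly’s maps presumably come from his own tooling, but the basic construction he describes (blogs as nodes, links as edges, a force-directed layout that pulls linked blogs together, and dots sized by how heavily a site is linked to) can be sketched in a few lines of Python with networkx; all blog names and links below are invented for illustration:

    # Sketch of a blog-link map in the style Kelly describes.
    # Blog names and links are made up for illustration.
    import networkx as nx
    import matplotlib.pyplot as plt

    links = [
        ("blogA", "blogB"), ("blogA", "blogC"), ("blogB", "blogC"),  # one neighborhood
        ("blogD", "blogE"), ("blogE", "blogF"),                      # another neighborhood
        ("blogC", "blogD"),                                          # a bridge between them
    ]
    G = nx.Graph(links)

    # Spring (force-directed) layout: linked nodes attract, unlinked nodes repel,
    # so densely interlinked blogs settle into visible clusters.
    pos = nx.spring_layout(G, seed=42)

    # Size each dot by its degree, loosely mimicking how heavily linked
    # sites (like nytimes.com on the outlink map) appear as the biggest dots.
    sizes = [300 * G.degree(n) for n in G]
    nx.draw(G, pos, with_labels=True, node_size=sizes)
    plt.show()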

Kelly compares maps for five different language blogospheres: English, Persian, Russian, Arabic, and the Scandinavian languages. Russian has very separate clusters, and the other languages get progressively more interconnected. In the Persian example, Kelly has found distinct clusters of expat bloggers, poetry bloggers, and religiously conservative bloggers concerned with the 12th Imam, as well as clusters of modern and moderately traditional religious and political bloggers. Kelly suggests this is a more disparate and discourse-oriented picture than we might have expected.

In the American blogosphere, Kelly notes that bloggers tend to link overwhelmingly to other blogs that are philosophically aligned with their own. He shows an interesting plot of the Obama, Clinton, and McCain blogospheres’ linking patterns to other sites such as think tanks and particular YouTube videos.

Kelly also maps a URL’s salience over time: mainstream media articles peak quickly and are sometimes overtaken by responses, but Wikipedia articles keep getting consistent hits over time.

The last plot he shows is a great one of the blogs of the people attending this conference (and their organizations), with dots sized by how much the attendees link to each site. The five biggest dots are all mainstream media sites; filtering those out leaves GlobalVoices as the blog people mainly link to.

Crossposted on I&D Blog

David Weinberger: How new technologies and behaviors are changing the news

David Weinberger is a fellow and colleague of mine at the Berkman Center and is at Berkman’s Media Re:public Forum discussing the difference the web is making to journalism: “what’s different about the web when it comes to media and journalism?”

Weinberger is concerned with how we frame this question. He prefers ‘ecosystem’ over ‘virtue of discomfort’ since it gets at the complexity and interdependence in online journalism, but the ecosystem analogy is too apt, too comforting, and too all-encompassing, so he pushes further. He doesn’t like the ‘pro-amateur’ framing since it focuses too much on money as the key difference between web actors, and yet somehow seems to understate the vast disparity in money and funding. The idea of thinking of news as creating a better-informed citizenry, so that we get a better democracy, doesn’t go far enough: Weinberger notes that people read the news for more reasons than this.

So he settles on ‘abundance’ as a frame, because control doesn’t scale, which is something online media is currently grappling with. “Abundance of crap is scary, but abundance of good stuff is terrifying!” The key question is how to deal with this. We are no longer in a battle over the front page, since other ways of getting information are becoming more salient; for example, Weinberger notes that “every tag is a front page,” and email recommendations often become our front page. He sees this translating into a battle over metadata (the front page is metadata, authority is metadata) rather than a struggle over content creation. So we create new tools to handle metadata, in order to show each other what matters and how it matters: tools such as social networks and the semantic web. All these tools unsettle knowledge and meaning (an unsettledness that was always there but had not been obvious).

Crossposted on I&D Blog

Implementing a Human Rights Policy at the World Bank

Galit Sarfaty gave a talk at Harvard Law School today titled “Why Culture Matters in International Institutions: The Marginality of Human Rights at the World Bank.” Sarfaty obtained her JD from Yale and is a lawyer and anthropologist. She is a visiting fellow at Harvard Law School’s Human Rights Program and is writing her dissertation based on four years of fieldwork at the World Bank. She is studying why no mandate for human rights has been incorporated into the organizational culture at the Bank.

She sees the reason as a clash in ideology between the human rights people, who are largely the lawyers, and the economists. Economists dominate the Bank, hold the most powerful positions, and have a unique and prestigious research group. Sarfaty also notes that the World Bank’s articles of agreement explicitly state that only economic considerations can be taken into account in its decision making. The World Bank is the largest lender to developing countries, at $30 billion per year. Sarfaty notes that its mission is poverty reduction, and this gives a crack through which supporters of a World Bank policy on human rights can work. She suggests three reasons she would have expected the World Bank to have implemented a human rights policy:

1) peer institutions like UNICEF, UNDP, DFID, have one,
2) the Bank is subject to external pressure by NGOs and internal pressure from employees,
3) even banks in the private sector have human rights frameworks. The IFC (the World Bank’s commercial lending arm) has a human rights framework based on risk management.

Sarfaty thinks the World Bank’s legal mandate has become less salient in recent years, but now bureaucrats stand in the way.

She has conducted about 70 interviews over four years at the Bank in Washington DC, and found that professional identity is the source of conflict within the bureaucracy, with economists dominating at the Bank. Within the Bank, lawyers are seen as technocrats who aren’t directly involved in projects, and the legal department has a culture of secrecy because of this.

She concludes that the goal is to frame human rights issues for economists, rather than playing to the perception that human rights is a political issue: presenting empirical data on how human rights advance human development, and thus are a relevant concern for the Bank within its poverty eradication mandate. The Bank is also creating a new indicator that measures human rights performance, not just legal compliance with contracts. Another avenue she suggests is exploiting the rigidity of some of the guidelines for working with countries: human rights could be a lever to incrementally convince the Bank to be more flexible, rather than a constraint on lending.

Sarfaty makes this sound like a tough road, especially when she explains that no explicit policy on human rights has ever been put forward at the World Bank, because the board of directors includes seats held by countries such as China and Saudi Arabia. She sees the only option as working through the staff level.

Crossposted on I&D Blog