Archive for the 'Conferences' Category

Science and Video: a roadmap

Once again I find myself in the position of having collected slides from talks, and having audio from the sessions. I need a simple way to pin these together so they form a coherent narrative, and I need a common sharing platform. We don’t really have to see the speaker to understand the message, but we need the slides and the audio to play in tandem, with the slides changing at the correct points. Some of the files are quite large: slide decks can be over 100MB, and right now the audio file I have is 139MB (SlideShare has size limits that don’t accommodate this).
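For concreteness, here is a minimal sketch of the kind of tool I have in mind, assuming ffmpeg is installed and the deck has been exported as one image per slide; the file names, timings, and lengths below are invented for illustration.

```python
#!/usr/bin/env python
"""Pin a slide deck to a talk's audio track using ffmpeg's concat demuxer.

Assumes ffmpeg is on the PATH, the slides are exported as PNG images,
and you know roughly when each slide change occurs.
"""
import subprocess

slides = ["slide01.png", "slide02.png", "slide03.png"]  # hypothetical deck
change_times = [0, 95, 260]   # seconds into the talk when each slide appears
audio = "talk_audio.mp3"      # hypothetical session recording
total_length = 420            # length of the audio, in seconds

# The concat demuxer reads a playlist of images with per-image durations.
with open("slides.txt", "w") as f:
    for i, slide in enumerate(slides):
        end = change_times[i + 1] if i + 1 < len(slides) else total_length
        f.write(f"file '{slide}'\n")
        f.write(f"duration {end - change_times[i]}\n")
    # concat quirk: the last file must be repeated, without a duration.
    f.write(f"file '{slides[-1]}'\n")

# Mux the timed slides and the audio into one shareable video file.
subprocess.run([
    "ffmpeg", "-f", "concat", "-safe", "0", "-i", "slides.txt",
    "-i", audio, "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "-c:a", "aac", "-shortest", "talk.mp4",
], check=True)
```

The part no script can supply is the list of slide-change times; making those easy to capture is exactly what the missing authoring tool should do.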

I’m writing because I feel the messages are important and need to be available to a wider audience. These talks often embody our culture, our heritage, our technology, our scientific knowledge, and our shared understanding. They should be available not just on principled open access grounds; it is also imperative that other scientists hear these messages, amplifying scientific communication.

At a bar the other night a friend and I came up with the idea of S-SPAN: a C-SPAN for science. Talks and conferences could be filmed and shared widely on an internet platform. Of course such platforms exist, and some even target scientific talks, but the content also needs to be marshalled and directed onto the website. Some of the best stuff I’ve ever seen has floated into the ether.

So, I make an open call for these two tasks: a simple tool to pin together slides and audio (and slides and video), and an effort to collate video from scientific conference talks and to film them where video doesn’t exist, all onto a common distribution platform. S-SPAN could start as raw and underproduced as C-SPAN, but I am sure it would develop from there.

I’m looking at you, YouTube.

My Symposium at the AAAS Annual Meeting: The Digitization of Science

Yesterday I held a symposium at the AAAS Annual Meeting in Washington DC, called “The Digitization of Science: Reproducibility and Interdisciplinary Knowledge Transfer,” that was intended to bring attention to how massive computation is changing the practice of science, particularly the lack of reproducibility of published computational scientific results. The fact is, most computational scientific results published today are unverified and unverifiable. I’ve created a page for the event here, with links to slide decks and abstracts. I couldn’t have asked for a better symposium, thanks to the wonderful speakers.

The first speaker was Keith A. Baggerly, who (now famously) tried to verify published results in Nature Medicine and uncovered a series of errors that led to the termination of clinical trials at Duke that were based on the original findings, and to the resignation of one of the investigators (his slides). I then spoke about policies for realigning the IP framework scientists operate under with their longstanding norms, to permit sharing of code and data (my slides). Fernando Perez described how computational scientists can learn not only about code sharing, quality control, and project management from the open source software community, but how that community has in fact developed what is in effect a deeply successful system of peer review for code. Code is verified line by line before being incorporated into the project, and there are software tools to enable communication between reviewer and submitter, down to the line of code (his slides).

Michael Reich then presented GenePattern, an OS-independent tool developed with Microsoft for creating data analysis pipelines and incorporating them into a Word document. Once in the document, tools exist to click and recreate the figure from the pipeline and examine what’s been done to the data. Robert Gentleman advocated the entire research paper as the unit of reproducibility, and David Donoho presented a method for assigning a unique identifier to figures within a paper, which creates a link for each figure and permits its independent reproduction (the slides). The final speaker was Mark Liberman, who showed how the human language technology community had developed a system of open data and code in their efforts to reduce errors in machine understanding of language (his slides). All the talks pushed on delineations of science from non-science, and the theme was probably best encapsulated by a quote Mark introduced from John Pierce, a Bell Labs executive, in 1969: “To sell suckers, one uses deceit and offers glamor.”

There was some informal feedback, with a prominent person saying that this session was “one of the most amazing set of presentations I have attended in recent memory.” Have a look at all the slides and abstracts, including links and extended abstracts.

Update: Here are some other blog posts on the symposium: Mark Liberman’s blog and Fernando Perez’s blog.

Code Repository for Machine Learning: mloss.org

The folks at mloss.org — Machine Learning Open Source Software — invited a blog post on my roundtable on data and code sharing, held at Yale Law School last November. mloss.org’s philosophy is stated as:

“Open source tools have recently reached a level of maturity which makes them suitable for building large-scale real-world systems. At the same time, the field of machine learning has developed a large body of powerful learning algorithms for a wide range of applications. Inspired by similar efforts in bioinformatics (BOSC) or statistics (useR), our aim is to build a forum for open source software in machine learning.”

The site is excellent and worth a visit. The guest blog Chris Wiggins and I wrote starts:

“As pointed out by the authors of the mloss position paper [1] in 2007, “reproducibility of experimental results is a cornerstone of science.” Just as in machine learning, researchers in many computational fields (or in which computation has only recently played a major role) are struggling to reconcile our expectation of reproducibility in science with the reality of ever-growing computational complexity and opacity. [2-12]

In an effort to address these questions from researchers not only from statistical science but from a variety of disciplines, and to discuss possible solutions with representatives from publishing, funding, and legal scholars expert in appropriate licensing for open access, Yale Information Society Project Fellow Victoria Stodden convened a roundtable on the topic on November 21, 2009. Attendees included statistical scientists such as Robert Gentleman (co-developer of R) and David Donoho, among others.”

Keep reading at http://mloss.org/community/blog/2010/jan/26/data-and-code-sharing-roundtable/. We made a point of referencing related work on reproducibility in computational science from other fields.

What's New at Science Foo Camp 2009

SciFoo is a wonderful annual gathering of thinkers about science. It’s an unconference and people who choose to speak do so. Here’s my reaction to a couple of these talks.

In Pete Worden’s discussion of modeling future climate change, I wondered about the reliability of simulation results. Worden conceded that there are several models making the same predictions he showed, and that they can give wildly opposing results. We need to develop the machinery to quantify error in simulation models just as we routinely do for conventional statistical modeling: simulation is often the only empirical tool we have for guiding policy responses to some of our most pressing issues.
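To make that concrete, here is a minimal sketch of the kind of machinery I mean: run an ensemble over the uncertain inputs and report an interval rather than a single number. The toy model and the parameter priors are invented for illustration, not taken from anything Worden showed.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_climate_model(sensitivity, forcing, years=50):
    """A stand-in, one-line 'simulation'; any real model would go here.
    The point is only the ensemble wrapper around it."""
    return sensitivity * forcing * (years / 100.0)

# Sample the uncertain inputs instead of fixing them at single best guesses.
n_runs = 10_000
sensitivity = rng.normal(3.0, 0.8, n_runs)  # hypothetical prior, degrees C
forcing = rng.normal(1.0, 0.2, n_runs)      # hypothetical prior, relative units

warming = toy_climate_model(sensitivity, forcing)

# Report an interval, not a point: the analogue of a confidence interval
# in conventional statistical modeling.
lo, mid, hi = np.percentile(warming, [5, 50, 95])
print(f"50-year warming: {mid:.2f} C (90% ensemble interval {lo:.2f} to {hi:.2f} C)")
```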

But the newest idea I heard was Bob Metcalfe’s call for us to imagine what to do with a coming overabundance of energy. Metcalfe likened solving energy scarcity to the early days of Internet development: because of the generative design of Internet technology, we now have things that were unimagined in the early discussions, such as YouTube and online video. According to Metcalfe, we need to envision our future as including a “squanderable abundance” of energy, and use Internet lessons such as standardization and distribution of power sources to get there, rather than building for energy conservation.

Cross posted on The Edge.

Bill Gates to Development Researchers: Create and Share Statistics

I was recently in Doha, Qatar, presenting my research on global communication technology use and democratic tendency at ICTD09. I spoke right before the keynote, Bill Gates, whose main point was that when you engage in a goal-oriented activity, such as development, progress can only be made when you measure the impact of your efforts.

Gates paints a positive picture, measured by deaths before age 5. In the 1880s, he says, about 30% of children died before their fifth birthday in most countries; the annual number of such deaths gradually fell to 20 million in 1960 and then 10 million in 2006. Gates attributes this to rising income levels (40% of the decrease) and medical innovations such as vaccines (60% of the decrease).

This is an example of Gates’ mantra: you can only improve what you can measure. For example, an outbreak of measles tells you your vaccine system isn’t functioning. In his example about childhood deaths, he says we are getting somewhere here because we are measuring the value for money spent on the problem.

Gates thinks the wealthy in the world need to be exposed to these problems, ideally through intermingling or, since that is unlikely to happen, through statistics and data visualization. Collect data, then communicate it. In short, Gates advocates creating statistics by measuring development efforts, and changing the world by exposing people to these data.

Stuart Shieber and the Future of Open Access Publishing

Back in February Harvard adopted a mandate requiring its faculty members to make their research papers available within a year of publication. Stuart Shieber is a computer science professor at Harvard and was responsible for proposing the policy. He has since been named director of Harvard’s new Office for Scholarly Communication.

On November 12 Shieber gave a talk entitled “The Future of Open Access — and How to Stop It” to give an update on where things stand after the adoption of the open access mandate. Open access isn’t just something that makes sense from an ethical standpoint: Shieber points out that (for-profit) journal subscription costs have risen out of proportion with inflation and out of proportion with the costs of nonprofit journals. He notes that the cost per published page in a commercial journal is six times that of the nonprofits. With the current library budget cuts, open access — meaning both access to articles directly on the web and shifting subscriptions away from for-profit journals — appears financially unavoidable.

Here’s the business model for an Open Access (OA) journal: authors pay a fee upfront in order for their paper to be published. Then the issue of the journal appears on the web (possibly also in print) without an access fee. Conversely, traditional for-profit publishing doesn’t charge the author to publish, but keeps the journal closed and charges subscription fees for access.

Shieber recaps Harvard’s policy:

1. The faculty member grants permission to the University to make the article available through an OA repository.

2. There is a waiver for articles: a faculty member can opt out of the OA mandate at his or her sole discretion. For example, if you have a prior agreement with a publisher you can abide by it.

3. Authors deposit their articles in the repository themselves.

Shieber notes that the policy is also valuable because it allows Harvard to make a collective statement of principle, to systematically provide metadata about articles, and to clarify the rights accruing to each article; it allows the university to facilitate the article deposit process and to negotiate collectively; and having the mandate be opt-out rather than opt-in might increase rights retention at the author level.

So the concern Shieber set up in his talk is whether standards for research quality and peer review will be weakened. Here’s how the dystopian argument runs:

1. all universities enact OA policies
2. all articles become OA
3. libraries cancel subscriptions
4. prices go up on remaining journals
5. these remaining journals can’t recoup their costs
6. publishers can’t adapt their business model
7. so the journals, and the logistics of peer review they provide, disappear

Shieber counters this argument: 1 through 5 are good because journals will start to feel some competitive pressure. What would be bad is if publishers cannot change their way of doing business. Shieber thinks that even if this is so it will have the effect of pushing us towards OA journals, which provide the same services, including peer review, as the traditional commercial journals.

But does the process of getting there cause a race to the bottom? The argument goes like this: since OA journals are paid by the number of articles published, they will just publish everything, thereby destroying standards. Shieber argues this won’t happen because there is price discrimination among journals – authors will pay more to publish in the more prestigious journals. For example, PLOS charges about $3k per article, BioMed Central about $1000, and Scientific Publishers International $96. Shieber also argues that Harvard should have a fund to support faculty who wish to publish in an OA journal and have no other way to pay the fee.

This seems to imply that researchers with sufficient grant funding, or those covered by his proposed Harvard publication fee subsidy, would be immune to the fee pressure and would simply submit to the most prestigious journal and work their way down the chain until their paper is accepted. This also means that editors and reviewers decide what constitutes the best scientific work by determining acceptance.

But is democratic representation in science a goal of OA? Missing from Shieber’s described market for scientific publications is any kind of feedback from the readers. The content of these journals, and the determination of prestige, is defined solely by the editors and reviewers. Maybe this is a good thing. But maybe there’s an opportunity to open this up by allowing readers a voice in the market. This could be done through ads or a very small fee on articles – both would give OA publishers an incentive to respond to the preferences of readers. Perhaps OA journals should be commercial in the sense of profit-maximizing: they would then have a reason to listen to readers and might be more effective at maximizing their prestige level.

This vision of OA publishing still effectively excludes researchers who are unable to secure grants or are not affiliated with a university that offers a publication subsidy. The dream behind OA publishing is that everyone can read the articles, but to fully engage in the intellectual debate, quality research must still find its way into print, and at the appropriate level of prestige, regardless of the affiliation of the researcher. This is the other side of OA that is very important for researchers from the developing world or thinkers whose research is not mainstream (see, for example, Garrett Lisi, a high-impact researcher who is unaffiliated with an institution).

The OA publishing model Shieber describes is a clear step forward from the current model, where journals are accessible only to affiliates of universities that have paid the subscription fees. It might be worth continuing to move toward an OA system where not only can anyone access publications, but any quality research can be published, regardless of the author’s affiliation and wealth. To get around the financial constraints, one approach might be to allow journals to fund themselves through ads, or to provide subsidies to certain researchers. This also opens up the question of who decides what is quality research.

A2K3: Connectivity and Democratic Ideals

Also in the final A2K3 panel, The Global Public Sphere: Media and Communication Rights, Natasha Primo, National ICT policy advocacy coordinator for the Association for Progressive Communications, discusses three questions that happen to be related to my current research: 1) Where is the global in the global public sphere? 2) Who is the public in the global public sphere? and 3) How do we get closer to the promise of development and the practice of democratic values and freedom of expression?

She begins with the premise that we are in an increasingly interconnected world, in economic, political, and social spheres, and you will be excluded if you are not connected. She also asserts the premise that connection to the internet can lead to the opening of democratic spaces and – in time – a true global public sphere.

Primo, like Ó Siochrú in my blog post here, doesn’t see any global in the global public sphere. She thinks this is just a matter of timing, and not a systematic problem. She notes that the GSM organization predicts 5 billion people on the GSM network by 2015, whereas we now have 1 of 6 billion people connected to the internet. Note that Primo believes internet access will come through the cell phone for many people who are not connected today. She refers us to Richard Heeks’ proposal for a Blackberry-for-development. Heeks is professor and chair of the Development Informatics Department at the University of Manchester. But Primo sees cost as the major barrier to connectivity in LDCs, and thinks this will abate over time.

With regard to the cost of connectivity, she notes that Africa has a 10% internet subscription rate, versus a higher rate in Asia-Pacific and 72% in Europe. South Africa is planning an affordable broadband campaign: to have some facilities declared ‘essential’ so as to make them available to the public at cost to the service providers. This comes from the A2K idea of partnership for higher education in Africa – African universities are to have cheaper access. She also sees authoritarian behavior by states as another obstacle to connectivity. She cites research by our very own OpenNet Initiative showing that 24 of the 40 countries studied are filtering the internet and using blocking tools to prevent freedom of expression, for example by blocking blogging sites and YouTube. She is worried about how this behavior by governments affects people’s behavior when they are online. She notes surveys that show two extreme reactions: people either practice substantial self-censorship or put their lives on the line for the right to express an opinion.

Primo notes the cultural obstacles to the global public sphere. She relates a story that some groups are not accustomed to hearing opinions that diverge from their own and will, innocently, flag them as inappropriate content. Dissenting opinions come back online after a short amount of time, but with the delay dialogue can be lost.

A2K3: Communication Rights as a Framework for Global Connectivity

In the last A2K3 panel, entitled The Global Public Sphere: Media and Communication Rights, Seán Ó Siochrú made some striking statements based on his experience building local communication networks in undeveloped areas of LDCs. He states that the global public sphere is currently a myth, and what we have now is elites promoting their self-interest. He criticizes the very notion of the global public sphere – he wants a more dynamic and broader term that reflects the deeper issues involved in bringing about such a global public sphere. He prefers to frame the issue in terms of communication rights. By this he means the right to seek and receive ideas, to generate ideas and opinions of one’s own, to speak these ideas, a right to be heard, and a right to have others listen. These last two rights Ó Siochrú dismisses as trivial, but I don’t see that they are. Each creates a demand on others’ time that I don’t see how to effectuate within the framework of respect for individual autonomy Balkin elucidated in his keynote address, discussed in my recent blog post and on the A2K blog.

Ó Siochrú also makes an interesting point: if we are really interested in facilitating communication and connection between and by people who have little connectivity today, we would do best to concentrate on technologies such as radio, email, mobile phones, television, or whatever works at the local level. He eschews blogs, and the internet generally, as the least accessible, least affordable, and least usable.

A2K3: Opening Scientific Research Requires Societal Change

In the A2K3 panel on Open Access to Science and Research, Eve Gray, from the Centre for Educational Technology, University of Cape Town, sees the Open Access movement as a real societal change. Accordingly she shows us a picture of Nelson Mandela and asks us to think about his release from prison and the amount of change that ushered in. She also asks us to consider whether Mandela is an international person or a local person. She sees a parallel between how South African society changed with Mandela and the change people are advocating toward open access to research knowledge. She shows a worldmapper.org map with countries distorted by their number of (copyrighted) scientific research publications. South Africa looks small. She blames this on South Africa’s willingness to uphold colonial traditions in copyright law and norms of knowledge dissemination. She says this happens almost unquestioningly: to rise in the research world in South Africa you are expected to publish in ‘international’ journals – the prestigious journals are not South African, she says. (I am familiar with this attitude from my own experience in Canada. The top American journals and schools were considered the holy grail. When I asked about attending a top American graduate school I was laughed at by a professor and told that maybe it could happen, if perhaps I had an Olympic gold medal.) She states that for real change in this area to come about, people have to recognize that they must mediate a “complex meshing” of policies – at the university level, the various government levels, the level of norms, and the individual scientist level – just as Mandela had to mediate a large number of complex policies at a variety of different levels to bring about the change he did.

Legal Barriers to Open Science: my SciFoo talk

I had an amazing time participating at Science Foo Camp this year. This is a unique conference: there are 200 invitees comprising some of the most innovative thinkers about science today. Most are scientists but not all – there are publishers, science reporters, scientific entrepreneurs, writers on science, and so on. I met old friends there and found many amazing new ones.

One thing that I was glad to see was the level of interest in Open Science. Some of the top thinkers in this area were there, and I’d guess at least half the participants are highly motivated by this problem. There were sessions on reporting negative results, the future of the scientific method, and reproducibility in science. I organized a session with Michael Nielsen on overcoming barriers in open science. I spoke about the legal barriers, and O’Reilly Media has made the talk available here.

I have papers forthcoming on this topic you can find on my website.

A2K3 Kaltura Award

I am honored and humbled to win the A2K3 Kaltura prize for best paper. Peter Suber posts about it here and gives the abstract. His post also includes a link to a draft of the paper, which can also be found here: Enabling Reproducible Research: Open Licensing For Scientific Innovation. I’d love comments and feedback although please be aware that since the paper is forthcoming in the International Journal of Communications Law and Policy it will very likely undergo changes. Thank you to Kaltura.com and the entire A2K3 committee. I’m very happy to be here in Geneva and enjoying every minute. :)

A2K3: Technological Standards are Public Policy

Laura DeNardis, executive director of Yale Law School’s Information Society Project, spoke during the A2K3 panel on Technologies for Access. She makes the point that many of our technological standards are being made behind closed doors by private, largely unaccountable parties such as ICANN, ISO, the ITU, and other standards bodies. She advocates the concept of Open Standards, which she defines in a three-fold way as open in development, open in implementation, and open in usage. DeNardis worries that without such protections in place stakeholders can be subject to a standard they were not a party to, and this can affect nations in ways that might not be beneficial to them, particularly in areas such as civil rights, and especially for less developed countries. In fact, Nnenna Nwakanma comments from the audience that even when countries appear to be involved, their delegations are often composed of private companies and are not qualified. For example, she says that only three countries in Africa have people with the requisite technical expertise on such state standards councils, and that the involvement process is far from transparent. DeNardis also mentions the Dynamic Coalition on Open Standards, designed to preserve the open architecture of the internet, with which the Yale ISP is involved in advocacy at the Internet Governance Forum. DeNardis powerfully points out that standards are very much public policy, as much as the regulation we typically think of as public policy.

A2K3: A World Trade Agreement for Knowledge?

Thiru Balasubramaniam, Geneva Representative for Knowledge Ecology International, presents a proposal (from a forthcoming paper by James Love and Manon Ress) for a WTO treaty on knowledge (so far all WTO agreements extend to private goods only). Since information is a public good (nonrival and nonexcludable), we will have a “market failure” if single countries act alone: hence the undersupply of global public goods. The WTO creates binding agreements, so an agreement covering public goods such as knowledge would create large collective benefits and high costs to acting against them. Such a WTO agreement would outline and influence norms. Why do this within the WTO? Because there are strong enforcement mechanisms there. Are we really undersupplying open and free knowledge? I can think of several scientific examples. Balasubramaniam doesn’t dig into what such an agreement would look like, and it seems quite complex, but thinking about it might provide a coherent framework for approaching free information issues globally.

A2K3: Access to Knowledge as a Human Right

Building on the opening remarks, the second panel addresses Human Rights and Access to Knowledge. Caroline Dommen, director of 3D, an advocacy group promoting human rights considerations in trade agreements, emphasizes the need for metrics: how can we tell how open countries are? She suggests borrowing from the experience with human rights measurement: for example, measuring the availability of a right, nondiscrimination in access, economic access (is it affordable?), and the acceptability or quality of the available good. She also suggests using the human rights approach of obligations to 1) respect, 2) protect, and 3) fulfill rights. There are corollary obligations: 1) non-discrimination 2) adequate process (including redress of violated rights) 3) participation 4) effective remedy.

Marisella Ouma, Kenyan team researcher for the African Copyright and Access to Knowledge Project, says that most African countries have had copyright laws since independence (starting with Ghana in 1957). She is concerned with the educational aspect of access to knowledge and the related results of the educational materials access index: the highest-ranking country is Egypt and the lowest is Mozambique. So why? What are the issues? Ouma notes that these countries have the laws but not strong policies: she asserts they need a copyright policy that acknowledges the basic fundamental right to education, so there isn’t a conflict between property rights and the right to access educational information. She is concerned that people don’t understand copyright law, and this makes advocacy of their rights difficult. She is also concerned that policy is not comprehensive enough: in Kenya and Uganda, for example, education policy is limited to basic education. She also describes the sad situation of there being billions of dollars available to build libraries but no money to stock them with information. Something is really wrong here. She notes that wireless internet is important for this, but how many people really have access? So how do they access the knowledge, she asks.

A2K3: Tim Hubbard on Open Science

In the first panel at A2K3, on the history, impact, and future of the global A2K movement, Tim Hubbard, a genetics researcher, laments that scientists tend to carry out their work in a closed way and thus very little data is released. In fact he claims that biologists used to deliberately mess up images so that they could not be reproduced! But apparently journals are more demanding now and this problem has largely been corrected (for example, Nature’s 2006 standards on image fraud). He says that openness in science needs to happen before publication, the traditional time when scientists release their work. But this is a tough problem. Data must be released in such a way that others can understand and use it. This parallels the argument made in the opening remarks about the value of net neutrality as preserving an innovation platform: in order for data to be used it must be open in the sense that it permits further innovation. He says we now have Open Genome Data, but privacy issues are pertinent: even summaries of the data can be back-solved to identify individuals. He asks for better encryption algorithms to protect privacy. In the meantime he proposes two other solutions. We could simply stop worrying about the privacy of our genetic data, just as we don’t hide our race or gender. Failing that, he wants to mine the UK’s National Health Service’s patient records through an “honest broker”: an intermediary that runs the programs and scripts researchers submit on the data. The data are hidden from the researcher and only accessed through the intermediary. Another problem this solves is the sheer size of the released data, which can prevent interested people from moving or analyzing it. This has broad implications, as Hubbard points out – the government could access its CCTV video recordings to find drivers who’ve let their insurance lapse, but not track other, possibly privacy-violating, aspects of drivers’ visible presence on the road. Hubbard is touching on what might be the most important part of the Access to Knowledge movement – how to make access meaningful without destroying incentives to be open.
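Hubbard didn’t spell out an implementation, but the honest-broker pattern is easy to sketch: the researcher’s code goes to the data, and only vetted aggregates come back. In this toy version the records, the fields, and the minimum group size are all invented for illustration.

```python
"""A toy 'honest broker': researchers submit analyses, never see the data."""

# The broker holds the sensitive records; researchers never see this list.
_PATIENT_RECORDS = [
    {"age": 64, "condition": "diabetes", "hba1c": 7.9},
    {"age": 51, "condition": "diabetes", "hba1c": 6.8},
    {"age": 47, "condition": "asthma", "hba1c": 5.4},
]

MIN_GROUP_SIZE = 2  # refuse answers that would describe too few people

def run_query(analysis):
    """Run a researcher-supplied analysis over the hidden records and
    return only aggregates (count and mean), never the rows themselves."""
    selected = [r for r in _PATIENT_RECORDS if analysis["filter"](r)]
    if len(selected) < MIN_GROUP_SIZE:
        raise ValueError("result could identify individuals; refused")
    values = [analysis["measure"](r) for r in selected]
    return {"n": len(selected), "mean": sum(values) / len(values)}

# A researcher's submission: code travels to the data, not the reverse.
result = run_query({
    "filter": lambda r: r["condition"] == "diabetes",
    "measure": lambda r: r["hba1c"],
})
print(result)  # {'n': 2, 'mean': 7.35}
```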

Access to Knowledge 3: Opening Remarks

I’m at my first Access to Knowledge conference in Geneva and I’ve never felt so important. Walking to the Centre International de Conférences in Geneva I passed the UN High Commission for Refugees, and I’m sitting in an enormous tiered conference room with translation headphones and plush leather chairs. Maybe I’m easily impressed, but this is really my first exposure to influencing policy through any means other than academic idea generation and publication. A2K3 is held literally across the street from the World Intellectual Property Organization‘s headquarters, and the focus is changing the global intellectual property policy landscape.

So that means there are more lawyers and activists here than I am used to seeing at the usual academic conferences. The introductory remarks reflect this: Sisule Musungu lists the multitude of groups involved, such as eIFL, EFF, and OSI. Google and Kaltura are the only corporate sponsors. Laura DeNardis, the executive director of the Information Society Project at Yale (the group primarily responsible for A2K3), is giving opening remarks. Laura makes the point that technical standards contain deep political stances on knowledge sharing and dissemination, so the debate isn’t just about regulation any more. This means A2K is not just about laws and treaties, but also about the nature of the communication technologies. Many of our discussions about net neutrality at Berkman note this fact, and in follow-up remarks Jack Balkin, the founding director of the Yale ISP, makes this observation. He states that the A2K movement brings attention to much of international trade law that flies under most people’s radar, especially how it impacts the free flow of information, particularly in developing countries. A2K is at its core about justice and human rights, since more and more wealth creation is coming from information tools in our information-driven world. This is clearly true: think of the success and power of Google – an information company. A2K is at least in part a reaction to the increasingly strong correlation between wealth and access to information. Balkin relates the FCC ruling preventing Comcast from discriminating between packets based on application or content, meaning that this movement is really about the decentralization of innovation: he states that without net neutrality, innovation would be dominated by a small number of firms who would only allow innovations that benefit them directly. The A2K movement is about bringing more minds to solve our greatest problems, and this also engenders a debate about control, most deeply the control people can effect over their own lives: “will people be the masters of themselves or will they be under the control of others?” The internet is a general purpose tool facilitating communication however people see fit, so the internet can be understood as a commons in that we can use it and build on it for our own self-determined purposes.

Vacations or "Vacations" :)

I’m here at the Global Voices Summit in Budapest and I just listened to a panel on Rising Voices, a group within Global Voices dedicated to supporting the efforts of people traditionally underrepresented in citizen media. (See their trailer here.) At the end of the panel, the question was asked: how can we help? The answer was perhaps surprising: although money is always welcome, what is needed is skills. Specifically, people with web design or IT skills can come and stay with a blogging community for a week or two and teach people how to do things like design a web page, display their wares online, and generally support people in computer use. So, it occurred to me that I know many people for whom travel and learning are very important, who are both skilled in IT and would find enormous satisfaction in having a purpose to their travel. I can put you in touch with people who might appreciate your skills, or you can reach Rising Voices directly. Another group that’s similar in spirit and might be able to facilitate this is Geek Corps.

Amartya Sen at the Aurora Forum at Stanford University: Global Solidarity, Human Rights, and the End of Poverty

This is a one-day conference to commemorate Martin Luther King’s 1967 speech at Stanford, “The Other America,” and to heed that speech’s call to create a more just world.

Mark Gonnerman, director of the Aurora Forum, introduces the event by noting that economic justice is the main theme of King’s legacy. He references King’s 1948 paper laying out his mission as a minister, in which his goal is to deal with unemployment, slums, and economic insecurity. He doesn’t mention civil rights. So the effect of Rosa Parks was to turn him in a different direction from his original mission – the gulf between rich and poor – to which he later returned. Gonnerman reminds us of the interdependence of global trade and how, even before we leave the house for work, we have used products from all parts of the globe, rich and poor. He quotes King that the agony of the poor enriches the rest.

Thomas Nazario, founding director of The Forgotten International, outlines the face of poverty. He lists the 5 problems in the UN Millennium Report as the charge for the coming generation:

1. global warming
2. world health, including basic health and pandemic avoidance
3. war and nuclear proliferation
4. protection of human rights
5. world poverty

He describes world poverty in two ways. The first is by focusing on the gap between rich and poor: he says there are about 1000 billionaires and claims their money could provide services to half the people on Earth. The second is to focus on the suffering associated with poverty. Nazario shows us some compelling images of poverty and busts some myths: children do go through garbage and fight rats and other vermin (usually dying before age 5); impoverished people tend to live around rivers, since the riverbank is common land (it floods regularly); and images of Ethiopia show the 1980s war, conflict, and famine (he notes that where there is extreme poverty, there is extreme fragility of life – any perturbation in the environment will cause death). He says 6 million children die before the age of 5 from hunger and lack of medical care. He also busts the myth that most of the poverty in the world is in Africa – it is in Asia, especially India. There are 39 million street children in the world, often living in sewers. Of course, poverty is a cause of illiteracy, not only because of the cost of education but because impoverished children usually work to survive.

Amartya Sen is Lamont University Professor and Professor of Economics and History at Harvard University. He won the 1998 Nobel Prize in economics, and I wrote a book review here of his book _Development as Freedom_. His talk has two components: he speaks first about global poverty and next about human rights. He begins by noting that hope for humanity, as Martin Luther King emphasized, is essential for these topics. Sen hopes the easily preventable deaths of millions of children is not an inescapable human condition, and that the fatalism about this in the developed world recedes. He also takes on the anti-globalization viewpoint by noting that globalization can be seen as a great contributor to world wealth. He insists globalization is a key component of reform, as there is an enormous positive impact from bringing people together, but the sharing of the spoils needs to be more equitable. Sen advocates a better understanding of economics to help us reform world development institutions, but with a caveat: “a market is as good as the company it keeps.” By this he means that circumstances such as the current distribution of resources or the ability of people to enter market transactions depend on things like the availability of healthcare and the existence of patent and contract laws conducive to trade.

Sen distinguishes short run and long run policies. In the long run the goal is to keep unemployment low in all countries (so for example he advocated government help in training and job location for Americans whose jobs have become obsolete due to technological progress). In the short run it is essential to have an adequate system of social safety nets that provide a minimum income, healthcare, and children’s schooling (which has long run effects of people’s adaptability in the workforce). Sen eschews economic stagnation and the rejection of economic reform.

Sen is very concerned that the fruits of globalization are not being justly shared and, even though globalization does bring economic benefit for all, he sees this inequality as the root of poverty. He also warns people not to rely on “the market outcome” as a way of washing their hands of the problem, since the outcome of the market relies on a number of factors, such as resource ownership patterns and various rules of operation (like antitrust and patent laws), that will give different prices and different degrees of income equality.

Sen, consistent with his hopeful theme, notes important things subject to reform and change:

1. an adequately strong global effort to combat lack of education and healthcare
2. improving existing patent laws and reduction of arms supply

For the first point, there is a need for further worldwide cooperation to combat illiteracy and provide other social services. Sen suggests immediate remedies such as halting the repression of exports from poor countries, and longer-term remedies like reconsidering the 1940s legacy of global institutions such as the UN, and reforming patent systems that prevent getting drugs to poor countries. After all, understanding and modifying incentive structures is “what economics is supposed to be about.” Continuing the second point, Sen believes the globalized trade in arms causes regional and global tension. This isn’t a problem confined to poor countries; on the contrary, the G8 consistently sell more than 80% of arms exports (with about two-thirds of American arms exports going to developing countries), and the permanent members of the Security Council are responsible for more than 80% of the global arms trade (an issue that has never been discussed in the Security Council). There is a cascade effect here – warlords can rely on American or Russian support for their subversion of economic order and peace (Sen mentions Mobutu as a case in point; the example of Somalia I have blogged about, with American support for Ethiopia, is another). To change this we need to reform the role of ethics, which Sen generalizes into a discussion of human rights.

The contraposition of opulence and agony makes us question the ethics of the status quo, yet the status quo is hard to change, since power goes with the wealth. Jeremy Bentham in 1792 called natural rights “nonsense on stilts,” and Sen notes this line of dismissal is still alive today when people question how a right can exist in the absence of legislation. Bentham says a right requires the existence of punitive treatment for those who abrogate it. Sen says the correct way of thinking about this is utility-based ethics, not examining the foundational grounds. For him, this means an ethics that makes room for the significance of human rights and human freedom.

If human rights are a legitimate idea, how are they useful for poverty eradication? Moral rights are often the basis of legislation, as with the inalienable-rights basis of the American Constitution and Bill of Rights. The Universal Declaration of Human Rights (its 60th anniversary is in 2008) inspired many countries to bring about this kind of legislative change. Quoting Herbert Hart, Sen notes that the concept of a right belongs to morality and is concerned with when one person’s action is limited by another – this is what can appropriately be made “the subject of coercive human rules.” So Sen uses this to provide a motivation for legislation. Sen also points out a motivation for the ethics of human rights through monitoring the behavior of the powerful and of governments, as Human Rights Watch, Médecins Sans Frontières, Amnesty International, and many others do.

Sen relates King and Gandhi through their calls for peaceful protest as a way of enacting social reform. Sen believes religion plays a large part in social reform (Sen is an atheist but King invoked God frequently), but he says the argument does not rest on the religious components. Following King, Sen discusses the story of Jesus and the Good Samaritan and boils it down to the question of how a neighbor is defined. In the story Jesus argues with a lawyer’s limited conception of duty to one’s neighbor using strictly secular reasoning. Jesus tells the lawyer a story of a wounded man in need who was eventually helped by the Good Samaritan, and asks the lawyer: when this is over and the wounded man reflects on it, who was the wounded man’s neighbor? The lawyer answers that the man who helped him is the neighbor, which is Jesus’s point. Using this understanding of the story, Sen concludes the motivation to treat others as equals is not what matters – what matters is that in the process a new neighborhood has been created. Sen says this is a common understanding of justice, and a pervasive one, since we are linked to each other in myriad (and growing) ways. “The boundaries of justice grow ever larger in proportion to the largeness of men’s views.” Shared problems can unite rather than divide.

Sen concludes that no theory of human rights can ignore a broad understanding of human presence and nearness. We are connected through work, trade, science, literature, sympathy, and commitment. This is an inescapably central engagement in the theory of justice. Poverty is a global challenge, and there are few non-neighbors left in the world today.

To whom do these human rights apply? Obviously everyone. Quoting Martin Luther King’s 1963 speech from the Lincoln Memorial, Sen invokes “the fierce urgency of now” to “make good on the promises of democracy” and to make “justice a reality for all of God’s children.”

Crossposted on I&D Blog

Do you Know Where Your News Is? Predictions for 2013 by Media Experts

Jonathan Zittrain, co-founder of the Berkman Center, is moderating a panel on the future of news at Berkman’s Media Re:public Forum. The panelists were given two minutes and gave us some soundbites.

Paul Steiger is Editor-in-Chief of ProPublica, a nonprofit with 25 journalists created to fill the gap left by the shrinking newsrooms in the country. He was previously a Wall Street Journal managing editor for 16 years. When he was at the WSJ, he remembers 15% of the budget being allocated to news and the rest to operations; now, at ProPublica, more than 60% of the budget goes to news. This is due to the web and how easy operations are now. Asked about his vision for 2013, he doesn’t anticipate making money, since ProPublica’s mandate is to not sell advertising and to remain a nonprofit.

Jonathan Taplin is a Professor at USC Annenberg and a former producer of films with Bob Dylan and Martin Scorsese. He worries 2013 might bring commercial overload and not just information overload. He agrees with David Weinberger that the struggle will be over metadata. He sees an advance of the commoditizing of freedom – social networks mine information about you even though they seem free. So he sees an eventual Facebook/MySpace-type polarization across the web, where some users are in an ad-free world they pay for and others in a free world full of ads. These become two separate worlds that don’t interact.

Jennifer Ferro is Assistant General Manager and Executive Producer of Good Food at KCRW. She sees a convergence of devices and platforms where individual devices become less relevant. She doesn’t think people are going to carry radios; the internet will become pervasive, with a backbone of media sites people primarily visit.

Jonathan Krim is Director of Strategic Initiatives of Washingtonpost.Newsweek Interactive. He thinks the traditional storytelling model, based on objectivity, will be abandoned, and journalists will seek to attribute all points of view to others. He sees the blogosphere, television, and some print pioneers creating spaces where reporters are free to write what they know – where the quality of the reporters is important and considering the other side is important. This means that we will approach something closer to a press that reports along certain lines that identify them. Krim believes this scenario enhances the credibility of the journalists and allows for wider sourcing and more public participation.

Lisa Williams, of Placeblogger.com, sees shorter job tenure with a greater number of popular journalists rather than a cabal of a few. This gives a wider breadth to the stories and more depth: for example 6000 amputee soldiers have returned from Iraq – but how many have been fitted with prosthetics? Important questions like this would be tough to answer in a traditional newsroom but in 2013 the media will be capable of answering this.

David Cohn, from digidave.org and Newstrust.net, has two mantras: 1) the future is open and distributed, and 2) journalism is a process, not a product. Cohn sees these converging to the question: how does the process become more open and distributed? He wants newspapers to be more like a public library, in that they are a source of information about your community. He follows ideas from Richard Sambrook’s talk last night in that he wants content to be open and distributed through networked journalism.

Jon Funabiki is a Professor of Journalism at San Francisco State University. He thinks dialog in 2013 will center around our passions. He sees three trends: 1) increasing demographic diversity in the US and increasing globalization, 2) an explosion of ethnic new media from identity-based communities, and 3) the increasing practice of community-based organizations using new media tools, like journalistic narrative storytelling, designed to move services to communities. So he wants to couple old media with new community-produced media, since it all contributes to the ongoing civic dialog.

Solana Larsen is managing editor of Global Voices and previously a commissioning editor of Open Democracy. She is worried about journalistic integrity – journalists interviewing journalists who are on the scene, and reporting secondhand information with an aura of knowledgeability. She wants journalists to talk to local people and be honest with their audiences about how much they really know about the topic. She thinks in 2013 there will be no foreign correspondents, and news will be reported by people who understand the local context and culture.

Crossposted in I&D Blog

Media Re:public Forum Panel on Participatory Media: Defining Success, Measuring Impact

Margaret Duffy is a Professor at the University of Missouri School of Journalism, speaking at Berkman’s Media Re:public Forum. She leads a Citizen Media Participation project to create a taxonomy of news categories and get a sense of the state of citizen media by sampling news across the nation. They are interested in where the funding is coming from, the amount of citizen participation, and getting an idea of what the content is. They are also creating a social network called NewNewsMedia.org, connecting seekers and posters to bring together people interested in the same sorts of things.

She’s sampled the country in local regions and found that, for example, Richmond, Virginia is a hotbed for citizen journalism and blogging, and says their methods of connecting to each other are unique. This suggests that blogging and citizen media remain a local phenomenon. Across the country, they were surprised by how the sites were not all that participatory; for example, there isn’t much capability to upload on these sites. She suggests this is because gatekeeping seems very important, and blogs tend to be tightly controlled by their authors. They have also seen a lot more linking outside their sites, and many blogs are trying to sell advertising (with highly varying levels of success).

The driving force behind the project is the idea, from a social capital standpoint, that strong community connections make a difference to how a community survives in a democratic process. Her results on the local nature of citizen media suggest a more traditional notion of what a community is. Ethan Zuckerman discusses how a community can define itself by local geography or around subject matter, and he suggests (referencing the talk below) that we are developing new metrics for monetizing sites based on reaching the right community, so how we define the community is important for the sustainability of websites.

Duffy is followed by Carol Darr, director of the Institute for Politics, Democracy and the Internet (IPDI) at George Washington University. She is discussing the “Media Habits of Poli-fluentials,” building on the book The Influentials by Ed Keller and Jon Berry. The idea is that one person in ten tells the other nine how to vote, where to eat, and so on. The interesting thing Darr notes is that poli-fluentials (her term) are not elites in the traditional sense but local community leaders and ordinary folk who appear knowledgeable to their peers. She notes that people who seem to know a lot of people tend to be these poli-fluentials.

In a study she published at www.ipdi.com, the internet users political campaigners had traditionally not focused on turn out to be the most active and most connected people in their local communities. So now the campaigns and news media understand their audiences differently. If you read a newspaper or watch Sunday morning talk shows and PBS, you are more likely to be a poli-fluential (about doubling your odds). Interestingly, purchasing political paraphernalia online increases your odds of being a poli-fluential about five-fold, as does joining political groups and actively emailing representatives. But the kicker is that self-declared independents who made a political contribution are 80 times more likely to be poli-fluentials than not.

Can we find sustainable funding models for citizen journalism? She suggests the poli-fluentials are the ones for advertisers to target, since their opinions are the ones that filter out influentially to the community – that is where you get the most bang for your advertising buck.

In the panel discussion following the talk, Marc Cooper, from the Huffington Post and a USC professor, comments on how much it matters who is reading his site. He wants to maximize this number rather than target the poli-fluentials. Impact is whether people are reading the stories, whether the stories filter into the broader media, and whether they spawn debate. Clint Ivy from Fox Interactive Media suggests that you need to decide whether your goal is to make money or not, and the appropriate metric flows from this. He uses the number of comments per post to measure influence; others might just decide whether or not they get a sense of satisfaction from blogging. Dan Gillmor, another Berkman fellow and Director of the Walter Cronkite School of Journalism and Mass Communication at ASU, reframes the problem as one of finding the right things to measure – how do you get a handle on the community mailing list that never bubbles out beyond the community? He thinks these things are enormously valuable and get overlooked. Ethan Zuckerman of GlobalVoices, another Berkman fellow, is concerned about agenda setting and whether the right stories are coming up onto the front page; he worries that the numbers tend to reflect popularity rather than whether the stories are important and underheard. It is easy to get many hits on your blog by picking a sensational story, but having tens or hundreds of the right readers reading the right story is tough to measure. Marc Cooper questions whether any of these questions are new in the digital age or just a rehashing of the same questions journalists have always faced.

Crossposted at I&D Blog

John Kelly: Parsing the Political Blogosphere

John Kelly is a doctoral student at Columbia’s School of Communications and a startup founder (Morningside Analytics), as well as a collaborator with Berkman. He’s speaking at Berkman’s Media Re:public Forum.

Kelly says he takes an ecosystem approach to studying the blogosphere: he objects to dividing research on society into cases and variables, because society is an interconnected whole, and basic statistical methods that use variables and cases are not designed to take interconnections into account. What he is doing with the research he presents today is using a graphical tool to present descriptions of the blogosphere.

Kelly shows a map of the entire blogosphere and of the outlinks from the blogosphere. Every dot is a blog, and any blogs that link to each other are pulled together – so the map itself looks like clusters and neighborhoods of blogs. The plot seems slightly clustered, but there is an enormous amount of interlinking (my apologies for not posting pictures – I don’t think this talk is online). The outlinks map shows links from blogs to other sites – the New York Times is most frequently linked to and is thus the largest dot on the outlinks map.
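The mechanics behind such maps are easy to sketch with off-the-shelf tools. This toy version (the blogs and links are made up, and Kelly’s actual pipeline is certainly far more sophisticated) uses networkx: a force-directed layout pulls linked blogs together into neighborhoods, and in-link counts give the dot sizes for an outlinks map.

```python
import networkx as nx

# A made-up fragment of the blogosphere: edges are hyperlinks, including
# outlinks to non-blog sites such as nytimes.com.
G = nx.DiGraph()
G.add_edges_from([
    ("blog_a", "blog_b"), ("blog_b", "blog_a"), ("blog_a", "blog_c"),
    ("blog_d", "blog_e"),  # a second, separate cluster
    ("blog_a", "nytimes.com"), ("blog_b", "nytimes.com"),
    ("blog_d", "nytimes.com"), ("blog_e", "wikipedia.org"),
])

# Force-directed ("spring") layout: linked nodes are pulled together,
# which is what makes clusters and neighborhoods visible on the map.
positions = nx.spring_layout(G, seed=42)

# On an outlinks map, dot size tracks how often a site is linked to;
# on Kelly's map the New York Times came out largest.
by_inlinks = sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True)
print(by_inlinks[:3])  # [('nytimes.com', 3), ...]
```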

Kelly compares maps for five different language blogospheres: English, Persian, Russian, Arabic, and the Scandinavian languages. Russian has very separate clusters, and the other languages get progressively more interconnected. In the Persian example, Kelly has found distinct clusters of expat bloggers, poetry bloggers, and religious conservative bloggers concerned about the 12th Imam, as well as clusters of modern and moderately traditional religious and political bloggers. Kelly suggests this is a more disparate and discourse-oriented picture than we might have expected.

In the American blogosphere, Kelly notes that bloggers tend to link overwhelmingly to other blogs that are philosophically aligned with their own. He shows an interesting plot of the Obama, Clinton, and McCain blogospheres’ linking patterns to other sites, such as think tanks and particular YouTube videos.

Kelly also maps a URL’s salience over time: mainstream media articles peak quickly and are sometimes overtaken by responses, but Wikipedia articles keep getting consistent hits over time.

The last plot he shows is a great one: the blogs of the people attending this conference (and their organizations). Dot size represents how much people have linked to each site, and the 5 big dots are all mainstream media sites. Filtering those out reveals GlobalVoices as the blog people mainly link to.
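
That filtering step is simple to express in code. A sketch of the same operation on an invented toy link list (all names hypothetical), assuming dot size is just the count of incoming links:

```python
# Sketch of the filtering step: rank sites by in-links, drop the
# dominant mainstream-media dots, and see what surfaces next.
# All names and links are invented toy data.
from collections import Counter

links = [
    ("blog1", "nytimes.com"), ("blog2", "nytimes.com"), ("blog3", "nytimes.com"),
    ("blog1", "bbc.co.uk"), ("blog2", "bbc.co.uk"),
    ("blog1", "globalvoices.org"), ("blog3", "globalvoices.org"),
    ("blog2", "someblog.net"),
]

inlinks = Counter(target for _, target in links)
mainstream = {"nytimes.com", "bbc.co.uk"}  # the "big dots" to filter out
filtered = {site: n for site, n in inlinks.items() if site not in mainstream}
print(max(filtered, key=filtered.get))     # -> globalvoices.org
```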

Crossposted on I&D Blog

David Weinberger: How new technologies and behaviors are changing the news

David Weinberger is a fellow and colleague of mine at the Berkman Center and is at Berkman’s Media Re:public Forum discussing the difference the web is making to journalism: “what’s different about the web when it comes to media and journalism?”

Weinberger is concerned with how we frame this question. He prefers ‘ecosystem’ to ‘virtue of discomfort,’ since ecosystem gets at the complexity and interdependence of online journalism. But the ecosystem analogy is too apt – too comforting and all-encompassing – so he pushes further. He doesn’t like the ‘pro-amateur’ analogy since it focuses too much on money as the key difference among web actors, and yet somehow understates the vast disparity in money and funding. The idea of thinking of news as creating a better-informed citizenry, so that we get a better democracy, doesn’t go far enough – Weinberger notes that people read the news for more reasons than this.

So he settles on ‘abundance’ as a frame, since control doesn’t scale – something online media is currently grappling with. “Abundance of crap is scary but abundance of good stuff is terrifying!” The key question is how to deal with this. We are no longer in a battle over the front page, since other ways of getting information are becoming more salient. For example, Weinberger notes that “every tag is a front page” and email recommendations often become our front page. He sees this translating into a battle over metadata – the front page is metadata, authority is metadata – and we are no longer struggling over content creation. So we create new tools to handle metadata, in order to show each other what matters and how it matters: tools such as social networks and the semantic web. All these tools unsettle knowledge and meaning (knowledge and meaning that was not obvious but was always there).

Crossposted on I&D Blog

Robert Suro: Defining the qualities of information our democracy needs

Robert Suro is a professor of journalism at USC and spoke today at Berkman’s Media Re:public Forum. His talk concerns journalism’s role in democratic processes, and he draws two distinctions in how we think about journalism. The first: journalism is a business but also a social actor, and the two often get conflated. He points out that when mainstream media’s profitability declines, we shouldn’t make the mistake of assuming its impact in the democratic arena declines as well.

His second distinction concerns the term “participatory media”: he distinguishes between the study of who is participating and what means they use (his definition of participatory media) and “journalism of participation,” which evaluates the media as a social actor whose object is effective democratic governance. He is worried these two concepts get confused, and that people can mistakenly equate the act of participating in the media, for example adding comments to a web site, with effective participation in the democratic process.

The result of these distinctions is that if you want to assess participatory media in terms of social impact, you have to study not only who the participants are and what they produce, but also whether this activity is engendering civic engagement that makes democracy more representative and government more effective.

Suro notes that this isn’t new: he hypothesizes that journalism doesn’t change often, but when it does it is a big change, and we’re in the middle of just such a change right now. As an example of a previous change he gives the debate between two editors who were interested in the creation of civil society, one supported by Jefferson and Madison and the other by Hamilton and Adams. Both were partisan in what they said and in who funded them, and both were committed to democracy but understood the role of the state differently, resulting in the creation of the Democratic and Republican parties. Although both would be fired as editors today, there is a long history of journalism producing democratic results, and the fundamental role of journalism in a democratic society is subject to change. We should study the ongoing redefinition and try to understand causality and impact.

Suro also thinks the Lippmann/Dewey argument – whether the goal of journalism should be to produce highly informed elites or to mobilize the masses and create informed debate – is alive and well. He suggests we have always produced a mix of these outcomes and will inevitably continue to do so, but now we have to address the mix of journalistic processes. He thinks the right way to look at this is to assess what outcomes those processes produce in terms of quality of leadership. Suro also touches on Cass Sunstein’s polarization concern, namely that polarization will produce less effective governance: we need to understand how a mix of new and old media can create a megaphone that artificially amplifies a voice that might not be the most effective.

Crossposted on I&D Blog

Richard Sambrook at the Media Re:public Forum

I’m at Berkman’s Media Re:public Forum and Richard Sambrook, director of Global News at the BBC, is giving the first talk. He is something of a technological visionary, and his primary concern is with how technology is affecting the ability of anyone, not only traditional media, to set the international news agenda.

The model in which news stories break on blogs and travel to mainstream media seems incomplete to Sambrook, and he hopes to use the news audience to develop the agenda in an interactive way through network journalism. An example he gives is how the BBC puts their NewsNight show’s agenda online in the morning and invites people to comment on the choice of stories and the angles they are taking on them. But this seems quite small, and as Ethan Zuckerman points out in a question, not much of a change in paradigm: Zuckerman laments that mainstream media is trying to involve the public on their terms and in their way, through site-hosted comments, while being quite closed about sharing their content. Sambrook attributes this to the slowness of cultural change at organizations like the BBC and says it is changing: for example, BBC video can now be hosted on any site. Sambrook is also worried that they just can’t seem to find the audience – the right people to engage with in various areas. He notes that the top ten sites (Google, Yahoo, Wikipedia, Fox News, etc.) control 1 billion eyeballs. He doesn’t think current business models are sustainable, and suggests energy should perhaps be directed into a metric other than eyeballs that more accurately measures engagement and can be monetized.

Sambrook notes that across mainstream media it is well understood that the future of news is online, but there are cultural legacies within these organizations, and even where there aren’t, solutions to new problems aren’t obvious. Sambrook gives the example of the BBC’s river boat trip through Bangladesh. They experimented with several ways of reaching potentially interested audiences: Twitter, Google Maps to track the boat, images on Flickr, radio, and traditional news. They had 26 followers on Twitter and 50k on Flickr but millions on the radio. This highlights the difficulty news outlets are having reaching their audience – the methods chosen are key, and how to choose them is not obvious.

Sambrook says he sees an upcoming tipping point for the data-driven web, or semantic web, in news applications. For Sambrook, this manifests as an improvement in the personalization of news. He mentions the BBC’s dashboard tool – a way to pull content from all over the BBC’s website to suit your interests and tastes. He is also concerned about the tension with agenda setting: “who is the curator of the kind of news you are interested in?” This also brings to mind Cass Sunstein’s polarization critique of the internet, especially for news delivered online – that we will only seek out news that fundamentally agrees with our own opinions and create echo chambers in which we never hear opposing thinking, so that open discourse and debate become stultified. He seems to see the future as communication within communities, and he frames the problem as finding the right community and getting it involved in an effective way.

Crossposted at I&D Blog