Peter Levin is CTO of the Department of Veterans Affairs and has a take on open gov tailored to his department: he's restructuring the VA's IT infrastructure to facilitate access. For example, the VA just processed its first paperless claim and is cutting claim turnaround time from 165 days to 40.
He is also focusing his efforts on emotional paths to engagement rather than on numbers and figures. I hope they can provide both, but I read his comments as a reaction to, and criticism of, open data in general. Levin offers the analogy of the introduction of the telephone: the phone was fundamentally social in nature and hence caught on beyond anyone's expectations, whereas a device that simply communicated facts would not have. That encapsulates his vision for technology changes at the VA.
James Hamilton of Northwestern suggests the best way to help reporting on government information, and the communication of government activities, would be to improve the implementation of the Freedom of Information Act, particularly for journalists; the aim is better government accountability. He also advocates machine learning techniques, like text analysis, to automatically analyze public comments and draw meaning from data in a variety of formats. He believes this software exists and is in use by the government (even if that's true, I'm doubtful of how well it works), and a big improvement would be to make it open source (he also references Gary King's text-clustering software, which is open and has been repurposed by the AP, for example).
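To make the text-clustering idea concrete: the sketch below is not King's method or any government system, just a minimal illustration in Python using scikit-learn, with invented sample comments and an arbitrary cluster count, of how public comments can be grouped automatically by topic.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical public comments; a real system would ingest thousands.
comments = [
    "Please extend the comment period for this rule.",
    "The proposed rule will hurt small businesses.",
    "Extend the deadline so the public can respond.",
    "Small firms cannot absorb these compliance costs.",
]

# Represent each comment as a TF-IDF vector, then group similar ones.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for comment, label in zip(comments, labels):
    print(label, comment)
```

Even this toy version hints at the payoff Hamilton describes: an analyst reviews a handful of clusters instead of reading every comment individually.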
George Strawn from the National Coordination Office (NITRD) notes that there are big problems even in combining data within agencies, let alone in assembling datasets from disparate sources. He says that in his experience agency directors aren't getting the data they need to make their decisions, even data that is theoretically available.