Computational scientists need to understand and assert their computational needs, and see that they are met.
I just read this excellent interview with Donald Knuth, inventor of TeX and the concept of literate programming, as well as author of the famous textbook The Art of Computer Programming. When asked for comments on (the lack of) software development using multicore processing, he says something very interesting – that multicore technology isn't that useful, except in a few applications such as "rendering graphics, breaking codes, scanning images, simulating physical and biological processes, etc." This caught my eye because parallel processing is a key advance for data processing. Statistical analysis typically executes the same operations record by record through the data, making it ideal for multithreaded implementation. This isn't some obscure corner of science either – most science carried out today has some element of digital data processing (although of course not always at scales that warrant implementing parallel processing).
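To make the record-at-a-time point concrete, here is a minimal sketch (my own illustration, not from the interview or the original post) of how a per-record statistic splits naturally across cores using Python's multiprocessing module; the `summarize` function and the toy data are hypothetical stand-ins for a real analysis:

```python
# Sketch: many statistical computations apply the same operation
# independently to each record, so the work divides cleanly across cores.
from multiprocessing import Pool

def summarize(record):
    """Per-record statistic: here, simply the mean of one row of measurements."""
    return sum(record) / len(record)

if __name__ == "__main__":
    # Toy dataset: each inner list is one record (e.g., one observation's measurements).
    data = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
    with Pool() as pool:
        # pool.map distributes records across worker processes, preserving order.
        means = pool.map(summarize, data)
    print(means)  # [2.0, 5.0, 8.0]
```

Because no record depends on any other, the speedup scales with the number of cores, with essentially no special-purpose code – which is exactly why this pattern is so common in data analysis.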
Knuth then says that "all these applications [that use parallel processing] require dedicated code and special-purpose techniques, which will need to be changed substantially every few years." As the state of our scientific knowledge changes, so does our problem-solving ability, requiring modification of the code used to generate scientific discovery. If I'm reading him correctly, Knuth seems to think this makes such applications less relevant to mainstream computer science.
The discussion reminded me of comments made at the "Workshop on Algorithms for Modern Massive Datasets" at Stanford in June 2010. Researchers in scientific computation (a specialized subdiscipline of computational science; see the Institute for Computational and Mathematical Engineering at Stanford or UT Austin's Institute for Computational Engineering and Sciences for examples) were lamenting the direction computer hardware architecture was taking toward facilitating certain problems, such as particular techniques for matrix inversion and hot topics in linear algebra.
As scientific discovery transforms into a deeply computational process, we computational scientists must be prepared to partner with computer scientists to develop tools suited to the needs of scientific knowledge creation, or acquire these skills ourselves. I've written elsewhere on the need to develop software that natively supports scientific ends (especially for workflow sharing; see e.g. http://stodden.net/AMP2011 ), and this applies to hardware as well.