Mistakes by Piketty are OK (even good?)

In an email conversation I tried to make some points about the criticism Piketty has come under for apparent mistakes in his data. I think the concerns are real but misplaced. Here’s why:

Fernando Perez made a point in the reproducibility session at DataEDGE this month that I think sums up my perspective on this pretty well: making good-faith mistakes is human and honest (and OK), but the important point is that we need to be able to verify findings.

Piketty seems to have made an enormous contribution (I haven’t read the book yet, btw) by collating numerous disparate data sources and making the data available. I think sometimes folks (the Financial Times, for example) have the idea that if the academy publishes something it is a FACT or a TRUTH; currently there seems to be a gap in understanding that research publications are contributions to a larger conversation, one that hopes to narrow in on the truth. Feynman has a nice way of expressing this idea:

…as you develop more information in the sciences, it is not that you are finding out the truth, but that you are finding out that this or that is more or less likely.

That is, if we investigate further, we find that the statements of science are not of what is true and what is not true, but statements of what is known to different degrees of certainty: “It is very much more likely that so and so is true than that it is not true;” or “such and such is almost certain but there is still a little bit of doubt;” or – at the other extreme – “well, we really don’t know.” Every one of the concepts of science is on a scale graduated somewhere between, but at neither end of, absolute falsity or absolute truth.

It is necessary, I believe, to accept this idea, not only for science, but also for other things; it is of great value to acknowledge ignorance. It is a fact that when we make decisions in our life we don’t necessarily know that we are making them correctly; we only think that we are doing the best we can – and that is what we should do. [1]

I think viewing Piketty in that light makes his work a terrific contribution. The fact that there are mistakes (of course there are mistakes) doesn’t detract from that contribution; it just means we have more work to do in understanding the data. This isn’t surprising for a hypothesis as broad as his, and it isn’t surprising given the complexity of his data collation and analysis: any tiny mistake, or even just a different decision, at any point along the line could change the outcome, as appears to be the case. It’s like waiting tables: if we sum up all the little ways a waiter or waitress could lose some tip, it would be easy to lose the entire tip!

My hope is that the public discussion (and the scholarly discussion) moves toward accepting mistakes and errors as a natural part of the process, and contributes to minimizing them rather than attempting to discredit the scholarship completely. My advisor once wrote a short piece on being a highly cited author, and among other things he said to “leave room for improvement” when publishing, since it is “absolutely crucial not to kill a field by doing too good a job in the first outing.” [2] In that light, Piketty’s done a great job.
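To put toy numbers on the waiter analogy above, here is a minimal sketch in Python (the step counts and the 2% per-step error rate are invented for illustration; nothing here is drawn from Piketty’s actual data or methods) of how small, independent chances of error compound across the steps of a collation:

```python
# Toy illustration: small per-step error chances compound across a pipeline.
# The step counts and the 2% per-step rate are invented numbers, not
# estimates of anything in Piketty's (or anyone else's) actual analysis.

def p_at_least_one_error(n_steps: int, p_per_step: float) -> float:
    """Probability that at least one of n independent steps goes wrong."""
    return 1 - (1 - p_per_step) ** n_steps

for n in (5, 15, 50):
    print(f"{n:2d} steps at 2% each -> "
          f"{p_at_least_one_error(n, 0.02):.0%} chance of at least one error")
# -> roughly 10%, 26%, and 64% respectively
```

The point is just arithmetic: the more hand-collated steps, the more chances for a small slip, which is why mistakes in a project of this scope should be expected rather than treated as disqualifying.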

Of course all this changes if there was deliberate data manipulation or omission.

P.S. I put together some views on Reinhart and Rogoff here, but IMHO it’s a red herring in the Piketty discussion, except insofar as both are examples that help flesh out standards and guidelines for data/code release in economics:
http://themonkeycage.org/2013/04/19/what-the-reinhart-rogoff-debacle-really-shows-verifying-empirical-results-needs-to-be-routine/

[1] http://calteches.library.caltech.edu/49/2/Religion.htm

[2] http://www.in-cites.com/scientists/DrDavidDonoho.html
