[I]f we take the scientific method and the process of peer review seriously, that commits us to occasionally (or even frequently) publishing work that we believe time will eventually prove wrong.
---Tal Yarkoni, writing at [citation needed]
Yarkoni was writing in the context of the recent infamous paper by Daryl Bem published in the prestigious Journal of Personality and Social Psychology, which purports to show evidence of ESP. Yarkoni's analysis of the paper is truly awesome, and worth a read if you are at all interested in research methodology (which means: nobody except for actual researchers, and perhaps hopeless pedants like myself).
Yarkoni's conclusion is in line with the impression I was already getting from reading other bloggers' and pundits' comments on the paper: It's a good paper as far as it goes, even though the conclusions are almost certainly wrong. (I differ from Yarkoni in the weight I give to good ol' Thomas Bayes in rejecting the ESP hypothesis -- I frankly think the sheer prior implausibility of ESP is the most important indicator of how we know this paper is wrong, and the rest is just a post mortem to determine what went wrong.) Bem has been admirably transparent and thorough in documenting exactly what he did. As I've written elsewhere, that in many ways redeems the paper's publication, in that it provides the opportunity for a teaching moment in experimental design and what exactly can go wrong.
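To make that Bayesian point concrete, here is a rough back-of-the-envelope sketch. The numbers are purely illustrative assumptions of mine (a vanishingly small prior that ESP is real, and a generous Bayes factor for the reported experiments), not figures taken from Bem's paper or from Yarkoni's analysis:

prior_esp = 1e-6        # assumed prior probability that ESP is real (illustrative)
bayes_factor = 20.0     # assumed evidential strength of the experiments
                        # (generous for a typical p ~ .01 result)

prior_odds = prior_esp / (1.0 - prior_esp)
posterior_odds = prior_odds * bayes_factor
posterior_esp = posterior_odds / (1.0 + posterior_odds)

print("posterior probability of ESP: %.2e" % posterior_esp)  # roughly 2e-05

Even after a result strong enough to clear peer review, the posterior stays vanishingly small under those assumptions -- which is the whole point of leaning on Bayes here.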
The quote from Yarkoni with which I open this post is indubitably true from a research perspective, but in a broader societal context, it's highly disturbing. No, I'm not criticizing Yarkoni; he's damn right, and that's exactly what is so disturbing about what he points out. The public doesn't see it that way. The popular press really doesn't see it that way. For science journalists and the lay public, a single provocative paper published in a reputable journal (and even then, the public -- and to a certain extent I include myself in this -- is generally ignorant as to which journals are reputable) is as good as gospel.
What can be done about this? Yarkoni is right that if the only papers that got published were the ones with a very high probability of being correct in their conclusions, it would entirely sabotage the scientific method. In fact, pretty soon you wouldn't be able to publish any papers at all by that criterion, because a big part of how certainty is achieved is via a long line of less certain research that all points to the same conclusion. And in any case, many people -- including myself -- are increasingly of the opinion that publication bias is a tremendous problem, resulting in a great deal of waste as well as slowing science's natural ability to correct itself. In that light, the last thing we want is to publish fewer papers.
The most effective solution, I think, would be for science journalists to collectively behave more responsibly. But good luck with that; it's only slightly more likely than the entire public suddenly becoming more skeptical.
There is some hope -- as demonstrated by the recent evisceration of the NASA-funded paper on arsenic-based life forms, the blogosphere is increasingly flexing its muscles to keep that sort of credulous nonsense in check. In the past, the blogosphere couldn't really be effective in that role, because even if a zillion science bloggers pointed out the flaws, they were only reaching a niche audience of readers -- but now, enough noise in the blogosphere amounts to a news story in itself, one the mainstream media feels obligated to report on.
There's a potential blueprint here: Mainstream science journalists breathlessly report on the latest novel finding of public interest, as if one half-way decent p-value proves some controversial hypothesis beyond a shadow of a doubt; trained researchers blog about how the news stories are all full of shit and explain why the paper is either wrong or should be taken with a grain of salt; other interested bloggers link to and write about these criticisms, and in typical Interwubzian fashion, generate a lot of drama and heat in a short period of time; and finally, that becomes a story in itself, forcing those same mainstream science journalists to write articles playing down the earlier reports.
It's far from a perfect system -- people will in general still remember the early credulous articles rather than the later, more skeptical ones, and those with an agenda (Buy my supplements! Don't vaccinate! Religion makes you live longer!) will cite the peer-reviewed-but-wrong papers for decades as if there were nothing wrong with them and they had never been contradicted by later research. But the sheer noise the blogosphere is capable of generating, and in such a short time, makes the situation somewhat more promising than it has been in the past, I think.
In any case, go back and read the quote from Yarkoni at the top of this post. He's hit the nail on the head here. This sort of thinking should be taught in grade school and hammered home all through a person's education. Science is messy, and by its very nature it will frequently produce wrong results -- in fact, it is this very embrace of uncertainty that gives science its epistemological dominance over all other attempts to get at the truth. We should neither discount science because of its dynamic and ever-changing character (for that is where its power lies!), nor assume that every proper application of the scientific method produces an unquestionable truth.