Lay People Can Predict What Replicates
New research makes us ask: how did the work that doesn't replicate get published in the first place?
Over at my Psychology Today blog I posted an essay comparing the hype around WeWork to the hype around counter-intuitive social psychology findings. The article was an attempt to do something new, weaving broader cultural issues together with commentary on the scientific method. I don't know that it was completely successful, but I do think the core metaphor is worth thinking about more deeply.
Here are two paragraphs that I think best capture what I was trying to say:
Neuner didn’t know how to get investors to invest in his smaller, more realistic claims compared to WeWork’s outlandish ones. As he put it: “Do I tell VCs, ‘You know, WeWork must be lying, so you should accept my smaller returns instead’? No one wanted to hear that.” This kind of prioritizing of bold, counter-intuitive, even outlandish claims also feels familiar to anyone who was reading social psychology research from the 90’s and 00’s.
The argument is laid out in a 2014 blog post by former Society for Personality and Social Psychology (SPSP) President David Funder. He wrote: “It seems clear that grant panels and journal editors have traditionally overvalued flashy findings, especially counter-intuitive ones. … But ‘counter intuitive’ means prima facie implausible, and such findings should demand stronger evidence than the small N’s and sketchily-described methods that so often seem to be their basis.” Like investors in co-working business spaces, academic journal editors and scientific funding bodies preferred to back bold, unrealistic claims over more measured, realistic ones.
What might have been lost in the write-up was more focus on a very nice study looking at whether lay people could predict replication. The headline is that non-experts predicted whether a psychology study would replicate with 59% accuracy just from a description of the study, and with 67% accuracy when they were also given information about the Bayesian strength of evidence. In two prior studies that asked experts to predict which findings would replicate, accuracy was 65% and 72%.
The study is also a welcome demonstration that people can interpret Bayesian descriptions of strength of evidence reasonably well, which is encouraging for the growing pro-Bayesian movement in psychology. Simply reading a plain-language summary of the Bayesian evidence let lay people predict replication roughly as accurately as expert researchers. When I first read comparisons of Bayesian and Frequentist statistics, the Bayesian approach struck me as the more intuitive of the two. Richard McElreath makes the same point in his excellent stats textbook: students "get" Bayesian reasoning pretty quickly if it's the first thing they're taught.
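For readers who haven't seen what a "Bayesian strength of evidence" description looks like, here is a minimal sketch (in Python, purely for illustration) of how a Bayes factor can be mapped to a verbal label using the conventional Jeffreys-style cutoffs. The thresholds and wording below are standard textbook conventions, not taken from the study itself, so treat them as an assumption about the flavor of summary participants saw rather than the actual materials.

```python
# A sketch of translating a Bayes factor into a plain-language
# "strength of evidence" label, using conventional Jeffreys-style
# cutoffs. The exact thresholds and wording the study presented to
# participants may differ; this is only illustrative.

def evidence_label(bf10: float) -> str:
    """Map BF10 (evidence for the effect relative to the null) to a verbal category."""
    if bf10 < 1:
        return "evidence favors the null (no effect)"
    elif bf10 < 3:
        return "anecdotal evidence for an effect"
    elif bf10 < 10:
        return "moderate evidence for an effect"
    elif bf10 < 30:
        return "strong evidence for an effect"
    else:
        return "very strong evidence for an effect"

if __name__ == "__main__":
    for bf in (0.5, 2, 8, 25, 100):
        print(f"BF10 = {bf:>5}: {evidence_label(bf)}")
```

The appeal of this kind of summary is that it says directly how much the data favor one hypothesis over another, which is arguably easier for a non-expert to act on than a p-value.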
There are also some interesting exploratory analyses in the manuscript showing that laypeople are biased towards believing that scientific results will replicate more often than they actually do. In one sense that's heartening, because it reflects trust in science; the downside is that it shows psychological science isn't living up to that trust.
As I tried to express clearly at PT, I think this result really raises the question of how so many psychology results turned out to be unreplicable. If you can tell ahead of time that a result seems fishy, why didn't editors and funding bodies reject those projects? Perhaps the clique-ishness of funders, like the clique-ishness of VC investors, is part of the problem.