After writing a long-form post reviewing Peterson and Panofsky’s new sociology article on metascience here, I realized I still had more to say. It turns out that was another thousand words (after editing). Clearly I have a bee in my bonnet about this one (to use a metaphor that applies to neither my local fauna nor my headgear).
That take is up on my blog at Psychology Today now, and it’s a natural follow-up to what I wrote.
I’ll quote the crucial paragraphs from the article here:
Metascientists could have raised new theoretical objections throughout their last decade of inquiry. Should psychologists really be spending all their time establishing new effects, or should they instead be carefully homing in on mathematical models that explain exactly how one effect works? Should we worry about whether established effects are reliable, or should we instead consider whether we’re measuring things that matter? Should we test whether there really are hidden tricks experienced researchers use to get studies to work, or should we argue that failing to report methods needed to get a study to work is scientific malpractice?
Instead of arguing for what the goals of psychology research should be, the psychology-focused metascience research group at the Center for Open Science (COS) has sought to listen to and respect the claims being made by researchers in their area, and to conduct studies that evaluate these claims. Research questions arose organically, often by attending to the concerns of critics. This line of work demonstrates that (at least in psychology) there is a theoretical through-line to metascience. It is an approach that should be familiar to anthropologists and sociologists conducting ethnographic research: listen to what a community says is important, and treat this as your object of study.
My point in writing this is not to say that critiques of the Credibility Revolution are bad or wrong. I’ve learned a lot from critiques of the movement, and I think there’s a lot that can be improved. My point instead is to better situate the critiques against the broader backdrop of activity that has led to huge changes in research practices over the last decade.
The way psychologists approach science is markedly different now from how we approached it when I started graduate school just under a decade ago. People routinely talk about whether large enough samples were collected to actually detect the effect researchers want to study. People routinely ask whether an idea is something that was predicted ahead of time based on careful thought, or whether it is new and came from looking at the data being reported. People are much more likely to make their data and questionnaires publicly available for anyone else in the scientific community to use.
Not all of these changes will end up improving science dramatically, or perhaps at all. But to me, using criticisms of particular reforms as an excuse to dismiss the reform movement as some kind of grift is to miss the forest for the trees. We are in the middle of a period where improved science feels possible, in a way it hasn’t been even in the very recent past. It’s worth arguing about the best ways to improve, but I think we should acknowledge that part of what created this moment is the energy of young scientists trying to make a positive change, even if they weren’t getting all the details right.