The most devastating sentence in David Peterson and Aaron Panofsky’s new sociology article is: “Metascience is only rational in a world where science can be unified by ‘universal policies and procedures’ and, thus, where it makes sense to think about science as a unitary object.” As the sociologists of science point out, few scholars in the philosophy or social studies of science agree with this premise. One of the “central findings of science studies,” as they put it, is that science is not a single, unified thing but a diverse, disunified collection of fields. There is no one scientific method, and “efficiency” is not a meaningful concept in basic research. You can’t optimize for the routine production of breakthroughs.
The article sounds a warning about the imperial ambitions of Metascience, a new reform movement born out of psychology and medicine’s Credibility Revolution. The movement has almost entirely ignored the contributions of qualitative social science, and it threatens to impose rules and norms that would undercut necessary scientific diversity. For people like me, who are part of the Credibility Revolution, it’s worth considering the scope and limitations of our efforts.
A central takeaway is that methodological advice is never universal. This is particularly true when we consider the “good statistics” part of Metascience. Psychologists, biologists, and medical researchers typically use the same kinds of statistical models over and over again, often to analyze experiments set up in the same way. When the same set-up comes up over and over again—like a Randomized Controlled Trial in medicine or a 2 x 2 ANOVA in social psychology—statistical and methods experts can pick out the common errors. Advocates in the field can then push for guidelines and norms that rule out these common errors.
When we consider “big-S Science” beyond these fields, however, fixed guidelines or recommendations are less practical. Preregistration doesn’t make sense if you’re doing mathematical proofs, as some physicists, computer scientists, or mathematical psychologists are. Replication may not matter for a statistical analysis of the historical effect of the 2008 financial crisis on well-being, because the point isn’t to replicate that event. Collecting large samples may not be necessary if a clinician’s goal is just to document that Dissociative Identity Disorder is possible and presents in some real patients.
Metascience therefore needs to remain aware that the statistical recommendations being advocated, while they may apply broadly, do not necessarily apply to All of Science. Rewarding or denigrating research based on whether it conforms to these standards can therefore be counterproductive. Recommendations have a scope, and apply to research that is trying to do specific things—establish the presence or absence of a therapy’s effect on mental health, for example—but cannot be expected to help in every possible case.
Just promoting open sharing of data (the “open science” part of the movement) might not work in cases where privacy needs to be protected. Focusing Metascience on quantitative studies of scientific output (the “science of science” part of the movement) might shift focus away from useful philosophical discussions of how best to produce science. Elevating “good stats, sharing everything, and always measuring in numbers” to The Principles of All Good Science can push other concerns to the side, and potentially lead important work to be given less regard than it deserves.
I am sympathetic to the idea that there needs to be flexibility in how we evaluate science. Not every scientific contribution is trying to establish the effect of a treatment, for example, and so we need some elbow room to evaluate contributions on their own terms.
Yet there are parts of this piece that I found dismissive, such as the framing of a desire for more accurate results as part of a “moral crusade” led by “moral entrepreneurs,” and the rhetorical bracketing of reformers’ beliefs and motivations in a way that made them appear self-serving or disingenuous. To call a movement a moral crusade is to criticize it, as the phrase invokes a web of negative associations—especially with regard to science.
In my experience, the vast majority of people in the Metascience community genuinely believe that the community’s reform proposals will improve science, and have come to it after serious questioning of dominant paradigms in their own field. Moreover, this is a community that is full of disagreements, and many criticisms raised about whether a particular proposal is workable for scientists using different methods have been debated internally.
Peterson and Panofsky acknowledge this when they quote Metascience conference attendee Jevin West as saying “the more qualitative work I do, the more I realize we have to be careful about not relying too much on some of these macro level tools.” This member of the Metascience community wants to advocate for cautious use of the tools of science reformers, and appreciates other ways of knowing about the world. The authors also mention that several interviewees wanted greater communication between Metascience and historical, philosophical, and qualitative research traditions. Certainly, there is passion for reform, but it is not the “blind passion” evoked by the moral crusade metaphor. There may be a central set of guiding ideas, but members have indicated a willingness to change their opinions over time, and to listen to insights from other fields.
Peterson and Panofsky also write that “the narrative of crisis must not be taken at face value,” and that they will not debate the merits of the case. Yet they manage to include a lengthy footnote repeating key claims from critics of the movement. These are not bracketed or lined up against counter-claims, nor are the strongest points from both sides considered. The sociologists claim skeptical neutrality, but still find a way to sneak in unanswered criticisms.
Overall, the piece struck me as raising important questions. How does moving toward a more stats-and-numbers approach to studying science contrast with a more theories-and-ideas approach? What might we be losing? What can we learn by giving more attention to previous periods of (or attempts at) scientific reform? Is having one central field promoting “best practices” for all of science legitimate, or should claims be more modestly scoped, to deal with a few large areas of research that share the same set of recurring problems?
Yet it also struck me as having a slant. The implicit message seemed to be, “this new Metascience movement that has received so much positive press and funding is hugely problematic!” and perhaps, in a quieter voice, “and they would have realized it if they’d been talking to sociologists like us who do qualitative work.” It is a kind of criticism that invites dismissal, not engagement.
What’s missed, to me, is the respect that ethnographers might be expected to have for a bottom-up effort to change community norms for the better. A key element of the success of the modern Metascience movement is precisely that it has come from people working first on the problems in front of them, to improve their own practices and encourage change in the practices of their friends and colleagues. The Society for Improving Psychological Science (SIPS), Retraction Watch, and Facebook groups for discussing scientific methods have grown out of working scientists’ needs to discuss their concerns about how to improve their own work.
Metascience solutions are explicitly not coming from the measured perspective of outside interviewers. In fact, the biggest problem being pointed out here is that Metascience as a movement seems to be growing to encompass fields where it hasn’t yet found this kind of bottom-up support, and where it therefore doesn’t have the benefit of bottom-up solutions proposed by members of those research communities.
As the sociologists found in their interviews, there are clearly people like me in Metascience: “movement insiders” who want to take practical actions to improve scientific practice now, but who are open to outside ideas—including critical discussion of science reform efforts. From my perspective, Metascience is a young movement that is just coming into its own. Some of its recommendations may end up working better than others, and some might even need to be abandoned as they prove counterproductive. Yet I believe that many of its best ideas over the next decade will come not from further refinement of the ideas of psychometricians or other expert statisticians, but from members of broader research communities engaging in—and not dismissing—this dialogue on how to improve specific scientific practices.