Iris van Rooij is interested in scientific reform, but does not subscribe to the traditional solutions put forward by the Credibility Revolution. Replication may be a useful method for showing that there is a problem with traditional methods in psychology, but simply doing more replications will not lead to the kind of substantial improvement in psychology research she advocates. She might even go so far as to call it a step in the wrong direction, one that misses the deeper issues in psychology research: a lack of theory. It has taken me quite a while to grok what van Rooij has been saying in recent years, including some rereading. Yet I am beginning to understand what she is advocating.
The current practice in psychology is to rush out and measure things. If you want to study loneliness, for example, you would go out and measure how many friends people have, how lonely they say they are, how often they go to bars and coffee shops, how often they report being depressed, etc. Anything you can think of is fair game, and any relationship found between loneliness and another variable is considered a useful insight.
To van Rooij, this approach ignores that at some point we will need to put all these facts together into some kind of useful explanation of how loneliness works. The metaphor she uses is writing a novel by using random sentences. All the various facts we’ve established about loneliness (it’s related to depression, it’s more common in older adults, it’s not closely related to number of friends, etc.) are the random sentences. Certainly many of them are interesting. Yet they don’t all serve the deeper purpose of creating a central explanation. That’s the novel our science should be writing.
How do you write a novel, then? Well, you start by thinking through the broad outline of the plot. You consider what needs to happen in general, and how the different pieces will fit together, and then you start to fill in the details as you do the more careful, painstaking work of actually writing. Similarly, van Rooij advocates that psychologists do some careful thinking about the broad outline of their explanation before they start measuring things. This will give a better sense of what actually needs to be measured.
Maybe your theory starts out by suggesting that loneliness is like a reservoir that slowly depletes and needs continual contact to boost its levels back up again. In that case, it will be very important to measure how long people go in between conversations, but overall number of friends won’t be as important.
The theory here is something you invent. It’s a creative and thoughtful act of science, and it should be specific. The constraint comes in when you start to measure things. As you start to measure, you might find that your model doesn’t line up with what you observe. If that happens, you don’t falsify or “throw out” the model. You first see if there are adjustments that can be made to bring it more in line with what you observe. Then you see what those adjustments say you should measure next. Maybe you’ve changed things to suggest it’s not everyday conversations, but deep conversations that matter, and now you’re obliged to go out and try to get a rating of which conversations are superficial versus deep.
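Part of van Rooij’s point is that a theory like this can be made precise enough to simulate. As a minimal sketch (not her model, or any published model of loneliness; the function, parameters, and values here are all illustrative assumptions), the reservoir idea might be formalized as a simple difference equation:

```python
# Toy formalization of the hypothetical "reservoir" theory of loneliness.
# Every name and number here is an illustrative assumption, not an
# empirical claim: the reservoir decays a little each day and is
# topped up on days with a conversation.

def simulate_loneliness(days, contact_days, depletion=0.1, boost=0.5, capacity=1.0):
    """Return a day-by-day loneliness trajectory.

    Loneliness is modeled as the gap between the reservoir's
    capacity and its current level.
    """
    level = capacity
    history = []
    for day in range(days):
        level -= depletion * level                 # daily decay of the reservoir
        if day in contact_days:
            level = min(capacity, level + boost)   # a conversation refills it
        history.append(capacity - level)           # loneliness = unmet need
    return history

# Someone with regular contact vs. someone with long gaps:
frequent = simulate_loneliness(30, contact_days=set(range(0, 30, 3)))
sparse = simulate_loneliness(30, contact_days={0, 29})
```

Even this toy version does the work described above: it predicts that time between conversations, not number of friends, drives loneliness, which tells you what to measure next. And the proposed adjustment (deep versus superficial conversations) becomes a concrete change, e.g. letting `boost` depend on conversation depth, which in turn obliges you to go rate conversations.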
What’s new here is the idea that all new data collection is tied, from the start, to some theoretical explanation. Every new sentence you write is connected in some way to the broader story the novel is trying to tell. Psychologists running new studies would have to read the work of the theory-builders to see what gaps need to be filled in, what’s worth measuring. Psychologists building theory would need to stay up-to-date on the latest findings to be able to keep their models in line with reality.
This method of always connecting new data collection to a model would help with certain problems in psychology. First, there is the literature review as a list of facts: many of the research reviews I have read on a given topic in psychology feel like defensive recountings of reasons that topic is important. Loneliness is important because (i) it is related to depression, (ii) it is related to heart disease, (iii) it is related to political outcomes, (iv) it is related to a good family life, etc. After reading one of these reviews, you tend to think “well, loneliness certainly is important!” The only outcome that comes from it, though, is new researchers feeling that they really ought to include some measurement of loneliness in their upcoming studies (since it’s so important!). It’s not really clear how all these facts relate to each other, or if some are downstream consequences of other processes.
Second, there is the lack of specificity in how to act on the facts. There might be a section in this type of review that lists all the factors that have variously been related to lower loneliness, but there isn’t much of an explanation beyond “these things seem to work.” That might be enough to roll out a therapy or public policy program, but it’s not enough to carefully tune or adjust such a program--or to predict whether it will work in a new context. For that, we’d need to know which key factors actually matter for loneliness and how they work together.
Some psychologists might object that they are already doing this kind of thing. Certainly studies are frequently published in psychology that show when an effect is moderated by another factor (although they are among the least replicable types of findings). For example, we might see that losing your job increases loneliness--except if you have a strong church group. Yet these moderating factors have the same ad hoc quality as the effects psychologists currently chase. No particular theoretical reason went into checking whether church attendance changed things, other than a hunch.
Without having that broader story in mind, the door is always open to the possibility that anything and everything could potentially change a key effect, and we just can’t know until we measure it. With the broader story in mind, we are able to make the bolder, more assertive claim that “it is these things specifically, interacting in this way--and only that--which matters.” This is obviously not going to be exactly right, but the ability to always be tinkering with the theory, updating it as new data comes in, means that over time it should converge on a relatively parsimonious, plausible explanation.
In general, this approach makes sense to me, and I’ve outlined reasons to be optimistic about it. There are also aspects of it that worry me. The first is that this approach suggests that we should place less importance on “dust bowl empiricism,” or the idea that it’s important to go out and establish facts without needing a deeper theoretical context. It feels in tension with an important call for more descriptive work in psychology, where we just go out and get the lay of the land in many areas of research. It does seem like just asking people across several countries “how lonely do you feel?” could be an important first step to a research program on developing a theory of loneliness. Indeed, some people interested in building theoretical models suggest that you really want to have a few well-established facts when you start doing your more formal modeling. You use these as guideposts, and know that whatever you propose has to account for them.
That isn’t necessarily a reason to dismiss van Rooij’s “think about the explanation first” approach, but the two approaches might conflict when it comes to setting research priorities. Would psychology be better off if many new or early career researchers started shifting their energies towards model building? Or would it be better off if they started trying to make careful descriptions of things we care about, like how many friends people have in different countries and how close they feel to them?
The other piece that scares me--but that maybe shouldn’t--is the offloading of the creative work of theorizing to mathematical modelers. Many psychologists who know very little math also enjoy the creative act of building explanations of their topic areas. Moreover, they probably have a decent store of background knowledge just from having read a lot and run their own studies in the area. In the current environment, these experts would not be doing “real theory-building” unless they learned (or collaborated with someone who learned) some mathematical modeling. In other words, expertise in specific psychological topics is currently separated from expertise in model building.
My instinct is to bite the bullet on this one and just to admit, “Yes, what current theorists have been doing just isn’t good enough, and they won’t be able to do it well until or unless they start using new tools. And yes, those tools are math-y and hard to learn. Sorry, that’s just how it is.” Yet that’s a very hard sell to psychologists, even those who are invested in the current reform movement. In essence, it’s arguing that the next generation of great psychology is not going to come from people with the skills learned in a top-ranked, mainline psychology program.
So what comes next? Well, van Rooij is writing a textbook on theory building, and will be holding a workshop on it next year. More researchers across other areas of psychology are also advocating for formal modeling and changing current approaches to theory building. These will be important resources for trying to figure out what it would mean to shift the norms of the field. If the last decade of psychology has taught us anything, it’s that big changes in scientific thinking--and practice--are possible.