Improving Evidence and Relevance at the Same Time
Where do we go from here? And how do we get there? Those are questions I have been asking myself over the past few years as I’ve reflected on the various crises that have been plaguing the social sciences. I began my graduate training two years after the now-famous “false-positive psychology”1 paper triggered a crisis of confidence about the state of evidence in psychology and the other social sciences (e.g., experimental economics)2 that engaged in similar meta-scientific reflections. Then came 2020. After a few years spent working through strategies to address that crisis of evidence,3 the Covid-19 global pandemic and the temporary reckoning about racial and social justice reignited the crisis of relevance.4 In addition to debates about how to improve (quantitative) methods, social scientists also debated whether the kinds of knowledge our fields were producing were actually useful for speaking to pressing issues in society.5
Taken together, it seemed that we had somehow gotten ourselves into a situation in which, despite having decades of research under our collective belts, we had great difficulty understanding behavior well enough to make good predictions about how to change it. This state of affairs has been particularly concerning due to its implications for our readiness to respond during moments of crisis.6 Part of the reason for this status quo seems to be our fields’ history of studying a narrow sliver of humanity in a limited set of circumstances,7 which has inhibited our ability to learn about the range of factors that influence people’s thoughts, feelings, and behaviors.8 Moreover, research projects are often developed without input from the people whose lives the work is intended to represent or influence.
These issues are not new; they have been written about extensively for decades.9 Moreover, over the course of my own career, every major conference I have attended has devoted at least one session to talking about them. For a long time, I wondered whether they would be like some of the other issues I’ve encountered in academia—issues that we merely discuss ad nauseam and form task forces to write reports about, but seldom attempt to change.10 That concern has only grown the longer I have been in the field.
My growing skepticism was recently tempered, though, by a new program designed to address some of these issues. My collaborators and I recently received funding from the Mercury Project—a global consortium of researchers working on improving public health interventions. Our project focuses on the social and logistical factors that contribute to inequities in vaccine uptake. We are, of course, grateful for the funding to do the research, but the money is not what prompted me to write this post. It’s the other things that the Mercury Project is doing that excite me about its potential to address both the crisis of evidence and the crisis of relevance that have generated so much discussion over the past decade.
First, rather than allocate funding and leave each research team to figure out on its own what might or might not be helpful, the Mercury Project took a different approach to building a base of rigorous and relevant evidence. Each team submitted research proposals, as is typical with other funding mechanisms, but the funding decision was not the end of the feedback process. Before projects got started, the Mercury Project brought each funded team to a convening where researchers, methodological experts, representatives from communities that would be affected by the research, and policymakers who might ultimately use the research all came together to give constructive and critical feedback on each research design, ensuring that the projects would not only meet high evidentiary standards but would also be designed in ways that produce useful evidence for relevant stakeholders.
One of the things I found particularly helpful from the convening was hearing the perspectives of policymakers. I often read papers in which scientists conduct studies, write them up for academic journals, then end their paper with a paragraph about what policymakers should do with their findings. Because many social scientists have no training and limited experience working with policymakers, and therefore do not have insight into how the policymaking process works at different levels, such statements at the end of papers often have limited utility—they make recommendations that, frankly, do not make sense given how policy (and practice) actually work. Because of that, it was helpful to have policymakers at the table as the research was being designed so that they could give direct feedback about what kind of evidence would (and would not) be useful. That feedback allowed each research team to ensure we were measuring relevant variables, weighing benefits and costs of different approaches appropriately, and more generally, thinking critically about the theoretical and practical significance of the work we are doing for both the scientific community and the broader societies that would be affected by our work.
Another invaluable aspect of the Mercury Project approach was the diversity of the teams and other stakeholders that were brought together for the convening and broader work. The Mercury Project teams hail from 20 different countries and are doing research in 17 different countries, as well as online. One of the benefits of having such diverse teams is that the experiences people bring with them from the lives they’ve lived in a variety of places provide tremendous insight into factors that are important to consider for the research to be done well, and for figuring out whether and when research generated in one context can be applied to another. Social scientists and statisticians have been writing about the importance of understanding heterogeneity in order to improve both our theories and their practical relevance.11 But I have never been in another context that crystallized those ideas for me more clearly than the Mercury Project convening. For example, one conversation I am still thinking about weeks later was with other researchers, practitioners, and policymakers about why a health intervention that works really well in one country might not work in another. That conversation forced us to think through the structural, cultural, and political processes that affect the effectiveness of health interventions—factors that are important to understand in order to generate good theories of health behavior that could serve as useful theories of change.
The large epistemic and practical problems that the social sciences have been debating and discussing will not be solved by scientists working in isolation in ivory towers. The solutions require programs and structures like the one I have described in this post, which bring together researchers, community members, practitioners, and policymakers to think critically together about the kinds of knowledge we create,12 and the implications of that knowledge for society. If we create more opportunities for engagement and collaboration like the ones I just described, then maybe, just maybe, after decades of talking about these issues, we might make substantial strides toward improving both our scientific evidence and its relevance at the same time.