Globally, there is a push for evidence-based practice and policy in poverty reduction and development strategies. The randomized controlled trial (RCT) methodology, drawn from clinical trials, has in recent years been applied through field experiments to issues as diverse as health practices, changing gender norms, and access to finance. The rigor of this methodology allows scientists to move beyond correlations (x and y seem related) to causal statements (x increases y), leading many to regard it as the “gold standard” of evidence.

I recently attended the Urban Services Initiative matchmaking conference, organized by the Abdul Latif Jameel Poverty Action Lab (J-PAL), where I met researchers and practitioners working on important aspects of poverty alleviation. Our discussions were certainly intellectually stimulating, and I learned a lot. But I found myself thinking more about the limitations of the RCT method than about its value. I walked away with a few new reservations:

RCTs are context-specific.

Just because creating report cards for local politicians changed voter behavior in Delhi and urban Uttar Pradesh doesn’t mean it would do the same in Dhaka. Results are rarely universal, and program leaders are left with evidence that an intervention “works in some places but not all,” forcing them to fall back on a judgment call.

RCTs tend to answer meta-level questions.

Example: A medical trial may aim to determine whether a certain drug dosage combination improves a given condition. The study will report whether that combination worked or didn’t, but it won’t tell you how well other combinations (or single therapies) might work. Similarly, RCTs (particularly impact evaluations) answer only for the specific program design tested. At BRAC, we’ve tested the comprehensive bundle of goods, services, and social engagement that we provide to the ultra-poor. Our evaluation found that it is effective in reducing extreme poverty, but the research doesn’t tell us whether the program would be just as effective with fewer household visits from our staff, or whether adding other components would make a greater impact on education. Additional research, often qualitative, is needed to answer specific questions, including why the program worked better for some participants than others.

Or, RCTs answer small questions that are only part of the puzzle.

Some RCTs look at much smaller questions, such as: Should a toilet in a slum be community-operated or privately operated? The answer is quite specific and actionable. Few academics, however, have the patience and interest to answer all the questions that matter to a practitioner who is considering adopting a model.

RCTs are non-additive.

Simply put, RCTs almost always yield pieces of unique puzzles rather than answers that add up to a larger certainty. Even accounting for the caveats of context and program design, RCT knowledge cannot be neatly combined with other study results. Even “gold standard” results, taken together, do not produce a full playbook.

Sometimes trends are clear across a number of RCTs, and occasionally meta-analyses are written to aggregate the information from several studies in the same area (not without controversy). But usually, academics are drawn to the big unknown, not to the practical questions that emerge in the wake of other studies. Little attention is paid to who implements the interventions and how their organizational characteristics contribute to the observed impact (or lack thereof).

RCTs don’t tell you why.

RCTs tell you only whether or not something works, and how well. Why it does or doesn’t work is, from the researcher’s perspective, up for interpretation; for practitioners, understanding why is critical to adopting a new practice. While results are often described as surprising, in reality RCT findings rarely surprise the research team (though they may surprise others, as a recent SSIR poll indicated). Prior to launching a full study, most investigators have conducted focus groups, analyses of existing data, and pilots to make sure that their predictions are well supported. A key input is the experience, intuition, and wisdom of practitioners. These types of due diligence are critical to securing funds and committing to a full-scale RCT (the “real” research). For practitioners, the insights from this pre-research phase, which are rarely shared formally, are the most useful and immediately applicable knowledge gained.

RCTs are an incredible tool for answering some types of questions and producing some kinds of evidence. But they are a gold standard only to academics, not to practitioners. The focus of research should be to yield actionable insights on how to achieve impact (in our case, poverty reduction), not to showcase methods. The push for the creation and use of evidence is positive, but not if it means marginalizing existing operational wisdom. Evidence has limitations, and it must be wedded to creativity, experience, and operational know-how to create and scale effective programs.
