Evidence produced from research and evaluation plays an important role in shaping how we address international development challenges. For example, conditional cash transfers are now used around the world as an important lever in the fight against poverty, thanks in part to the rigorous (and ongoing) evidence from the evaluation of Mexico’s Oportunidades program, which shows how the program affects schooling, health, and consumption among participants. Likewise, research using financial diaries—captured in the book Portfolios of the Poor—changed the financial inclusion and microfinance sector by revealing that the poor are adept money managers leading complex financial lives. This kind of insight continues to inform new products and services, in areas such as savings and insurance, for those living in poverty.

The need for more and better evidence has never been greater as the world adopts an ambitious set of post-2015 Sustainable Development Goals. Future investments in international development activities will need clear direction to meet the challenges the goals pose—and evidence can and should help guide interventions.

Unfortunately, the evidence researchers and evaluators produce often doesn’t deliver on its promise. Why does research and evaluation in international development so often fail to generate the kinds of practical and compelling insights needed to drive programming and policy? 

This is a question we’ve been thinking about recently at the MasterCard Foundation as we work to improve how we fund research and evaluation, and we’ve recognized a need to rethink and better articulate our rationale.


What needs fixing, exactly? As it stands, those who manage or participate in international development work have very different needs and incentives than those who conduct or fund research and evaluation. Accountability for how an organization spends money, for example, may drive donors, while the pressure to produce scholarly articles and citations drives many academic researchers. Meanwhile, practitioners and implementers need practical advice on how best to design and implement interventions in a highly contextualized environment—something many research and evaluation efforts fail to provide. This misalignment leads to gaps in the relevance, timeliness, and utility of the evidence produced.

As a result, methodological approaches, at times, generate a limited kind of knowledge—“proof” that a particular intervention helped, say, raise income, improve school performance, or reduce vulnerability. For those working on the ground, that kind of proof speaks only to what worked in a particular context. It tells us little about whether the success of the program is replicable, or how government or markets can scale different components of a successful intervention. Finally, and most importantly, we don’t often engage, listen to, or act on local knowledge and expertise in ways that enhance a community’s ability to shape the programs and policies that affect them. 

In our newly published “Research and Evaluation Policy,” we seek to avoid these traps by articulating an approach to research and evaluation that is relevant, ethical, and applied. We see three directional steps we can take to do this better:

1. When commissioning research or evaluation, we need to better understand the knowledge gaps and needs within thematic areas. For example, before investing in research on education and transferable skills (otherwise known as “soft skills” or “life skills”), we worked with the International Initiative for Impact Evaluation, or 3ie, using its new Evidence Gap Map tool to document existing data on peer-to-peer learning models, mentoring, experiential and participatory learning, and other areas. We now also understand where there are gaps in the field—including career counseling, learner-centered teaching models, and teacher training and support. This is helping us and others better prioritize and direct our research investments.  

We can also more clearly identify who might use the data and insights generated through these activities, and involve them in shaping the research. In jobs training programs, for example, we need to work with private-sector employers, as well as training colleges and local governments. Clarity of purpose and audience—and the ongoing engagement of those who use this knowledge—are important tenets of applied research and utilization-focused evaluation. However, these ideas seem stubbornly difficult to implement consistently. Improving practices requires that funders, researchers, and governments collaborate much more deliberately on research design, interpretation, and dissemination to meet the unique needs of diverse groups—especially those we intend to benefit. We must orient time and resources toward these activities, over and above the usual primary focus on data collection and report writing.

2. The knowledge our research and evaluation efforts produce must align with what participants and practitioners need. The dramatic increase in impact evaluations, particularly those using experimental or quasi-experimental methods, is welcome in driving an increased focus on evidence. But linear methods, isolated control environments, and simplistic questions of “what works” can mask the much more nuanced and important questions of “what works, for whom, and under what conditions?” We need to address this larger set of questions to design and tailor effective programs and policies that reach people who are truly excluded. This requires rigor in both quantitative and qualitative methods, integrated and triangulated at multiple levels with more-detailed segmentation analysis. It also requires greater acceptance of a wider set of tools to assess what factors contribute (or fail to contribute) to improvements in people’s lives. The BetterEvaluation collaborative and the UK’s Department for International Development (DfID) are exploring, innovating, and documenting a variety of proven and new approaches that provide robust alternatives or complements to experimental or quasi-experimental methods.

3. We need to ask: Who is producing the knowledge, and whose knowledge do we value? We must get better at utilizing and reinforcing local knowledge and expertise, and ensuring that we return data to communities. This includes, for example, increasing support to local research and evaluation centers of excellence, as well as including national or regional experts in lead roles. It also means making better use of innovative and participatory methods that elevate the voices and agency of people, empowering them to use data to better articulate their views, influence decision-makers, and access opportunities. Finally, we need to go beyond the basic or minimal ethical principles in research and evaluation, and think more creatively about how to improve the experience of participants. The Lean Research initiative that Tufts University and MIT D-Lab are implementing is a promising step—including the stipulation that participating in research should be a “delight”; in other words, the opposite of extractive or burdensome. 

As a funder of international development programming, we have an important responsibility to ensure that research and evaluation are carried out in service of real, on-the-ground impact. The core objective should be to inform decisions that improve people’s lives. It is encouraging to see so many initiatives and conversations embracing this approach—it’s time to seize the moment and push further. The achievement of the new Sustainable Development Goals may well depend on it. 

