Illustration of shapes going into a funnel, with smiley faces and thumbs-up symbols coming out the bottom. (Illustration by Patrick Fennessy)

Research papers on innovative social programs may be grounded in painstaking and pricey evidence collection, but what good are they if they go unused, or even unread? According to a 2014 World Bank study, nearly a third of the reports available as PDFs on its website had never been downloaded even once.

International research organizations have made some progress in ensuring that evidence generation goes beyond mere publication and instead reaches and informs decision makers. Over the past 18 years, our organizational strategy at Innovations for Poverty Action (IPA) has evolved from a narrow focus on evidence generation, to a linear model of moving from evidence generation to dissemination, and ultimately to a grounded and iterative approach of cocreation between researchers and the end users of research.

At its core, our strategy identifies the right opportunities for evidence to influence real change, partners with end users to answer their questions, uses a research tool kit that goes beyond impact evaluations, empowers local researchers and decision makers, and invests in localized data capacity to ensure that further learning can be sustained. We are no longer solely an evidence-generating organization, but rather an evidence mediator: We work with a variety of implementing partners to collect data and evidence and put them to use.

More specifically, we at IPA apply an integrated framework of the what, who, and how of evidence use. In this way, we ensure that funding for evidence-to-policy work is less about production of papers and more about building meaningful, multidimensional partnerships that ground critical decisions in evidence of effectiveness.

Funders—from small foundations to large ones, to government agencies and multilateral development banks—must better understand how evidence actually gets used and change the way they invest if they want to realize the potential of evidence. And evidence mediators like IPA need to put the quality and depth of our partnerships on the same level as the quality of the evidence we generate if we want to ensure that evidence is actually used.

What, Who, and How

Through our work in more than 20 countries with partners at varying levels of experience with data, IPA has learned that encouraging evidence use depends on context, especially on what we call evidence readiness: the preexisting and ongoing experience with evidence, data, and the application of research to practice in each country, sector, or institution where we work.

Our evidence readiness framework ranges from contexts of unreliable data (working with a partner who has almost no access to reliable data) to contexts of rich data and evidence use (partners whose creation and use of data and evidence are regular and ongoing). Unsurprisingly, the vast majority of interested partners fall in a middle range. Partners who have already worked on concrete examples of applying evidence to program and policy design have the further opportunity to build their own capacity for generating evidence and scaling evidence-informed programs, thereby reducing the role of evidence mediators.

The framework teaches users not to treat all contexts uniformly and to see opportunities to pursue evidence use in a wide range of settings. Because of this contextual variation, the pathway to evidence uptake is neither linear nor uniform, but our experience suggests that finding high-impact opportunities (what), building the ecosystem to support evidence use (who), and leveraging targeted tools (how) can help focus efforts where they will have the most impact.

The what of evidence use: Finding high-impact policy opportunities | Not every opportunity in international development research has strong potential for evidence use. IPA prioritizes opportunities that meet four criteria: an existing body of research to build on, an opportunity to influence important decisions, existing relationships, and existing funding for implementation.

For example, IPA has partnered with the Rwanda Education Board (REB) since 2014. Our prioritization framework applied particularly well when the REB—together with IPA and other partners—took the opportunity to centralize teacher recruitment and rewrite its human capital strategy to incorporate both evidence and data. This opportunity met all four criteria. A strong body of cocreated evidence around performance contracts for teachers was already in place. In addition, IPA already had strong working relationships with critical people in Rwanda’s education ecosystem. What’s more, important policy decisions were about to be made: Rwandan officials were preparing to rewrite the country’s teacher recruitment and deployment strategy. Finally, the evidence-based program would be cost-neutral for the government in the medium term, making funding for implementation a surmountable issue.

The who of evidence use: Equipping an entire ecosystem with evidence | Building a culture of sustained evidence use requires a broader approach than finding one particular evidence champion. It means engaging the whole ecosystem of relevant actors and appealing to each one’s incentives for reaching its own impact goals. This ecosystem includes technical staff across departments and organizations, more senior ministry officials and political leadership, and multilateral or bilateral funders and their government counterparts.

We have also found that partnering with researchers, policy makers, and practitioners from low-to-middle-income countries, who have the capabilities and insight necessary to generate and apply the most relevant evidence in their context, can accelerate the process. This strategy is typically more fruitful—and equitable—for evidence brokers than privileging their own perspectives or relying on “expertise” that is not grounded in the local context.

The how of evidence use: Clearing a pathway for evidence use, then equipping partners to follow through | Even when the right opportunities for evidence use are identified and the right coalitions are built, evidence can still go unused when the relevant parties commit to evidence collection but fail to do the complementary work that would actually lead to its use. For example, researchers can focus on the causal mechanisms at work in an intervention but fail to collect data on crucial programmatic details about delivery and implementation. Or they can draw up a map of relevant stakeholders but fail to engage them or formulate action plans. Just as it is a mistake to think a single evidence champion can bring about transformational evidence use, it’s erroneous to think that partial investments in the how will bring about impact.

For example, IPA and a large group of researchers from Stanford University and Yale University ran a massive randomized controlled trial (RCT) last year on improving mask-wearing in Bangladesh to prevent COVID-19. The model we tested more than tripled mask-wearing, and that effect persisted beyond the intervention. Since this approach had the power to save thousands of lives at very low cost in the middle of a case surge in South Asia, we shifted from research to large-scale implementation very quickly. The first scale-ups—to four million people in India by the Self-Employed Women’s Association (SEWA) and to 81 million people in Bangladesh by BRAC—needed an urgent, easily deployable monitoring tool to understand whether the program could work in different contexts and at scale, and to inform management along the way. If we had not invested in this monitoring, we might not have been able to persuade more partners to take this on—and we certainly would have missed critical gaps in implementation at such a large scale.

Doing Better

Generating research—whether RCTs, data, or any other kind—is only half the battle and serves no purpose if the evidence collected isn’t used. The translation to evidence use—identifying the right opportunities, equipping the ecosystem, and using all the right tools—is currently both unstructured and underresourced. Funders and recipients need to commit to a full evidence-to-policy cycle, in which they have a plan for evidence use and are held accountable to the outcome. To achieve this goal, investments need to be partnership-focused, flexible, long-term, cost-effective, and based on data-driven learning about what works to spur evidence use.

Local partnership-focused | The organizations that create evidence are not always those that use it, and this mismatch can create tricky funding scenarios. But granting agencies can use their leverage to secure partnerships between evidence-creating and evidence-using entities and hold them accountable for completing an evidence-use cycle. They can also ensure that the funding for implementing an evidence-informed program doesn’t run out at the very moment that evidence supporting its use emerges—a Kafkaesque outcome that we have experienced all too often. Locally based evidence mediators can broker these partnerships, advocate for effective use of funding, and ensure that knowledge is not lost in the transition between its generation and use.

Flexible and long-term | Funding for evidence use needs to be outcome-focused. It should incorporate flexibility around short- and medium-term outputs, since the pathways to policy change are varied and intertwined, and it also needs to last over the long term to secure the impact sought. Windows for influence over policy can open and close quickly, often through events outside the control of funding recipients. Funding milestones that are too rigid pose barriers to making the most of data and evidence collection in shaping outcomes.

Evidence-informed and cost-effective | Funders and recipients should commit to learning what works for achieving evidence use and pursue evidence-informed, cost-effective evidence-to-policy models. Agencies that support evidence use in global development, such as USAID Development Innovation Ventures, Global Innovation Fund, and the new Fund for Innovation in Development, apply a return-on-investment perspective when thinking about their grants. Nobel laureate economist Michael Kremer, the scientific director of USAID Development Innovation Ventures, and his colleagues have done valuable research into which investments in development innovations have paid off.

We must build on such work by assessing the relative efficacy of various evidence-to-policy strategies. As far as we know, this kind of learning and evaluation is not being done systematically. As an evidence-informed development community, we can do better.

