A magnifying glass over a student sitting at a school desk (Illustration by Nathalie Lees) 

A majority of children in low- and middle-income countries (53 percent) cannot read and understand a simple story by the end of primary school. This global learning poverty threatens to undermine the achievement of all the United Nations’ Sustainable Development Goals—the 17 priorities for global prosperity that UN members committed to achieve by 2030. To make progress, governments and donors alike will need not only to commit more funding toward improving learning, but also to ensure that every dollar committed is used as effectively as possible.

The global education community is reaching a consensus on the importance of generating and using more evidence to better inform funding and policy decisions. A range of development agencies are putting significant resources into expanding their evidence and knowledge initiatives, such as the Global Partnership for Education’s Knowledge and Innovation Exchange (KIX) and the Global Education Evidence Advisory Panel (GEEAP), convened by the World Bank and the UK’s Foreign, Commonwealth & Development Office (FCDO).

This evidence movement was galvanized by the 2019 Nobel Memorial Prize in Economic Sciences awarded to Esther Duflo, Abhijit Banerjee, and Michael Kremer for their work adapting the method of randomized controlled trials (RCTs) to the field of global development. The prize committee noted that the use of RCTs has “considerably improved our ability to fight global poverty” and “transformed development economics.” 

Although funders and policy makers generally agree that evidence-informed programming is important, they often underestimate just how important it is. We reviewed evaluation data for 71 programs, compiled by the GEEAP in a comprehensive search of publicly available evidence from around the world, to compare their cost-effectiveness (i.e., impact on learning per dollar spent).

The takeaways were striking. Around half of these studies found that the programs evaluated had zero or negative impact on learning, making them not cost-effective by definition. Among those with a positive impact, the weighted average impact per child per dollar spent in the top quintile was approximately 100 times greater than in the lowest quintile, and 9 times greater than the median of the available studies.
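The cost-effectiveness metric underlying this comparison is simply learning impact divided by cost per child. The sketch below uses invented numbers (illustrative only, not the GEEAP data) to show how the metric is computed and why zero-impact programs are excluded by definition:

```python
# Hypothetical illustration of the cost-effectiveness metric discussed above:
# impact on learning per dollar spent per child. All figures are invented
# for illustration; they are NOT the GEEAP evaluation data.

def cost_effectiveness(learning_gain_sd: float, cost_per_child: float) -> float:
    """Learning impact (in standard deviations) per dollar spent per child.

    A program with zero or negative impact is not cost-effective
    by definition, regardless of how little it costs.
    """
    if learning_gain_sd <= 0:
        return 0.0
    return learning_gain_sd / cost_per_child

# Three hypothetical programs: (learning gain in SD, cost per child in USD)
programs = {
    "information campaign": (0.20, 0.40),   # cheap, narrowly targeted
    "structured pedagogy":  (0.30, 15.00),  # larger gain, higher cost
    "unconditional grants": (0.00, 25.00),  # no measured learning impact
}

for name, (gain, cost) in programs.items():
    print(f"{name}: {cost_effectiveness(gain, cost):.3f} SD per dollar per child")
```

Note how the cheap, targeted program dominates on this metric even though its absolute learning gain is smaller, which is exactly the pattern the GEEAP comparison found in the top quintile.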

There are several important conclusions to draw from these data. First, interventions that seem thoughtfully designed can have counterintuitive results, so we must follow the evidence of what is actually working, not what we think will work.

Second, the value of designing the right interventions is not trivial or marginal—it can transform the impact on beneficiaries’ lives by an order of magnitude. Whether governments and donors succeed in identifying and supporting the most cost-effective programs in the local context will matter far more than any efficiency improvements in areas that often garner more attention, such as squeezing modest savings out of procurement or program management costs.

Third, we have an enormous opportunity to improve the impact of our investments in education systems. Doing so will be critical if we are to have any chance of addressing the global learning crisis.

One way we can seize this opportunity is by using outcomes funds. In an outcomes fund, implementers are contracted to achieve certain pre-specified results for beneficiaries, and payment is then based directly on the achievement of these outcomes (rather than the activities, which may or may not have the desired impact). More important, such funds focus attention on evidence of effectiveness, because payment depends on what works. By systematically generating context-specific evidence about what works and only rewarding the impact, outcomes funds can help funders and policy makers navigate the minefield of program effectiveness and shift funding toward the most effective approaches.

Local Context Matters

Looking at these stark numbers about impact, the reader may momentarily wonder: “Can we simply pour all of our development dollars into the top-quintile interventions, using this global evidence base?” Sadly and predictably, strengthening education systems is far from this simple, for at least two reasons.

First, interventions in the top quintile are often very low cost and address specific issues rather than deliver systemic improvement. For instance, one program involved providing information to communities in Madagascar on the earnings gains they could expect from education, which led to a significant increase in motivation and student performance and cost very little. But once you’ve done this, you can’t do more of it in the same communities and expect benefits to accrue at the same rate. Nor can you expect to do this in a place where communities are already aware of the benefits of education and expect to see the same results.

This point brings us to the second reason: Evidence about the effectiveness of interventions is difficult to generalize across contexts. While there may be a generally consistent set of conditions required for children to learn, the barriers to learning that need to be addressed will often differ from one place to the next. This problem (which experts call a lack of external validity) can be driven by a range of idiosyncrasies at national and local levels, including the focus of historic investments, nation-shaping crises such as civil war or epidemics, cultural norms, and many other factors. The importance of context is also clearly borne out in the evidence we reviewed; the cost-effectiveness of a particular intervention varies widely from one country to the next.

Advocates of pay-for-success finance (in which funding is tied to independently verified results) will commonly cite certain arguments in favor of this approach, such as increased accountability and focus on the desired impact for beneficiaries, increased flexibility to adapt programming based on what is and isn’t working in the local context, improved transparency of performance, and, ultimately, greater impact and value for money for donors. Outcomes funds build on this concept by contracting multiple implementing partners under a common funding framework, lowering transaction costs, and allowing multiple interventions to be tested simultaneously. 

However, the generation and use of context-specific evidence is an often-overlooked rationale for outcomes funds. When they are designed well, the benefits of evidence extend well beyond just determining payment.

I cofounded the Education Outcomes Fund (EOF), together with former Tunisian cabinet minister Amel Karboul, impact investing pioneer Sir Ronald Cohen, and former UK Prime Minister Gordon Brown, in order to scale up outcomes funds for education and skills, in partnership with governments around the world. The generation and use of context-specific evidence runs strongly all the way through the EOF model, from the design of programs, to the selection of interventions and implementing partners, to the data-driven adaptation and course correction during implementation, and finally to the determination of what to replicate and scale at the end of the program.

In EOF’s first programs in Ghana and Sierra Leone, an annual RCT will be conducted by an independent evaluator. Implementers will use this evidence, as well as their own performance management data systems, to adapt and improve their interventions’ impact, with a consistent focus on what is and isn’t working to improve learning and the lives of beneficiaries. Accountability for outcomes (rather than activities) shifts decision-making closer to the front line, providing the flexibility and incentives to tailor interventions to the local context and to avoid the rigid top-down programming that leads to ineffective efforts.

EOF will make payments in proportion to the impact on learning as determined by the RCT, up to a pre-agreed cap. Investors typically (but not necessarily) pre-finance the intervention in a structure known as an impact bond and are repaid only on the basis of rigorous evidence of outcomes achieved. This structure tightly aligns implementers’ incentives with beneficiaries’ interests and transfers the financial risk to the private sector, which must absorb the cost if results are not delivered.
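The payment rule described here, proportional to independently verified impact and capped at a pre-agreed ceiling, reduces to a simple formula. The sketch below uses hypothetical contract terms (the price per unit of learning and the cap are invented, not EOF’s actual figures):

```python
# Hypothetical sketch of a capped, impact-proportional outcome payment.
# The pricing parameters are invented for illustration and are NOT
# EOF's actual contract terms.

def outcome_payment(verified_impact_sd: float,
                    price_per_sd: float,
                    cap: float) -> float:
    """Pay in proportion to RCT-verified learning impact, up to a cap.

    Zero or negative verified impact means no payment: the investor
    who pre-financed the program absorbs the loss.
    """
    if verified_impact_sd <= 0:
        return 0.0
    return min(verified_impact_sd * price_per_sd, cap)

# Hypothetical terms: $5M per 1.0 SD of verified learning gain, capped at $2M.
print(outcome_payment(0.5, 5_000_000, 2_000_000))    # hits the cap
print(outcome_payment(0.25, 5_000_000, 2_000_000))   # proportional payment
print(outcome_payment(-0.02, 5_000_000, 2_000_000))  # no impact, no payment
```

The cap limits the funder’s total exposure, while the zero floor is what shifts delivery risk onto the investors who pre-financed the work.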

Through this approach, outcomes funds make significant, new, context-specific evidence publicly available to policy makers that shows which interventions have delivered the greatest impact, at what cost, and why. A broader program evaluation (or learning agenda) also ensures that we have a holistic understanding of the impact of each intervention, beyond the payment metrics.

For example, in Sierra Leone the outcomes fund will contract NGOs to support public primary schools in improving learning levels through a variety of interventions. The government has committed to scaling nationally the most effective interventions from the five clusters (of approximately 60 schools each) in the EOF program. Interventions have been restricted to those the government can afford to scale, ensuring a strong, viable, and predesigned pathway from evidence to policy.

Beyond the Data

Not everything that matters can be measured, and the use of data and evidence must always be understood in that context, recognizing the broader objectives of a program and the more complex lives and needs of the people we hope to serve. An excessive focus on measurable test results may sideline the importance of other critical aspects of a child’s development, such as their social and emotional skills and well-being. Schools provide broader value to families, communities, and society.

By design, impact bonds and outcomes funds demand that some results be prioritized over others; in a world of trade-offs, creating a strong focus on what matters most is justified. But those designing them must make every effort to ensure that the payment metrics broadly measure what matters and, to the extent that is not possible, find other ways to ensure that unintended consequences are avoided. As the field advances, we will continue to learn how to ensure that evidence is being used to achieve the best possible impact with limited available funding, while also actively managing the risks and limitations of this approach.

Read more stories by Jared Lee.