The funding landscape is currently dominated by two trends: the old way of heavy, one-size-fits-all grant-reporting requirements and the new way of trust-based philanthropy, which funds with no strings attached. Neither approach supports the iteration and learning required to develop and scale effective solutions.
The problem with heavy reporting requirements is well-known. Implementers face excessive demands and must report against standard indicators and targets that typically focus on program reach. This obligation pushes some to distort their programs to align with the expected indicators and others to expand programs before getting any assurance that they work. One of the most troubling issues is the tendency to stick to a predefined plan rather than developing the most impactful approach through testing, learning, and adapting to challenges and opportunities.
On the other end of the spectrum is trust-based philanthropy. This model has numerous advantages, such as allowing organizations to focus on providing and improving services instead of on reporting. Despite these advantages, some funders, particularly public donors, have to require a level of accountability that makes this option challenging. Even setting aside accountability, the aversion to reporting numbers in trust-based philanthropy may be an overcorrection, as Kevin Starr recently argued in this magazine. Finally, many funders also want to improve future grantmaking or generate ideas and evidence that will benefit the sector. For these reasons, eliminating reporting requirements may not always be an option that a funder will want to consider.
At Innovations for Poverty Action (IPA), our Right-Fit Evidence Unit advises funders and implementers on their approach to monitoring, evaluation, and learning (MEL) to help them achieve greater impact. When funders seek guidance on how to support their grantees in developing and scaling up effective solutions, we often encourage them to take a different, more adaptive approach. Toward this end we have created the Stage-Based Learning Framework to align funders’ reporting and evaluation expectations with programs’ needs as they mature. The approach cuts out unnecessary monitoring and evaluation activities and allows learning needs to evolve as interventions progress toward scaling.
What Isn’t Working
Over the last eight years, we have worked in more than 20 countries with partners across sectors, and we consistently observe programs skipping over or rushing through critical learning steps.
Consider the following fictional but typical scenario: A funder is committed to addressing employment challenges in Africa. Knowing that a lack of skills hinders business growth and development, the funder partners with a large entrepreneur network in one country—an NGO we’ll call Rising Horizons—that wants to run a training program focused on building the skills of small-business owners in its network. With the funding, the NGO hires instructors to provide business skills training to 3,000 small-business owners. The program involves monthly sessions over a year, covering all the business basics: accounting, financial management, marketing, HR management, and formalization.
As part of its commitment to evidence, the funder pays for a randomized controlled trial (RCT) to evaluate the training program’s impact on business outcomes six months after the training.
Just a few months into the training, Rising Horizons notices that trainees are not very engaged in the sessions and hears that they aren’t applying the concepts being covered. But Rising Horizons carries on with the plan, feeling the obligation to run the training for one full year and complete the impact evaluation.
Almost two years after the training started, the data show that it had no significant impact on business profitability or growth. The funder stops supporting the program, and it falls apart.
Unfortunately, this outcome is typical for programs that expand prematurely without the appropriate learning efforts, although the problem can take different shapes for different funders and implementers. Often funders and implementers take a leap of faith that the idea will work, without affording space to get feedback, test, and adjust before large-scale implementation. Sometimes the problem is asking programs to jump straight to an impact evaluation before ironing out kinks. Sometimes it is expecting to go straight from a successful proof of concept to scaling to new contexts or implementers, without being deliberate about adaptation.
The Better Way
After seeing such scenarios repeatedly, we came up with a new framework to guide funders and implementers in moving interventions along the path to scale. This framework departs from one-size-fits-all MEL requirements without ditching metrics altogether.
The framework has five stages:
Ideate | This initial stage is about understanding the problem and coming up with a potential solution. This step may include activities such as needs assessment, literature review, or user testing.
Refine | The second stage is about ironing out the kinks of the intervention through small-scale piloting. The aim is to achieve outputs and early outcomes, such as changes in knowledge and practices, which move quickly and can therefore inform rapid, low-cost iterations.
Prove | After finding a version of the intervention that produces the intended early outcomes, this stage involves an actual impact evaluation. This is the time to test for final outcomes (e.g., for the business owners, profitability and growth) and to assess cost-effectiveness.
Adapt | If the results are positive, one can start talking about scaling up. This step typically requires adjusting the intervention to new contexts and/or for new implementers.
Scale | Once a proven program is operating at scale, learning activities can focus on more typical continuous monitoring to ensure quality implementation.
To illustrate the framework, let’s return to the Rising Horizons scenario. Imagine that the NGO, rather than jumping to delivering the training to thousands and running an RCT, was encouraged by its funder to ideate and refine the program first. Suppose the funder encouraged Rising Horizons to iterate as much as needed, simply expecting a report at the 18-month mark on how well the latest iteration of the training was working.
Further imagine that as a result, during that initial ideate stage, Rising Horizons used a needs assessment to understand what skills the business owners actually lacked and learned that they already had most of the basic business skills. Suppose Rising Horizons also reviewed emerging literature indicating that well-delivered training on soft skills, such as communication, interpersonal skills, entrepreneurial mindset, and personal initiative, has had promising impacts on small businesses.
Then imagine that Rising Horizons refined the design of the program: For more than a year, they tried out different iterations at a small scale, testing hard-skills training, soft-skills training, and both. For each iteration, they gathered data on engagement, knowledge gains, and business practices. (Measuring whether these changes translate into higher profits or business growth takes much longer and requires a proper impact evaluation, and it was not yet time for that.) They refined both the hard-skills and soft-skills versions of the program, improving the curriculum and preparing the trainers to teach it well.
Suppose further that once Rising Horizons reached satisfactory levels of knowledge and practice and felt ready to prove impact, the funder agreed to fund an RCT to measure the impact on business outcomes, assessing the hard-skills program, the soft-skills program, and both together. The NGO learned that the soft-skills training had positive impacts, while the hard-skills training did not. The funder then supported Rising Horizons to expand the soft-skills program to thousands of business owners nationwide.
Looking forward, imagine the funder seeking to expand the approach to a neighboring country, only to realize that the country lacked a large network like Rising Horizons. Only a government agency had the mandate, funding, and staff to support small businesses in this way. The funder decided to help set up a partnership between Rising Horizons and that government agency to adapt the concept to this context. An MEL partner was also brought on to help with prototyping and iterative pilots in several districts. Through this process, the partners ended up simplifying the program content, adjusting the delivery mechanism, and integrating it into an existing program of that government agency.
After the trainees demonstrated large gains in knowledge and practice in this new context, the program was scaled up across this new country, with Rising Horizons and the MEL partner staying on for some time to help ensure quality of implementation. The program then continued to train new batches of entrepreneurs nationwide after the partners left.
This may sound like painstaking work, and it is. But rushing through the process by skipping steps often leads to failure, missed opportunities, and wasted resources.
All funders interested in scaling effective solutions want to use data and evidence to increase the effectiveness of their investments, but it can be hard to know what kind of learning effort to suggest and fund, and when. Based on our work with so many funders and implementers over the years, we hope this framework can provide useful guideposts for developing right-fit learning plans and building stronger programs to meet the needs of people living in poverty.
