Pascaline Dupas had a cool idea. She would offer free insecticide-treated mosquito bed nets to pregnant women in western Kenya who signed up for prenatal care at government clinics. Not only would their children get to sleep under a bed net, but once in prenatal care they would also get malaria prophylaxis and, if needed, treatment to prevent fetal transmission of HIV. An economist affiliated with the Abdul Latif Jameel Poverty Action Lab in Cambridge, Mass., Dupas maintained a healthy skepticism as to impact, and made sure to collect the data that would show any change in child mortality.

Her idea worked. Sign-up for prenatal care skyrocketed, and Dupas’s direct observation showed that 85 percent of the women’s children slept under nets. Several randomized trials in Kenya had shown that sleeping under a net reduces child mortality by 20 percent, so Dupas could make a persuasive case that the cost per child’s life saved was about $600 (the cost of the intervention divided by the additional number of children sleeping under nets, divided by 0.2 children saved per net). She had kept Kenya’s Ministry of Health in the loop, and as a result of her lobbying efforts, clinics in western Kenya will soon distribute free nets.
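The arithmetic behind that $600 figure can be sketched as follows. Only the 0.2 lives-saved-per-net-using-child factor (the 20 percent mortality reduction from the randomized trials) comes from the article; the dollar amount and net counts below are hypothetical placeholders chosen to reproduce the $600 result.

```python
# Sketch of the cost-per-life-saved arithmetic described above.
# The total cost and child counts are illustrative placeholders, NOT the
# study's actual figures; only the 0.2 factor (20 percent mortality
# reduction per child sleeping under a net) is taken from the text.

def cost_per_life_saved(total_cost, additional_children_under_nets,
                        lives_saved_per_child=0.2):
    """Intervention cost divided by additional children sleeping under
    nets, divided by lives saved per net-using child."""
    cost_per_child = total_cost / additional_children_under_nets
    return cost_per_child / lives_saved_per_child

# Illustrative inputs: $120,000 spent, 1,000 additional children under nets
print(cost_per_life_saved(120_000, 1_000))  # -> 600.0
```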

Dupas’s work is a prime example of a scalable solution—an idea that can grow to make a big dent in a big problem. At the Mulago Foundation we make philanthropic investments in scalable solutions for health, development, and conservation in the Third World, and we invested in Dupas’s work when she was a Rainer Arnhold Fellow in the foundation’s program to help social entrepreneurs turn good ideas into lasting change that will go big.

Why did we think her idea could go big? Or more generally, how can you tell whether an idea is scalable? We’re a small shop, so we had to find a simple way to screen for scalability. A while back, our friend Martin Fisher put forward the idea that scalable solutions have four critical characteristics: They must have real impact, and must be cost-effective, sustainable, and replicable. Fisher used his idea to build KickStart, an iconic organization that has pulled thousands of African farmers out of poverty, and we adapted his criteria to guide our philanthropic investments.

Real Impact
If an intervention can’t demonstrate real impact, it shouldn’t be scaled up—period. We don’t invest in organizations that don’t measure their impact: They’re flying blind, and we would be, too.

We don’t want a flood of data, either. We just want the right data. To ascertain your organization’s real impact, you need to measure the right thing, measure it adequately, and make the case that change was in fact due to your efforts.

Start with knowing your real mission. If you’re distributing mosquito nets, for instance, what matters is not how many nets you get out the door, but whether malaria rates and mortality drop. If you’re making microloans, what matters is not whether people pay you back, but whether they make more money. Impact isn’t the activities you completed, the services you delivered, the attitudes or even the behavior you changed; it’s the result of all of that. And it isn’t vague terms like “lives affected,” or a kitchen-sink compendium of all the possible benefits you can think of; it’s focused indicators that capture the real mission.

Dupas’s real mission was to save kids’ lives in Kenya. She used proven survey methods, rigorous before-and-after data attributable to her intervention, and a big enough sample size. Because randomized trials had made the connection, she could use a verified behavior—kids sleeping under nets—to make an accurate estimate of impact. Better yet, we know her estimate is a conservative one, since she didn’t even count the lives saved by the prenatal care.

Not everyone needs to do a randomized trial. But you do need a way to show the change that you effected. Sometimes you can get away with matched controls, or even simple before-and-after data. Dupas could prove the increase in bed net use, but she also needed data from randomized trials, because there are many other causes of child mortality.

Not all of our start-up organizations have proven and attributed impact yet, but all have identified what to measure and have a process in place to measure it. In fact, when they’re needed, we like to pay for formal impact studies.

Cost-Effectiveness
Even with real impact, a solution still won’t scale if it costs too much. Dupas’s intervention was cheap: $600 per life saved is about half the cost of some of the best interventions out there. It’s a solid, conservative number that gives us a meaningful sense of value in light of the real mission.

We really want a number. The number doesn’t tell you whether something is worth doing, just what it costs to do it. Cost per impact is our social return on investment (SROI), the single most important number to guide our decisions and the best indicator of our own impact.

No two settings are the same, so we have to evaluate each number in context, but sometimes we can use it to make meaningful, albeit rough, comparisons. For example, we benchmark nongovernmental organizations working to get one-acre farmers out of poverty—Vipani, One Acre Fund, KickStart, and IDE, for instance—by calculating the ratio of average three-year increase in farmer income to donor cost per farmer.

With a mature organization like KickStart, or a fully completed project like Dupas’s, the SROI calculation can be pretty simple: Divide the total amount expended by the total impact. For newer organizations, that calculation doesn’t really work because of high research-and-development and start-up costs. In that case, what we ask for are projections that can at least stand up to scrutiny.
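The two calculations described above can be sketched in a few lines. All figures below are illustrative placeholders, not data from any of the organizations named.

```python
# Sketch of the "cost per impact" SROI calculation for a mature
# organization or completed project: total donor dollars expended
# divided by total impact (lives saved, farmers out of poverty, etc.).
# Lower is better. Figures are illustrative placeholders.

def sroi_cost_per_impact(total_expended, total_impact):
    """Donor dollars per unit of impact."""
    return total_expended / total_impact

# The farmer benchmark described earlier: ratio of average three-year
# increase in farmer income to donor cost per farmer. Higher is better.
def farmer_benchmark(avg_income_gain_3yr, donor_cost_per_farmer):
    return avg_income_gain_3yr / donor_cost_per_farmer

print(sroi_cost_per_impact(120_000, 200))  # -> 600.0 (dollars per life saved)
print(farmer_benchmark(900, 300))          # -> 3.0 (dollars gained per donor dollar)
```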

When we emphasize SROI, we don’t have to care about organizations’ percentage of overhead or whether they sometimes fly business class. As long as we have a credible number that we like, we know we have good value and can let them decide how money is best spent.

Sustainability
What a sadly abused and weirdly indispensable word! For Mulago, sustainable means that impacts last and continue to grow once the primary intervention is complete; in other words, a solution passes the “walk-away test.” We look for two interrelated things: the behavior that drives impact lasts, and there is an exit strategy for donor subsidies.

The first is about systematically looking at the incentives that drive important behaviors. Are they adequate and will they last? Our experience is that coerced behavior is the least stable (“we locked people out of the forest”); self-reinforcing behavior is the most stable (“we helped them make money from a healthy forest”); and compensated behavior is somewhere in the middle (“we paid them to take care of the forest”). The crucial behavior in Dupas’s project was sleeping under a net—about as self-reinforcing as you’re going to get.

As for exit strategies, there are only three kinds. First, you can hand your intervention over to the government to deliver. Dupas, for instance, designed her intervention so that government clinics would take it up, and she brought the relevant decision makers into the process to make sure it would happen.

Second, you can embed the intervention in the market. KickStart uses donor funds to market its affordable moneymaking technologies, leaving profitable businesses and supply chains in place to continue generating new impact.

Third, you can create a self-perpetuating mechanism for behavior change that allows you to move on. World Relief has an intervention called “mothers’ care groups” wherein peer-to-peer teaching of high-impact household health behaviors creates durable new social norms like exclusive breast-feeding and oral re-hydration that continue to save lives when a project is over. (True self-perpetuating interventions are rare, however.)

Like everyone else, we like to invest in the start-up costs of organizations that can eventually expand into new settings without subsidies. Despite a lot of hoopla, however, few can do that while meeting the needs of the very poor. In the settings where we work, government and market failures must be overcome, and even organizations that no longer need a subsidy in one setting often need philanthropic capital to innovate and extend into new settings. When the impact return on that capital is high, we’re happy to put in our money.

Replicability
Dupas’s intervention passes our “replicability” test, too: It is focused on a clear central idea; it is simple and systematic enough that others can carry it out; it is adaptable to a wide range of settings; and it can leverage something big to drive growth (in this case, government health systems).

We won’t pretend that scalable solutions are easy to find, but in our experience they aren’t that hard to screen for. The better we all get at channeling money to those who really know how to create change at scale, the more we’ll hasten the day when philanthropy starts to operate as an efficient market for impact. And when that day comes, we might just save the world.

KEVIN STARR directs the Mulago Foundation and the Rainer Arnhold Fellows Program. He also practices rural emergency medicine part time.
