There have been many thoughtful responses to our original post about GiveDirectly and unconditional cash transfers: one from Chris Blattman, who has done some of the most important and interesting research on cash transfers; one from GiveDirectly; one from GiveWell; and one from Jeremy Shapiro and Johannes Haushofer, authors of the study we cited.

We gave this particular hornet’s nest a poke in the first place, and so we owe a response to the helpful points they raise. There seem to be three big points of contention:

1. Return on investment

This is one of those cases where perhaps everybody’s right. It’s all about your point of view. When they speak of the rate of return on investment, the GiveWell and GiveDirectly guys are talking about a family’s return on their investment of the cash windfall. From that vantage point, a 28 percent return on investment in a year is wonderful. It’s a perfectly valid way of looking at these results as long as the numbers are real.

Our point of view is that of the early-stage philanthropic investor. We’re supposed to be looking at the impact bang per philanthropic buck. We came up with our “3 years of additional income per donor dollar” as a shortcut to measure both whether an intervention is working and whether its impact will last. In the chaotic world in which we invest, discounted cash-flow models built on projections and assumptions are almost always flawed. Shocks to the system are very common, and they have profound impacts on both the longevity of assets and projected returns. Think droughts where there is no irrigation, prime income earners who die of AIDS, machinery that breaks down and can’t be fixed, and animals that die in the absence of vet services. If there is real data to support sustained and verifiable impacts beyond 3 years, we’re always willing to consider it.


From the philanthropic investor’s point of view, then, a 28 percent return on $500 each year for 3 years would mean that a subsidy of $500 would lead to a 3-year impact return of $420. While we always take the target population and overall context into account, anything less than 1:1 is usually not enough for us to get involved.
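To make that arithmetic explicit, here is a minimal worked version. One assumption is ours, for illustration only: that the 28 percent gain recurs at a flat rate in each of the 3 years.

```latex
% Worked version of the impact-per-dollar arithmetic above.
% Assumption (ours, for illustration): the 28 percent gain recurs
% at a flat rate in each of the 3 years.
\begin{align*}
  0.28 \times \$500 &= \$140 \text{ of additional income per year} \\
  3 \times \$140    &= \$420 \text{ of additional income over 3 years} \\
  \$420 / \$500     &= 0.84{:}1 \;<\; 1{:}1 \quad \text{(below our usual bar)}
\end{align*}
```

On this reading, the same 28 percent figure that looks wonderful from the family’s vantage point falls short of the donor’s 1:1 threshold.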

Two points of view: It’s up to the putative donor to decide which is most compelling.

2. Is it experimental?

This too depends on one’s point of view. The idea of cash transfers has certainly been tested in many different forms and settings, and most of those tests have shown demonstrable impact. The notion of cash transfers is no longer experimental.

However, the approach GiveDirectly takes is new: it works with a specific population in a specific way. Presumably that is why they (admirably) set it up as a randomized controlled trial (RCT) with Innovations for Poverty Action (IPA), that is, as an experiment. What seems at issue here is whether the experiment is now over or whether we need to continue it to understand the degree to which cash transfers do or do not show lasting effects. If the most important question is the latter (and we think that it is), then the IPA RCT represents early midline results, which at most indicate that the experiment is worth continuing. As GiveWell puts it on its website: “While GiveDirectly has been accumulating more evidence on this question in addition to a recently released RCT studying its activities in Kenya, there is still limited evidence on the humanitarian impact of the type of transfers (large, one-time transfers) that GiveDirectly provides, particularly the long-term impact of such transfers.” We agree.

3. What is (good enough) evidence?

Given the complicated business of fighting poverty, this isn’t an easy question, and reasonable people can disagree on the answer. Since 2008, Mulago has given more than $1.5 million in unrestricted funding to J-PAL and IPA, and continues to support both organizations’ use of RCTs, both to explore what works and to turn good evidence into action. While we see the power of RCTs, we’ve also come to understand their limitations, and we believe that there are sometimes better ways to systematically measure impact in growing, evolving enterprises that work in multiple geographies. It’s always messier and never quite as rigorous as we wish, but it is also true that the apparent precision and thoroughness of RCTs can sometimes mask flaws in design and methodology that limit their utility in the real world. RCTs can give us statistically significant numbers, but they don’t necessarily tell us anything about the significance of those numbers. As investors in social impact, we need to understand both.

Finally, despite what responses to our original post imply, we don’t believe that a few days onsite with an organization somehow allows us to ascertain impact, nor do we go by what an organization posts on its website. We use field visits to dig deep into measurement and evaluation systems, understand context, and develop judgment. Field visits are just one (relatively small) piece of how we assess organizations and their impact.

A note to conclude: with regard to evidence of impact in the social sector, there are still three camps of givers: 1) those who believe that peer-reviewed RCTs are required, 2) those who think that smart organizations can often do a credible job of measurement themselves, and 3) those who don’t bother with either. Sadly, the last camp still makes up the majority, and while those of us in the first two camps need to keep each other honest, our major focus should be to shrink the ranks of those who don’t even try.

Thanks, everyone.
