Our organization, GiveWell, seeks to identify evidence-backed, underfunded, transparent charities serving the global poor. GiveDirectly is one of the best charities we've found by these criteria.

In a recent SSIR article, Kevin Starr and Laura Hattendorf compare GiveDirectly unfavorably to other nonprofits, and call into question the value and impact of unconditional cash grants. Part of their argument is simply misleading: They use a nonstandard, problematic metric to make strong investment returns appear weak.

Other parts of their argument (which relate to a previous post by Starr) reflect a deep disagreement between our work and theirs. We at GiveWell seek out charities that can show strong evidence of strong impact; Starr and Hattendorf appear to seek out charities that claim extraordinary impact, based on evidence that relies too heavily on personal observations they have made in the field.

We encourage Starr and Hattendorf to make the details of their impact claims public so that we can have a public back-and-forth about the quality of the evidence they cite, rather than being asked to defer to their intuitions.

A misleading metric for investment returns

Starr and Hattendorf are unimpressed by returns on cash grants. They examine one study—an evaluation of a large-scale randomized cash transfer program in Uganda—and write that " ... the unconditional cash grant produced $1.03 of additional income over 3 years per donor dollar, essentially a wash." They also say that GiveDirectly's report shows that "the income return per donor dollar on a $500 grant is less than $1. Ouch." Our own report frames things differently, claiming 30-39 percent annual returns in the first case and 7-14 percent returns (for a particular investment) in the second. Where does the discrepancy come from?

The figure Starr and Hattendorf cite is not standard for assessing returns. It comes from "calculating the amount of additional income over 3 years, divided by the amount of grant money it took to generate it: ‘the income bang for the donor buck.’" They do not explain why they chose 3 years as the relevant timeframe, and they seem to assume that the entire amount of the cash transfer—including whatever durable assets are used to generate increases in income—vanishes after this time. To illustrate how problematic this approach is, consider:

  • If I invested $100 in the stock market and realized $10 in gains in each of the following 3 years, most people would consider me lucky. Starr and Hattendorf would instead calculate a return of $30 on an investment of $100—or only $0.30 per dollar invested. This figure ignores the original $100 of principal, which I would still hold.
  • More concretely, imagine that (to use numbers cited by Starr and Hattendorf) someone bought a metal roof for $400 and that roof saved them $37 per year on upkeep. After 3 years, they would have saved about $111 and would still own the metal roof, which likely would have retained much of its value. Complaining, as Starr and Hattendorf do, that "… it takes a decade for people to realize any savings" ignores the fact that the metal roof is itself a form of savings, and that saving via durable goods is quite common in the developing world.

The "income bang for the donor buck" calculation is flawed and inferior to the more common approach of estimating total discounted cash flows (including return of principal) relative to the initial investment.

Strong, evidence-backed impact vs. extreme, asserted impact

We found the recent study on GiveDirectly impressive. We wouldn't have expected a one-year study of cash transfers to find significant impact on every measured indicator: because cash transfers can be used for so many different things, their effects are spread across indicators, and the effect on any one indicator may be too small to detect at realistic sample sizes. This puts cash transfers at a disadvantage in these sorts of studies relative to interventions that clearly target a single indicator (the toy simulation below illustrates the point). But the study did show some impressive outcomes, including the improved food security Starr and Hattendorf discuss.
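
The simulation (entirely our own construction; the sample size, effect size, and indicator count are made-up, illustrative numbers) holds the total effect fixed and either concentrates it on one indicator or spreads it evenly across ten. With the same sample, a standard t-test detects the concentrated effect almost always and the spread-out effect rarely.

```python
# Toy simulation: a fixed total effect is easy to detect when it lands
# on one indicator and hard to detect when split across ten.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500             # participants per arm (illustrative)
total_effect = 0.4  # total standardized effect, however it is split

def power(effect_per_indicator, trials=2000, alpha=0.05):
    """Fraction of simulated trials in which a two-sample t-test on a
    single indicator rejects the null at the given alpha."""
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(effect_per_indicator, 1.0, n)
        if stats.ttest_ind(treated, control).pvalue < alpha:
            hits += 1
    return hits / trials

print(power(total_effect))       # all on one indicator: power near 1.0
print(power(total_effect / 10))  # split across ten: power roughly 0.1
```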

Starr and Hattendorf downplay this result, focusing on the fact that cash transfers did not completely eliminate food insecurity. They write: “This isn’t food security; it’s a little less food insecurity.” It seems that we have very different expectations for a charity.

Starr and Hattendorf point to a variety of charities that they claim can deliver far superior results; they say that One Acre Fund can make back a donation (via improved income) almost 4-fold within 3 years, that KickStart and Proximity Designs can make it back 10-fold in the same period, and that VisionSpring far outshines both with a 60-fold return within 3 years.

If these claims were true:

  • They would be far out of line with any impacts we're aware of from any rigorous studies conducted on anti-poverty interventions.
  • They would imply some serious for-profit investment opportunities (see the back-of-the-envelope calculation after this list).
  • They would seem to imply that even supporting One Acre Fund, KickStart, or Proximity Designs is a mistake when the figure associated with VisionSpring is so much more impressive.
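
Here is that calculation (ours alone), converting each claimed multiple into an implied internal rate of return, under the simplifying assumption that an "X-fold in 3 years" claim means even annual income flows of X/3 per donated dollar, with nothing retained afterward.

```python
# Implied internal rates of return for the claimed 3-year multiples,
# assuming (our simplification) even annual income flows of X/3 per
# dollar donated and nothing retained after year 3.

def irr(flows, lo=0.0, hi=100.0, tol=1e-6):
    """Rate r at which the discounted flows equal the $1 outlay,
    found by bisection (NPV is decreasing in r on this interval)."""
    npv = lambda r: sum(f / (1 + r) ** t for t, f in enumerate(flows, 1)) - 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return lo

for label, multiple in [("One Acre Fund", 4), ("KickStart/Proximity", 10),
                        ("VisionSpring", 60)]:
    print(f"{label}: ~{irr([multiple / 3] * 3):.0%} implied annual return")
```

Under these assumptions, the claimed multiples correspond to annual returns of roughly 120 percent, 330 percent, and 2,000 percent, respectively. Even the lowest would be a spectacular return for a for-profit investor; the highest is essentially unheard of.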

In short, these are extraordinary claims. What evidence accompanies them?

The analytical cases we’ve seen from these groups are uncompelling, and for some, no case has been shared at all (details on our website). In a previous post, Starr implies that much of the relevant evidence relies on observations from personal experiences in the field. He also implies that sufficiently increasing the amount of time we spend in the field would make us believers in these figures.

We disagree, for the following reasons:

  • We've sent many different staff on a variety of extended, documented field visits and found them extremely valuable, but we've also consistently felt that such visits are better for raising questions than providing answers. There are simply too many ways that even extensive periods of observation can fail to provide the whole picture.
  • It is far from the case that people who spend more time in the field than we do agree with Starr and Hattendorf’s views. In fact, the people most associated with promoting more reliance on randomized controlled trials (often called "randomistas") have, by and large, spent enormous amounts of time in the field.
  • Rigorous studies on popular programs—for example, microfinance—have not vindicated the enthusiasm of the people with the most firsthand experience or the results of less-rigorous studies (and have never, to our knowledge, shown impacts near what Starr and Hattendorf claim).

We have a sense of what field visits can provide, and while we think they have an essential place in evaluation, we have no plans to make them our main activity.

Nonetheless, Starr and Hattendorf do have a possible path to changing our minds about their preferred charities: They can share the details of their impact claims, along with the data that went into them, the details of how it was collected, and explanations of which particular pieces require direct observation to appreciate. (They could even address the latter by providing photos and notes from their own field visits, as we do.) We don't require randomized controlled trials (our process makes extensive use of softer evidence), merely an open and detailed discussion of the case for impact and its strengths and weaknesses, of the kind we provide for the charities we recommend.

We would happily review such information. We would be thrilled to change our views and to promote top charities more extraordinary than we had guessed was possible.

Such an outcome would be well in line with a major big-picture goal of GiveDirectly: to establish a strong, evidence-backed "benchmark" by which to compare charities, and thus raise expectations for how strong a charity’s public case ought to be before one accepts its claims of impact. Simply asserting enormous benefits is no longer enough, if it ever was.
