I thoroughly enjoyed the intellectual bar fight between Charity Navigator and fans of GiveWell over a recent SSIR blog post. A few years ago, GiveWell posted a no-holds-barred critique of Charity Navigator, and after suitable time to sharpen the knives, Charity Navigator roared back, slamming GiveWell’s approach to philanthropy as “defective altruism.” A vociferous debate ensued, punctuated by calls for civility, chiding over name-calling, and some smart comments on both sides.
This stuff is both healthy and entertaining, which is a pretty great combo. And you’ve got to admire the Charity Navigator guys’ chutzpah: The last time I was paying much attention, they’d handed Greg Mortenson’s Central Asia Institute a four-star rating right before the Three Cups of Tea scandal broke. They’ve yet to deliver on a 2008 pledge to report on impact, and while there is progress, their utility is limited until impact is central to their ratings. I don’t think they’ve seized quite enough moral high ground to call others’ work “defective.”
Anyway, after reading Charity Navigator’s post, I went back to read GiveWell’s original critique, and then ended up noodling around on their website for a couple of hours (which I highly recommend—there’s a ton of interesting and useful material there). When I got to the “Top Charities” page—the centerpiece of the whole site—I almost sputtered coffee all over my keyboard. The page features a grand total of three organizations: Deworm the World, the Schistosomiasis Control Initiative, and GiveDirectly. After all that research and verbiage—three? Really? And two that do essentially the same thing? We like mass-deworming too, but come on.
The GiveWell website says, “We see ourselves as a ‘finder of great giving opportunities’ rather than a ‘charity evaluator,’” but the truth is that they set up a bar and only three organizations got over it. Given that, it seems to me that either: a) the international poverty sector sucks and is not worth their time, or b) they need to get out more. Given my own experience in the sector, I’d have to go with “b.”
GiveWell does its research in the office. GiveWell staffers—none of whom have a background in international poverty work—have visited a total of two programs in the past two years. The approach appears to be something like this: Find an intervention already supported by a bunch of expensive randomized controlled trials (RCTs). Identify an efficient implementer of that intervention. Recommend. Repeat twice. Done. I don’t know why it took all those smart people so long to come up with three recommendations.
I’d like to see the social sector do a lot more RCTs, and Mulago enthusiastically funds three great members of the RCT mafia: JPAL, IPA, and IDInsight. However, RCTs aren’t always appropriate or doable, and there are a lot of other ways to reach a reasonably confident understanding of impact (or lack thereof). Overall, I’m more interested in ongoing internal impact evaluations that feed quickly back into design and operations than ponderous episodic RCTs, but to trust an organization and its methods, you have to get out there and get to know them well.
A case in point: GiveWell said that Root Capital, which provides technical assistance and loans to businesses that buy from smallholder farmers, didn’t have sufficient evidence of impact for GiveWell to even consider it for top charity status. I went to Uganda a couple of years ago as part of our own due diligence. I saw a cotton ginnery that Root Capital had financed and rebuilt in the area laid waste by the Lord’s Resistance Army. All the producers are smallholders; we know how much cotton they are selling now, and we know how much cotton they were selling before, which was zero.
Some version of this happens in most Root Capital sites, with good-quality numbers indicating that a lot of farmers stabilize and/or increase their incomes. The evidence is strong, but given a heterogeneous international portfolio, it’s hard to package it up neatly. No matter how much impact Root Capital generates, its work doesn’t lend itself to an RCT, and so an important solution to the vexing problem of rural poverty will never make the GiveWell grade.
Another example is One Acre Fund, working with 150,000 very poor farming families in Kenya, Burundi, Rwanda, and Tanzania. One Acre provides farmers with the fertilizer, seeds, training, support, and access to markets they need to make a decent yield from their tiny plots. The farmers, in turn, repay costs from the proceeds of harvests. In most cases, farmers triple their yields and, after repayment, double their farm incomes. One Acre measures this by comparing—literally, weighing—the yield from a random sample of current One Acre farmers with the yield of a random sample of would-be One Acre farmers in a similar area where the organization plans to go next. Is it a perfect way to measure? Nope. But if you are in the field, and if you have experience with smallholder farmers, it is utterly clear that something profound has happened. What’s more, it’s a doable, affordable method that the people at One Acre can repeat again and again in the various settings where they work. For funders, it provides a cheap, high-confidence “movie” of what’s going on in four different settings, rather than the expensive one-time, one-setting snapshot afforded by an RCT.
I spend a lot of my own time going on about the failure of the social sector to measure and invest on the basis of impact, so it’s a little weird to find myself criticizing GiveWell. I admire much of GiveWell’s work, and the organization’s insistence on evidence of impact is a service to all of us. It’s just that we—Mulago’s staff, our fellows, and everyone in our portfolio—spend a lot of time and effort neck-deep in the messy, humbling business of measuring real impact in the real world, and GiveWell’s desk-based proclamations can be, well, irritating. Following the RCT trail to find stuff like deworming and cash transfers is easy; to find the impact jackpot, you need to immerse yourself deeply enough in context and methods to make a reasoned judgment. You also have to be a little flexible: Real-world measurement often requires a certain amount of creativity. You can’t just set an impossibly high bar and wait for stuff to show up on your desk. Precision in this business is a mirage, and often a distraction. We want real numbers and real attribution, but we’re happy to take a small hit on accuracy if we get a convincing picture of real impact that we couldn’t have gotten otherwise.
We all need GiveWell: We need their obsession with impact, big brains, and unwavering honesty and candor. But whether you’re Charity Navigator, GiveWell, or Mulago, you’ve gotta get out there or at least follow the lead of someone who does. When I visited the Central Asia Institute in Pakistan, it was obvious within hours that the operation was a shambles; when I went to see Root Capital in Uganda, reams of data took on new meaning. This isn’t an office-desk business. I’ve made some of my dumbest mistakes because I didn’t go to the field first, and some of our best stuff came to our attention only because we were poking around off the grid and far from home.
So: Shut down your computer, turn off the lights, and go.
Read more stories by Kevin Starr.