The Value of Strategic Planning & Evaluation
In this ongoing series of essays, practitioners, consultants, and academics explore the value of strategy and evaluation, as well as the limits and downsides of these practices.

I made several presentations earlier this month to the avid readers gathered at the Tucson book festival, the nation’s fourth largest. As the co-author of a recently published book on how to make smart philanthropic grants, I expected the audience to challenge the relentlessly quantitative methodology I was peddling. Instead, the audience focused on three questions. Each left me dyspeptic:

  1. Do successful grants generally translate to successful public policy?
  2. Do unsuccessful public policies generally translate to unsuccessful grants?
  3. Is Robin Hood doomed to failure, given the impossibility of conquering a formidable foe like poverty?

Why dyspeptic? I heard the questions as rhetorical—the questioners presumed that the answer to all three was yes. In each case, it is no. Allowed to fester, these presumptions threaten to drive funders into ineffective corners.

The first question came in this form: What public policies would I advocate based on successful grants I've overseen at Robin Hood—a charity that raises and spends about $150 million a year to fight poverty in New York City? The presumption: Successful grants imply successful public policies. But funders who make any such presumption venture far beyond their professional headlights.

As a simple thought experiment, assume that XYZ, a funder, pays a middle school to cut the size of its math classes in half. Assume student math skills soar. Put aside issues of statistical reliability. Should XYZ lobby for city and state governments to cut math classes in half?

Personally, I'd draw and quarter any program officer at Robin Hood who leaped to such a conclusion. (Perhaps that's a tad harsh.) But note the obvious misstep. Cutting the size of one math class requires little more than hiring another teacher or two. But if an entire school district, city, or state were to follow suit, where would it find all the new teachers—and how far would it need to lower its hiring standards to fill the open faculty slots? Let me put it this way: Would you rather your child sit in a classroom of 30 with the best math teacher in the school, or in a classroom of only 15 students with a newly hired novice? Few grantmakers confront the macro implications of their decisions. As grantmakers, they don't need to. But until they take the next step—until they marshal evidence to capture macro impacts—grantmakers had best confine their opinions about public policy to the family breakfast table.

The second question rests on a different (though related) presumption: Interventions that have failed as public policy make a poor basis for grantmaking. Perhaps. But there are plenty of individual cases for which that presumption doesn't hold.

Take the case of large, publicly funded job-training programs. The academic literature does not give high marks to programs that operate outside an actual workplace. Workers randomly assigned to job-training programs don't end up earning much more than similar workers who receive no training.

We know that literature at Robin Hood, yet we invest tens of millions of dollars each year in job training. Applying as careful a set of statistical controls as feasible (we have yet to run randomized controlled trials for job-training programs), we deem several of our job-training programs successful. Why might our local experience run counter to the national studies? The simple point is that we don't pay our program officers to pick job-training programs at random. We pay them to pick programs that operate in the right-hand tail of the proverbial bell curve (with programs sorted by quality). Large-scale public policies cannot be that discriminating. Funders ought to be. The failure of an intervention as public policy surely raises suspicion, but smart funders marshal evidence, not suspicions.

That brings us to the third question: scale. Are funders like Robin Hood wasting donors’ money? Is a social malady like poverty too big a dragon to slay?

Repeat after me: Rate of return. Rate of return. Rate of return.

Funders need not solve a social malady to put their donors’ money to successful purpose. An obsessive search for “game changers” can wind up as a fruitless search for the unlikely. Needy families might well be better served by less flamboyant grant-making.

Take the problem of obscenely high dropout rates at community colleges. Nationwide, about 70 percent of students entering community college are required to take remedial courses before they can enroll in college-level courses; fewer than 30 percent ever graduate. In New York City, 80 percent of incoming students enroll in remediation, and only 15 percent of that 80 percent graduate within three years. The problem is sufficiently severe that Robin Hood has created a $5 million prize in search of, yes, a game changer: computer software that will at least double the number of students who earn associate's degrees.

But funders don’t need a game changer to do a powerful amount of good. Students who earn their associate’s degree earn, on average, an extra $8,000-$10,000 a year. For dropouts who might otherwise have earned $25,000 a year, an extra $10,000 is surely transformative. That boost provides plenty of return on philanthropic dollars even without wholesale transformation of the nation’s low-wage labor market.
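The arithmetic behind that "transformative" claim is simple enough to sketch. A minimal back-of-envelope calculation, using only the essay's rough figures (a $25,000 baseline wage and an $8,000 to $10,000 gain); all numbers are the author's approximations, not new data:

```python
# Back-of-envelope arithmetic using the essay's rough figures:
# dropouts earning about $25,000 a year, and an associate's degree
# adding $8,000-$10,000 a year. These are approximations taken
# from the text above, not new data.

def earnings_boost(baseline: float, gain: float) -> float:
    """Proportional income gain relative to the baseline wage."""
    return gain / baseline

low = earnings_boost(25_000, 8_000)    # 0.32, i.e. a 32% raise
high = earnings_boost(25_000, 10_000)  # 0.40, i.e. a 40% raise
print(f"Income boost: {low:.0%} to {high:.0%}")
```

A 32 to 40 percent increase in annual income is the kind of return few financial investments match, which is the essay's point: the payoff justifies the grant even without a game changer.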

You can temporarily treat dyspepsia with Tums. But a permanent cure for my case lies in philanthropic practices that have grantmakers following evidence rather than presumptions, and recognizing that they need not transform everything to powerfully transform enough things.