In 1974, the Ford Foundation and six federal agencies funded the Supported Work Program in 15 locations across the United States. The program aimed to help ex-addicts, ex-offenders, school dropouts, and long-term welfare recipients become self-sufficient by giving them 12 to 18 months of highly structured, closely supervised paid work experience. Its developers used the best ideas and evidence available to design the program, and the best techniques to evaluate it.

Yet when the results came in, researchers found that the program helped only one of the four groups – the long-term welfare recipients – earn more money and rely less on public benefits. In response, much of the press reported that the program failed on three out of four counts. But in fact, the program was a four-out-of-four win: Not only did it show what did work for the welfare recipients, it also showed what didn’t work for the other three groups. Policymakers then used these findings – both the positives and the negatives – to craft future programs.

Likewise, if this country is to make progress on tough problems, nonprofits, governments, and businesses alike must not only identify and promote policies and programs that work, they must also shed those that do not. Since foundations are our social risk capital, a big part of what they should be doing is testing bold ideas and taking risks. This means courting failure. When foundations play it safe, they are not doing their job.

But there are productive and unproductive ways to fail. After 35 years designing and conducting large-scale evaluations of a wide variety of social programs, I’ve identified two bad ways to fail and one good way. Most foundations have too many of the wrong kinds of failures, and not enough of the right kind.

Type I: Naive Failure. This is when a foundation dreams up or supports an idea that is naive in either concept or implementation. As a result, the program fails in its management, operations, or outcomes. The pain comes because this result was often obvious from the beginning. This situation is a clear failure. A foundation should strive not to make such grants.

Type II: Missed-Opportunity Failure. This type of failure arises when a foundation funds a program that theory, experience, or research suggests is a good bet, but then fails to put in place a reliable mechanism to determine whether it works. Here too the result is predictable. At the end, the funder and others are left with question marks, instead of guidance, on whether to expand, fold, or redesign the program. Grants of this type are missed opportunities that usually leave no lasting legacy.

Type III: Useful Failure. This is when a foundation funds a program that theory, experience, or research suggests is promising, and evaluates it using reliable methods, but then discovers that the program didn’t have its desired outcome. This grant was not a wasted investment. Quite to the contrary, the foundation took a reasonable risk and gathered the evidence that can help it and others design stronger programs in the future. Although the program was a failure, the grant was a success.

Because foundations should be experimenters and risk takers, they should be embarrassed by type I and type II failures, but proud of type III failures. Of course, they should be equally – not more, but equally – delighted by those grants that produce evidence of clear and resounding success. But if foundations are doing their job, successful programs should be rare.

Failure is not learning that something does not work; it is not bothering to learn whether it works at all. We need to make that distinction and take pride in learning what does not work, since that knowledge is a critical building block of progress. If foundations do not make this distinction, they risk becoming overly conservative as their staff take the heat for alleged failures that were actually wise investments.


JUDITH M. GUERON is the former president of MDRC, a social policy research organization based in New York City. She is currently an independent scholar in residence at MDRC and is writing a book about the use of randomized controlled trials and mixed-method studies in the evaluation of employment and welfare programs.