In recent years, millions of words have been written about the need to measure the effectiveness of nonprofit social programs, and millions of dollars have been spent doing just that. It’s time to ask: What has been the impact of this effectiveness movement?
While nonprofit leaders have become more thoughtful and skilled about community outcomes, in too many cases, funders – foundations, government agencies, and donors – get lost in the labyrinth searching for effectiveness.
The Way Stations
Developing measurements is an invaluable exercise in thinking about how nonprofits and programs can improve, but it’s easy to stumble if we mistake any of these way stations for the destination. To stop at one place on the road of outcomes implicitly equates effectiveness with its measure. At a job training center where I was finance director, we found ourselves drawn to teaching to the test – focusing more on interview skills and resume writing than on long-term employability. An incentive to enroll only the most employable individuals – and thereby increase our documented effectiveness – hung over every interview with every candidate.
The tendency to use outcome versus process measures, the inability to define appropriate time frames for outcomes, and the sheer number of possible measures are just some of the reasons why measuring effectiveness can sometimes obscure the real objective.
Who Sets the Goals?
Yet another element of the effectiveness trap is the tendency to view effectiveness as an objective and neutral goal. On the contrary, effectiveness cannot be disentangled from who is asking, from what will be done with the information gathered, or from the values that separately underlie the program and the evaluation. For instance, 20 years ago agencies serving runaway teens focused less on process objectives – such as how many beds were filled each night – and more on outcomes, specifically family reunification. Social workers at these agencies knew that while reunification was an ideal outcome for some teens, it also meant returning others to abusive homes. Then, 10 years ago, it was recognized that measuring effectiveness simply by family reunification was inappropriate. What turned this thinking around was not a series of studies and evaluations, but the change from a Republican to a Democratic president in 1993. Depending on whom you ask about a nonprofit organization’s goals and purposes – its funders, volunteers, staff, clients, or local community – you may get a different answer.
Proportionality
Because private foundations in particular have few dollars with which to address large issues, they understandably seek opportunities for small investments that will have large impacts. This legitimate yearning often leads to expecting results that are out of proportion to the investment. One large, multi-service agency we consulted, for instance, wanted to spend $8,000 to send two staff members from its pregnancy prevention program to visit other well-regarded programs. “How many fewer teenagers will get pregnant as a result of this trip?” their foundation officer wondered. The fact is, you can’t expect visible impact from such a small intervention. We frequently see funders who want results in public education for $500,000, or who want to change a kid’s life with one hour of tutoring per week.
Meanwhile, we have seen large amounts of funding spent on evaluations that could be better spent on needed programs. One California foundation recently spent $5 million on a multiyear evaluation by a prestigious research firm. The answer that came back: We can’t really tell the degree to which the initiative was effective. These evaluations aren’t that helpful to the nonprofit being evaluated, either. Most nonprofit managers find the evaluation process an eye-opening and learning experience, but the conclusions themselves unsurprising. Given the tentativeness and muddiness of even the best evaluations, would it perhaps have been better to spend $1 million on the evaluation, and $4 million more on the program?
Long-Term and Intangible Impacts
Effectiveness is difficult to measure partly because social causes and effects are difficult to measure, but also because the long-term and broad goals of an organization are hard for any one funder to commit to. For example, the Asian & Pacific Islander Wellness Center in San Francisco is supported financially by health funders who concentrate on the center’s primary purpose of providing medical care to people with HIV. The funders, however, overlook the fact that the center represents Asian Americans on a variety of issues. “I’ll defend our programs by any measure you want,” says executive director John Manzon-Santos, “but to understand our organization’s effectiveness, you have to understand everything else we are. At City Hall we speak out for Asian communities, not just about HIV, but also about health care, neighborhood safety, and zoning. Within Asian communities we advocate for sexual diversity and inclusiveness. But these aspects are ignored by funders and evaluators.” Through this broader and less measurable voice in the community, the center works to decrease high-risk sexual behavior, but its funders don’t necessarily measure this role.
The New, Untested, and Unfunded
Foundation giving – the venture capital of the nonprofit sector – has become more conservative, more biased against new, innovative, and untested ideas and programs. Foundation boards and government leaders are insisting on funding “things demonstrated to have worked,” and shying away from seeking outcomes that are longer-term, more ambitious, and more intangible. “My board wants to see things with tangible results next year,” one foundation officer told me. “They’ve spent the last couple of years seeing evaluation reports and getting frustrated by them. ‘Back to basics’ is the word.”
Springing Effectiveness from Its Trap
This isn’t a tirade against effectiveness. In fact, as a result of the effectiveness movement, nonprofit staff and volunteers are thinking more deeply about goals and measurement, and that journey is surely worthwhile. But these positive results shouldn’t be fenced in by too narrow a view of effectiveness.
In some cases, it is appropriate for public and private funders to insist on fairly predictable results. But rather than allow ourselves to be confined by these measures, we should see them as limited tools that help us understand individual, societal, and ecological change in all its complexity and depth. There are broader measures of results – “formative evaluations,” for instance – that go beyond ranking effectiveness on a one-to-six scale. Formative evaluations are process-oriented, structured around providing feedback on a program’s design or implementation so that it can improve. Evaluators first listen closely to people on the front lines, then help identify and build feedback loops into the existing system to allow continuous learning. Clear programmatic indicators that don’t require a Ph.D. to conduct and interpret provide continuous measurement.
With a broader view of effectiveness as a process as well as an outcome, we can pursue ambitious goals that take years to evolve. We can acknowledge failures as well as successes and still be free to experiment again. We can, in short, be a sector that is not only more effective in programming, but also one that aspires to change the world – and is determined to do so.
JAN MASAOKA is executive director of CompassPoint Nonprofit Services, a nonprofit management consulting and training organization in San Francisco and San Jose. You can reach her at [email protected]
