Although evidence-based funding and policymaking are gaining momentum in philanthropy and government, a study by the Center for Effective Philanthropy suggests that we still have a long way to go. Seventy-one percent of the nonprofit leaders surveyed reported that their funders provided no support for program assessment or evaluation. Unless we invest more resources in our capacity to measure the performance and evaluate the impact of programs, how can funders and nonprofits be confident that they are making a real difference in people’s lives? And how can we expect to improve programs’ performance and increase their impact without building and examining the evidence of what works and what doesn’t?
I understand the reluctance of many fellow funders and practitioners to undertake rigorous evaluation. It can be costly, diverting resources from the direct delivery of desperately needed services. It can be scary—by their very nature, randomized controlled trials frequently show smaller impacts than less rigorously designed evaluations. Many people worry, “What if the findings suggest that the program I’m supporting or running is not as effective as I believe it is?”
These fears are justifiable, but we at the Edna McConnell Clark Foundation (EMCF) believe that a broader perspective on the purposes of evaluation can help allay them. We, along with more and more funders and nonprofits, believe the point of evaluation is not just to earn a “gold star” or pass-fail grade. It is also to learn and improve. From this perspective, evaluation is a dynamic, ongoing process.
As the architect of EMCF’s approach to evidence building, David Hunter likes to say that performance management takes “performance leadership.” Dr. David Olds exemplifies this. Back in the 1970s, he conceived the home-visitation program for low-income mothers that eventually became Nurse-Family Partnership (NFP), and decades later he is still applying what he learns from evaluating it. By continually measuring and assessing how well the program works, Dr. Olds and his colleagues help make it work even better. When data revealed that nurses at replication sites spent less time with mothers than in the original trials, for example, NFP developed new observational and training tools to aid both novice and experienced NFP nurses.
Center for Employment Opportunities (CEO), another EMCF grantee, exemplifies how evaluation can create opportunities for cutting-edge program development. A randomized controlled trial found that the center reduced recidivism among people recently released from prison by 16 to 22 percent. This impressive evidence helped the organization expand beyond its base in New York to Oklahoma and California. But the three-year study did not find that CEO’s program made a discernible difference in participants’ long-term employment, compared to a control group of recently incarcerated people who did not enter the program. In other words, CEO had demonstrated its success in helping people stay out of prison but not necessarily in helping them land lasting jobs. So it went back to the drawing board and beefed up the job placement and retention components of its program. Another randomized controlled trial will determine whether these improvements increase participants’ employment.
What’s more, the performance management and measurement systems that CEO put in place to assess and improve its performance, and the evidence these systems generated, helped it win the world’s largest social impact bond (SIB) to date: $12 million over five years to expand in New York State. If CEO can reduce recidivism among participants in its program by at least 8 percent or increase employment by at least 5 percent, as validated by a randomized controlled trial, private and institutional investors could realize returns as high as 12 percent annually, and taxpayers will save $7.8 million. Though it’s too soon to tell whether innovative funding vehicles such as SIBs will prove successful, they are driven by evaluation.
With our support, most of EMCF’s 21 current grantees are undergoing or about to launch external evaluations. We expect that some of the results will be heartening, while others (to our chagrin) will be mixed or even disappointing. As Mary Kay Gugerty and Dean Karlan write, rigorous evaluations may not be appropriate for every situation or organization. But when they are, the findings will help nonprofits better understand their programs and improve them so that they better serve the disadvantaged.