For more than a decade, many philanthropy leaders and advisors have encouraged foundations to go beyond using evaluation as just a judgmental report card. So we would expect more funders nowadays to treat it as a tool for learning and not only accountability—right? Yet through our work with Grantmakers for Effective Organizations (GEO), we discovered otherwise.
GEO’s 2011 national study of grantmaker practices—which my firm, TCC Group, conducted and I helped manage—found that approximately 70 percent of grantmakers surveyed said that they evaluate their funded work. Yet, compared with similar data collected in 2008, funders continued to place greater emphasis on using evaluation for accountability, and fewer were focused on learning. Specifically, more than 77 percent of the foundations that conducted evaluations considered documenting the implementation of grant-supported activities and the achievement of outcomes “very important”; a much smaller share of respondents gave the same level of importance to strengthening future grantmaking (62 percent in 2011 versus 73 percent in 2008), contributing to knowledge in the field (34 percent in 2011, 38 percent in 2008), and strengthening organizational practices in the field (32 percent in both years). (Read SSIR’s three-part blog series by GEO’s J Cray for more details about the national study.)
Both accountability and learning are important reasons for evaluation, and this is not an either/or situation. However, we believe that focusing on learning enhances and strengthens a foundation’s evaluation efforts. Grantmakers who prioritize learning are more likely to support nonprofits’ learning practices, and learning is a key predictor of organizational sustainability and growth. When nonprofits engage in evaluative learning, funders better understand what works, are more able to support grantees, and can make more strategic use of their grantmaking resources.
How are funders that approach evaluation through a learning lens different from those that view evaluation mainly through the lens of accountability? We examined the data collected for GEO’s field study to explore these differences in grantmaking practices. Based on their responses to a set of questions on evaluation priorities, we divided the funders into two groups: funders with a learning lens, who placed equal or greater emphasis on learning and improvement than on accountability, and funders with an accountability lens, who placed greater emphasis on accountability. Our further analysis revealed that grantmakers who emphasize learning:
1. Share and use evaluation findings with both internal and external audiences. They are more likely to use evaluation data to plan and refine programs or strategies, influence public policy or government funding choices, and share their findings with grantees, stakeholders, and/or other grantmakers.
2. Put a premium on listening to grantees. Grantmakers’ learning lens correlated with their practices around stakeholder engagement. They are more likely than accountability-oriented funders to solicit feedback from grantees and engage external voices in decision-making and strategy-setting.
3. Invest in building grantees’ organizational capacity. Foundations support capacity building to increase organizational and community sustainability, enhance program impact, and leverage their investment. However, unlike investments in projects, which tend to have specific outcomes, support for organizational capacity building often does not result in the kind of tangible successes that can be clearly identified and credited. We discovered that grantmakers with a learning lens are willing to accept this uncertainty and invest in nonprofit infrastructure to help grantees achieve their missions more effectively.
4. Support scaling. Foundations with a learning lens are more likely to support nonprofits’ efforts to replicate and adapt effective programs or to invest in the expansion of new ideas and innovation. It is not a surprise that learning and scaling impact go hand in hand. For foundations that are interested in helping nonprofits achieve greater impact, the key question to address is not whether an intervention worked, but “what worked,” “under what conditions,” and “what can be improved.”
5. Have a designated staff person or department leading their evaluation activities. While funders with a learning lens did not differ from accountability-oriented grantmakers in terms of type or asset size, their evaluation functions are structured differently. It is possible that because learning-oriented philanthropies have an individual or department assigned to “own” evaluation, they are more likely to use evaluation data beyond demonstrating proof of impact. When evaluation responsibility is dispersed among foundation leaders or program officers, it can be seen as an additional task in their busy schedules and is more likely to be moved to the back burner.
For grantmakers that are interested in strengthening their evaluative learning efforts, cultivating some of these behaviors is a good place to start.