SPONSORED SUPPLEMENT TO SSIR

Advancing Evaluation Practices in Philanthropy

This special supplement includes six articles that address basic principles and practices that inform efforts to monitor performance, track progress, and assess the impact of foundation strategies, initiatives, and grants. The supplement was sponsored by the Aspen Institute Program on Philanthropy and Social Innovation and underwritten with a grant from the Ford Foundation.

In recent years, the philanthropic sector has neared consensus on the need to improve measurement and evaluation of its work. Although the philanthropies they lead use different methods, members of the Aspen Philanthropy Group (APG) have agreed that basic principles and practices can inform efforts to monitor performance, track progress, and assess the impact of foundation strategies, initiatives, and grants. They hope to build a culture of learning in the process.

Over the past two years, these CEOs of private, corporate, and community foundations have supported a series of meetings on measurement and evaluation (M&E) with leaders of grantee organizations, issue experts, and evaluators. They have concluded that, when done right, assessment can achieve three goals: it can strengthen grantor and grantee decision-making, enable continuous learning and improvement, and contribute to field-wide learning. Below are broad observations from the workshop process, followed by articles from five APG authors describing the M&E philosophies of the institutions they lead. Their articles will be among those to appear in an edited e-volume, to be published by the Aspen Institute and continuously updated to capture evolving foundation practice and comments from voices in the field. This is what we learned.

Definitions Matter | APG members found that differing terminology can undermine efforts by grantors and grantees to collaborate effectively in the design and implementation of an M&E system. Many grantors and grantees use the terms “evaluation,” “impact measurement,” and “measurement and evaluation” interchangeably. In fact, M&E encompasses distinct activities with distinct purposes, methods, and levels of difficulty. In his article, William and Flora Hewlett Foundation president Paul Brest separates M&E into three categories undertaken at three stages: theories of change described and logic models devised during the initial design of a project or foundation initiative; tracking progress against the strategy set during the life of the grant or initiative; and assessing impact after the fact. Together, the three provide a useful means of organizing the various activities and purposes of M&E. The first is essential background for M&E; the second enables grantor and grantee to gain the information needed to make mid-course corrections to the strategy and to learn throughout the process; the third, assessing impact, is the most daunting. Brest notes that in some undertakings, such as policy advocacy or Track II diplomacy, exogenous influences make it hard, if not impossible, to attribute impact to any one actor or strategy. He argues for demonstrating “contribution” rather than claiming “attribution,” where contribution means increasing the likelihood of success, and notes that the true impact of such “risky” grants may not be possible to ascertain. Nonetheless, they are well worth pursuing.

Purpose Matters | At its best, M&E informs decision-making and provides for continuous learning. In his article, Matthew Bannick, managing director of Omidyar Network (ON), discusses why M&E is more likely to be used—and used to good effect—when it is designed collaboratively by grantor and grantee, and when data are gathered and organized around decisions that each needs to take. It is therefore critically important that they agree on their evaluation approach at the outset. Ford Foundation president Luis Ubiñas agrees, adding that “from the very beginning, grantees should have a clear sense of what benchmarks of success are expected of them at each stage of initiative development”—and why.

The Cost-Benefit Ratio Matters | Ubiñas points out the costs of M&E, arguing that in designing an evaluation system, careful consideration must be given to the burdens on each party. Failure to do so, he writes, can lead to “excessive data gathering” in which grantor and grantee gather as much data as possible in search of evidence of impact. The costs are fourfold. “First, it is a burden to grantees, creating surplus work for often tightly staffed and financially strapped nonprofits. Second, it undermines quality because grantees will provide the requested information to meet their grant obligations, but may not have time to supply the insight that is often more valuable than the data. Third, it inundates foundation staff with information but may leave them little time to use it effectively. Fourth, it may not provide the information that is actually needed to understand how effective our initiatives and grantmaking are.” According to Bannick, ON reduces the burden by using a limited number of easily collected metrics, and, as an alternative to the “time-consuming, costly, and complicated challenge of measuring impact,” ON often measures outputs as proxies. As for the price tag, Rockefeller Foundation president Judith Rodin notes the efficiencies gained by using technology to gather real-time data. Brest notes (in Money Well Spent, coauthored with Hal Harvey) that “if you are a philanthropist with a long-term commitment to a field, it is well worth putting your funds—and lots of them—into evaluation.”

Culture, Context, and Capacity Matter | M&E requires a commitment to building capacity within foundations, grantee organizations, and the field of evaluation in general. The Rockefeller Foundation invests in M&E teams in both the developed and developing world to monitor foundation initiatives and to act as “critical friends” to its grantees, establishing monitoring and learning systems where none existed. The goal is to facilitate learning among grantees and within the foundation, with an eye to improving performance all around. But most important, it is to leave behind greater capacity among local M&E professionals. Rodin reports that the foundation has supported regional institutions that train and mentor local evaluators and partner with similar institutions elsewhere, with the goal of building lasting capacity. Bannick speaks to the importance of providing technical assistance to grantees. And within a foundation, James Irvine Foundation president James Canales notes, leadership must come from trustees and senior officers, along with a readiness to devote time, dollars, and expertise to assessing the philanthropy’s strategy, initiatives, and grants. Doing so “mandates full institutional commitment and cannot be the province of just the evaluation director.” Beyond these tangible contributions lies the need to build what Ubiñas calls an “impact culture,” in which continuous learning and adaptation are enabled, required, and rewarded. “Goals, theories of change, and operating approaches are all necessarily imperfect; only by learning from our successes and our mistakes can we build an impact culture,” he writes.

The Unit of Analysis Matters | The good—and the bad—news is that almost any activity can be evaluated. It is important to sort out the different units of analysis, as is done in the Bill and Melinda Gates Foundation’s Guide to Actionable Measurement, which identifies three distinct areas of focus. At the level of foundation strategy, the focus should be on measuring outcomes over impact (as Bannick describes), on assessing contribution rather than attribution (as Brest recommends for certain grants), and on the degree of harmony that can be achieved among grantees pursuing a given strategy. At the level of the foundation initiative, the foundation should use grantee reporting data on outputs and outcomes to signal whether the initiative is making progress; track the program team’s activities other than grants (such as convening and public speaking); use independent evaluation; and capture both intended and unintended consequences of the initiative. And at the level of the individual grant, the foundation should align expected grant results with strategic intent; work with the grantee to track grantee inputs, activities, outputs, and outcomes at critical points so that each grant can be managed and adjusted appropriately; and measure the foundation’s own input of human, financial, and technical resources.

Timing Matters | Just as the units of analysis differ, so too do the time horizons required to measure and evaluate the performance of a foundation’s strategy and grants. Many short- and medium-term metrics are useful in assessing how well an organization is managing processes or reaching target populations. Longer-term longitudinal studies are critical to gauging the impact of a program and to establishing the causal relationship between an intervention and the desired outcome. Such rigorous, long-term studies can be particularly useful for those seeking to scale up innovations. Canales notes, however, that the annual grant cycle poses a “structural barrier” to the longer-term work of evaluation and learning, observing that “program goals and aspirations rarely follow annual timelines, nor should they…if they are sufficiently ambitious.” He points to the importance of “creating the space for consideration of broader progress assessment.”

Feedback from Grantees and Beneficiaries Matters | M&E must incorporate the viewpoints and observations of funder, grantee, and ultimate beneficiary at every stage of the work—identifying problems, co-creating solutions, and implementing with a shared vision of outcomes. Under the leadership of APG member Carol Larson, the David and Lucile Packard Foundation solicits feedback from grantees’ on-the-ground staff and has established written standards to help its program team communicate with grantees. Grantees, in turn, can better assess community needs—and their performance in helping to meet those needs—when program beneficiaries provide them with quantitative and qualitative feedback. At Ford, Ubiñas notes that the foundation selects grantees “managed by those living and working closest to where targeted populations are located.” Noting the Rockefeller Foundation’s commitment to evaluation practices that include stakeholders’ voices, Rodin cites the consensus of the African Evaluation Association: “only when the voices of those whose lives we seek to improve are heard, respected and internalized in our understanding of the problems we seek to solve” will philanthropy achieve its purpose.

Transparency Matters | Although the goals of M&E are to inform decision-making and enable continuous learning by those immediately involved, there is a larger community to serve and a larger purpose to pursue. By publicly sharing the data gathered and the conclusions reached, grantors and grantees can contribute to field-wide learning. APG members agree that this is an opportunity to seize. The philanthropic sector has helped build communities of practice that generate knowledge. Evaluators form professional associations that set standards for the field. Independent organizations assess foundation and grantee performance and publish their findings. Donor organizations and networks transfer knowledge among philanthropies, and between grant-making institutions and individuals. And academic programs provide the intellectual underpinning for much of this work. Supported by philanthropy, these and other institutions provide some of the early hardware for wider impact. Moreover, the gradual evolution of principles that guide, and practices that enable, rigorous evaluation can contribute to its software.

But has a true system for philanthropic impact been designed and widely adopted? Perhaps not. And so, in publishing an e-volume and opening a conversation, Aspen’s Program on Philanthropy and Social Innovation will seek the wisdom of the crowd and ask the questions: What might the components of such a system be? Where will the breakthroughs occur? What sort of venture capital will be needed to finance the prototypes? And what markets will bring these innovations to scale? Or, building upon Ubiñas’s apt phrase, under what conditions might an “impact culture” spread? What would it take for its language to be adopted, its standards embraced, its methods refined, and its potential realized? And if that culture is to be global, dynamic, and enduring, how might it be informed and advanced by the new cadre of evaluators to which Rodin refers?

None of the architects or beneficiaries of modern-day philanthropy would claim that they alone can create such a world, but they may well agree that it is one worth imagining. Doing so together would make for a powerful beginning.


