Measurement & Evaluation

Measuring What Matters

Five grant performance measurement traps and how to avoid them.

These days, foundations, evaluators, and consultants are spending a lot of time advocating new ways to measure progress and deal with “emergence” when working on complex, systems-change initiatives. At the same time, sitting underneath any systems-change effort is the day-to-day craft of grantmaking. The very real need to determine how best to allocate foundations’ limited resources requires generating robust performance measures that drive accountability, learning, and impact—for each and every grant.

Performance measures—statements of output or outcome indicators established contractually between funder and nonprofit grantees—should reflect a shared understanding between the grantee, the funder, and the evaluator of what will constitute success. They set everyone up for objective assessment of progress and lessons learned, both along the way and at the end. But it’s important to be aware of and manage the perverse incentives a rigorous performance measurement system can create. Building on the observations of Daniel Stid of the Hewlett Foundation, we have found that when program staff and grantees fall into the five performance measurement traps we’ve identified below, doing so can compromise the learning process.

The first three traps lead to creating too many meaningless measures, and the last two lead to a reluctance to measure what matters most about the work.

MEANINGLESS MEASURES

1. The Micromanagement Trap

Sometimes well-meaning program staff and grantees want to lay out every step of the work they need to accomplish during the grant period, in minute detail. Planning is essential, but overbuilding performance measures can lead to losing sight of what really matters. For example, our foundation recently gave a capacity-building grant to a nonprofit organization that trains school principals. The nonprofit had grown to the point that it needed to bring its analysis of its trainees’ impact on school performance in-house. The grant’s proposed deliverables included drafting and advertising a chief academic officer job description, hiring the person, and purchasing new data systems. These were all necessary tasks, but none of these measures could indicate whether the new person was functioning effectively. A tighter measure might read: “By year end, the organization will have increased data analysis capacity, as demonstrated by the posting of a report on the academic performance of schools led by trained principals.” 

The number of deliverables should to some extent scale with the size of the grant, but having too many obscures the central purpose of the work. Foundations also need to trust grantees to execute as experts in their domains; it’s not necessary to monitor every single step along the way. 

2. The Hedge Trap

Also common is the inclusion of relatively minor performance measures—related to administrative actions, for example—to hedge against the risk of a bad evaluation. Since it’s virtually guaranteed that the grantee will accomplish these minor measures, they balance against more meaningful measures that the grantee may not meet—or so the thinking goes. For example, we were recently working with a grantee to increase its organizational sustainability. The original proposal included numerous measures for processes involved in identifying potential funders, and developing and submitting funding requests. While we are interested in the grantee’s process, we felt that dollars raised and funding base diversification were more meaningful measures, and adjusted accordingly. 

Performance measures should be achievable, but also ambitious. We expect that grantees will meet some performance measures, and not meet or only partially meet others. Indeed, if they meet all the measures, we may wonder if the goals were simply set too low. Conversely, if they fall short on everything, we may not judge the grant a failure, especially if we learned important lessons for the work moving forward.

3. The At-Least-It’s-Measurable Trap 

“Measurable” and “meaningful” are not the same thing. Often, very easily measurable items are less important than harder-to-measure items. For example, a proposed measure related to increasing an organization’s visibility as evidence of its growing influence might be: “Post five times to the organization’s blog, 20 times on Facebook, and 30 times on Twitter about the importance of protecting wildlife habitat and water quality, reaching 2,000, 30,000 and 45,000 people, respectively.” But the importance of these activities and the significance of the targets are unclear. A better deliverable would focus on the intended results of these actions, such as increased organizational membership.

To determine whether performance measures are meaningful, we ask: 

  1. Are the outputs and outcomes tightly linked?
  2. Is it a routine activity that really should not be assessed, or is it an essential product or service worthy of being tracked?
  3. Have we prioritized informative deliverables, for which success or failure may lead us to do something different in the future?
  4. Are the grantee’s outcomes linked to the ultimate impacts that our foundation is seeking? 

RELUCTANCE TO MEASURE WHAT MATTERS MOST

4. The Full-Control Trap

The ultimate goals of our work as funders—whether improved student outcomes or a healthier environment—are usually beyond the direct control of grantees. As a result, grantees can be reluctant to include outcome performance measures related to larger goals. Recently, a grantee working to reform school finance proposed to provide technical assistance to policymakers and generate white papers designed to influence system reform. But the organization was worried that funders would hold it accountable for policy change that resulted from the assistance and recommendations.

Nonprofit organizations cannot fully control many outcomes, including policy improvements. That said, the whole point of providing funding to organizations is to bring their influence to bear on solving difficult problems. It’s not sufficient merely to commit to trying hard. Funders and grantees should feel accountable for their results in the end, and they shouldn’t shy away from bold ideas.

5. The Complexity-Cannot-Be-Measured-Objectively Trap

Many grantees engage in advocacy, and some are piloting new systems-change approaches. These efforts often require shifts in strategies and tactics mid-course, and their complexity and relative unpredictability can trigger reluctance to establish meaningful performance measures. But in our experience, it’s possible to readily adapt rigorous performance measurement to this work. We would argue that it’s essential to plan and assess progress against clearly stated measures, even for work on complex problems. For example, we can measure awareness of and support for policy reform, or grantees’ access to policymakers and opinion leaders. And assessing whether public will is shifting or if a preferred policy solution is gaining prominence on the policy agenda can indicate whether our investments and partnerships are making a difference. It’s also worth noting that establishing rigorous performance measures doesn’t lock grantees into work that no longer makes sense when circumstances change; funders and grantees can work together to amend performance measures during the course of a grant.

There’s sometimes a related concern that it’s not useful to measure small changes in complex systems, because systems-level change requires a long-term effort by many groups whose contributions we can’t disentangle. In some ways, these arguments are valid, and to address these concerns, we have started to apply coalition planning and collective impact assessment tools, in addition to performance measures, to our evaluations. We have not, however, abandoned the useful, objective measurement of work to reform complex systems. For example, in our advocacy investments related to the Gulf of Mexico environmental restoration efforts, we are assessing a network of grantees that have coordinated their efforts on shared goals, but each group still also has unique responsibilities articulated in its own performance measures.  

In conclusion, while the rationales behind these five performance measurement traps are understandable, they can leave us responding to the wrong incentives. Good performance measurement can be uncomfortable, but it helps both grantees and funders learn what works and what doesn’t, and ideally gain some insight into why.

COMMENTS

  • BY Unmesh Sheth

    ON January 4, 2016 01:30 PM

    What a great start to 2016 with a subject close to my heart. A very well-written article indeed.

    I would like to share (and add) more from a foundation perspective. First, let me share what we are learning from many of the leading foundations. We feel that foundations, with the help of shifting technology, can do a lot more and expand the definition of “Measure What Matters” at every level: not just metrics, but also innovation in grantee data collection, portfolio-based collective metrics aggregation, and collaborative sharing of main outcomes from better data analytics. I would agree with the Complexity-Cannot-Be-Measured-Objectively trap, but well-designed shared measurement can offer much deeper insight, and the deeper correlation it allows in finding cause and effect can be tremendously useful. However, we believe that foundations must look across the complete lifecycle of impact measurement:

    • Flexible and context-building metrics setup (hierarchical scope: overall, program level, and individual)

    • A better portfolio approach, so that you can get a meaningful understanding of impact

    • Better context-sensitive partner metrics data/outcome collection, a deeper impact evidence system, and continuous feedback with grantees

    Needless to say, innovation is needed at every level. If you agree with this, please read on for my perspective. (Sorry for the longer explanation; it’s needed to give a complete picture.)

    Foundations are shifting their philanthropic strategies to a more targeted approach. As an example, we have the recent change at the Ford Foundation and the new generation of funders, such as the Robin Hood Foundation in New York and Tipping Point in San Francisco. These organizations are now adopting outcome and impact measurement to ensure the success of the programs they fund. Many foundations are reducing their grant portfolio size and focusing on fewer grants with larger unrestricted funding. The overall result of this behavior is higher engagement of all involved parties. Leading funding and program development organizations increasingly believe that structural change in the social sector will happen when similar players start working collaboratively. Overall, we would like foundations to look at their role (in terms of measuring what matters) in the following ways:

    • Foundations supporting collective impact initiatives
    • Foundations requiring grantee metrics collaboration, and reporting
    • Community foundations working with managed program coordination
    • Grantee program collaboration (10 or more/year)

    Foundations should think about their role with outcome data well beyond metrics, and I am happy to share more on this. But since our subject is metrics, I would like to share a few best practices:

    SHARED MEASUREMENT
    If you are supporting a collective impact initiative, shared measurement is key. It provides deeper insight into community grants and international grants. It is critical to collect data systematically from hundreds of partners, including the outcomes of each specific grant.

    MAKE EVIDENCE-BASED DECISION
    • Collect disaggregated metrics data by common goals, activities, and individual goals
    • Share metrics data with partners and funding agencies promptly to enable continuous improvement in program activities and outcomes.

    IMPROVE COLLECTIVE IMPACT

    Dive deep into best practices and provide insights into programs with similar goals. Identify the best metrics and publish and reuse them within and outside a network.

    MAXIMIZE RESOURCES FOR BETTER, SUSTAINABLE OUTCOMES
    • Align long-term financial and other inputs by avoiding duplication of improvements already proven by other partners
    • Build long-term funding opportunities
    • Learn from key policy-related outcomes

    For disclosure: I lead SoPact (http://sopact.com), a platform that aims to transform impact measurement and sustainability for foundations and impact funds through standards-based reporting and a flexible measurement system focused on what matters most to them. Our approach supports collective impact, collaboration, and aggregation of project, grantee, and investee impact within a short time.

  • BY Unmesh Sheth

    ON January 4, 2016 01:55 PM

    In addition to the above perspective, I would like to share two unique perspectives on
    a) metrics data collection (the data collection challenges in a big foundation), and
    b) simplifying grantee beneficiary outcome data and program data.
    The metrics data collection solution is described extensively here: http://www.sopact.com/customers/foundation/data-collection-challenges-in-a-big-foundation

    The issue of beneficiary outcome and program data is the subject of a future article. But here is a quick summary. There has been a revolution in cloud-based applications during the last 2-3 years, giving the social sector hundreds of service-oriented solutions at significantly lower cost than traditional IT applications. This trend primarily benefits resource-constrained organizations, such as small- and mid-size nonprofits and social enterprises. These tools also have a significant advantage over MS Excel, which has great limitations in data tracking and data management at the expected level of accuracy and auditability. I cannot publish details as my article is not yet published, but you are welcome to get in touch with me through LinkedIn (https://www.linkedin.com/in/unmeshsheth).

  • BY Julia Coffman

    ON January 4, 2016 04:17 PM

    As an admitted detractor of using performance measurement for complex and emergent strategies, I appreciated this article very much. My main complaint in the past has been about using performance measures as the sole or primary form of evaluative data for these kinds of strategies, where regular learning about why certain targets are or are not being met and using data to inform what should come next is so critical. That is still my complaint. So I very much appreciated you saying that while performance measurement is a cornerstone of your approach, you use other evaluation approaches as well, and that you try to apply a strong learning orientation to this work. Those points are important. Thank you again for this work and for broadening my perspective.

  • BY Don S. Doering

    ON January 6, 2016 05:33 PM

    This is an excellent piece that I’ll share with our JRS Biodiversity Foundation grantees, and it mirrors my own experience in grantmaking and grants management. If I could add one more trap, it would be “The Better-Late-Than-Never Trap.” Being better late than never with measurement may join the last two traps as a form of reluctance to measure what matters most.

    This is the trap of designing metrics and evaluation systems that produce results beyond the time period in which there can be a funder or management response to the data. In pursuit of ‘perfect’ measures, project directors miss the quick and timely ‘good’ measures that might have been actionable during the funding award window.  When one overlays the cycles of organizational change, leadership change, donor strategy change and policy change, there can often be a pretty narrow time window during which measures really matter. 

    A mentor once told me that measurement and evaluation answer the question “how will you know?” To that simple and powerful question of how a grantee will know whether it is on track and successful, I’ve also learned to ask “when will you know?” Thank you again to Marc, Cheri, and Valerie for the nice post; success is often a matter of avoiding failure!

  • BY Mary Kay Keller

    ON March 1, 2016 02:10 PM

    It is possible to measure what is real and meaningful. It’s a different type of measurement: qualitative research. As a social human scientist, I utilize qualitative tools to capture meaningful data. I utilize Atlas.ti, a computerized data analysis tool. Atlas.ti is a visual platform for interview data, video, photography, film, GPS mapping, and more.
    My results are credible, valid, and defensible. Qualitative analysis addresses the limits of quantitative analysis.
