Measurement & Evaluation

Raising the Bar on Nonprofit Impact Measurement

The key to progress is embedding measurement in practice.

When you intervene in a complex system, you have difficult choices to make about where and how to act. We may be fans of impact measurement in the social sector, for example, but what if it ends up driving a kind of “marketization” of the sector that pushes charities toward the biggest bang for their buck? Those choices are almost always underdetermined—you can’t know what will happen if you push here instead of pulling there. But if you’re lucky, you’ll be able to see how the system responds over time and refine your strategies accordingly.

Ten years ago, critics dismissed impact measurement as too difficult, misleading, or simply not important. Today, 75 percent of charities measure some or all of their work, and nearly three-quarters have invested more in measuring results over the last five years. A transformation in the tools that enable nonprofits to measure the impact of what they do has raised the bar significantly, but not all have grasped the opportunities offered by this kind of analysis. At NPC, to achieve our vision of a third sector where impact measurement is the norm, we help charities identify what to measure and which tools to use, as well as how to make sense of that data and communicate its value. We’re also the backbone organization in a collective impact program called Inspiring Impact, which aims to embed good impact measurement practices across the UK social sector by 2022.

Over the years, we’ve tried a range of different approaches at the sector level, with varying degrees of success. And we’re not alone: Others across the globe, including the Social Impact Analysts Association, Charting Impact, and the microfinance field’s Social Performance Task Force, are using similar methods to help organizations improve their effectiveness. But is the system responding to these efforts, and does everyone agree with the direction we’re taking? In a recent Stanford Social Innovation Review post, Caroline Fiennes suggested that we couldn’t reasonably expect charities to produce good quality, robust evidence; we should advise them to monitor their activities but leave evaluation to the experts.

Nonprofits measure results within a system

Broadly speaking, we can reduce the system to three sets of actors: funders providing resources, nonprofits delivering services, and beneficiaries receiving those services.


Simplistic representation of nonprofit system. (Image courtesy of NPC, 2013)

We know from research carried out in the UK that funder requirements are the primary driver behind the increase in impact measurement among nonprofits; they are cited as more than twice as important as internal leadership. We also know that improved strategy and services are the main benefits nonprofits see as a result: by understanding the impact of their services, they are able to improve what they do.

So herein lies the paradox: Nonprofits measure their results to satisfy funders, but the main reward is a better service, not increased funding.

Stick or carrot?

These findings hint at how we could intervene in the system. Most of us would, I hope, agree that we’re more interested in the carrot (improving services) than the stick (meeting funders’ requirements).

If we want impact measurement to result in improved services and increased impact, then we have to make sure it works for the nonprofit. Only then should we turn to what funders want out of impact measurement.

This is the fundamental principle behind Inspiring Impact—making impact measurement work for nonprofits. This means it’s more a matter of practical knowledge (are we doing better now than last year?) than theoretical proof (can we attribute this change specifically to this intervention?). We’re more interested in performance management and using evidence to improve services, for example, than in randomised controlled trials.

Inspiring Impact has developed the UK’s first-ever Code of Good Impact Practice for nonprofits, alongside Funder Principles for grantmakers, based on this motivation. Both provide guidance that is accessible and inclusive, aimed at the whole nonprofit sector rather than just those already at the high end of evaluation.

These documents won’t change the world on their own, but we hope they create a rising tide. So many of the forces that influence a nonprofit—funder decisions, government policies, and socioeconomic factors—are subject to rapid and sometimes unpredictable change, so it makes sense to focus on aspects that we can control.

Power to the people

Ultimately, I am much more comfortable with the view that nonprofits should drive their own impact measurement agendas than with a paternalistic view of the world in which only experts carry out evaluations and funders make all the decisions based on the evidence they produce. The closer nonprofits are to their beneficiaries, the better able they are to represent them. Good impact measurement will ensure that they remain close, and understand the detail and nuance of their lives. In the end, that’s an approach to raising the bar on impact that helps make us accountable to beneficiaries—the people we’re here to help.



  • BY Verity Dimock

    ON July 12, 2013 06:29 AM

    Excellent article, and I hope you get lots of feedback on it. When I went to grad school to learn about Performance Measurement, our mantra was always, “the purpose of measurement is to help make good management decisions.” Ultimately these decisions should support the clients you serve. In my personal experience helping to manage Canadian non-profits, when you “build measurement in” to what the organization does every day, you not only get improved results for clients, but also for other key stakeholders like funders. In the systemic approach you outline in your visual, the organization, funders and beneficiaries can work together to establish a systems approach to measurement that works for all. Hope you keep writing on this topic.

  • I have been intrigued by the common, complex question of “better services” versus “increased funding,” or what you call the carrot versus the stick. What drives monitoring and evaluation? Whose agenda should it be?
    Thank you for bringing up pertinent issues on impact measurement. It is an interesting area generating a lot of debate. There has been remarkable improvement (at least overall), but some work still needs to be done to ensure that the projects being implemented attain their intended impacts, and that those impacts are correctly measured and for the right reasons.

  • BY Alice Yitian Wang

    ON July 13, 2013 09:05 PM

    We talk a lot about measuring the impact of nonprofit activities, but it’s interesting to point out that we should measure the impact of measurement as well. Thanks for putting your research out there; it’s true we still have a lot to learn about how to do it well.

  • BY Caroline Fiennes

    ON July 14, 2013 11:46 AM

    I don’t quite understand how/where you’re arguing against my position.
    ‘Performance management and using evidence to improve services’ isn’t separate from RCTs (and other rigorous forms of evaluation) or somehow an alternative. You can tell that by the fact that both are done in the vastly more sophisticated and well-resourced world of evidence-based medicine. [Hence Giving Evidence is studying what philanthropy can learn about evidence from medicine, by the way.]
    RCTs are just a fair test of whether an intervention works - that’s all.

    Neither do I argue or say anywhere that rigorous evaluation should be separate from non-profits’ own work nor solely for funders to see/use. For example, Pratham in India routinely gets its innovations RCT’d - by experts who know about those types of research - and it uses the results itself in order to know what to expand, what to bin and where to improve.
    That is, rigorous & independent evaluation is integral to ‘performance management’.

  • BY Tris Lumley

    ON July 15, 2013 04:40 AM

    Thanks for these comments, and I look forward to connecting further on these issues.

    Caroline - while I’m certain we agree on a great deal about the importance of evaluation, what I think we disagree on is how much of the doing and learning frontline organisations should be involved in.

    I don’t want to see all the learning confined to academia, while nonprofits get on with just implementing proven programmes and ‘monitoring implementation’ as you say. For both the purposes of increased accountability to beneficiaries, and increased capacity to learn about and improve interventions, I want to see evaluation and impact measurement embedded in practice as much as possible.

    I think where we really differ is on the relative value of theoretical proof that something works vs the practical knowledge of how to improve it.

  • BY Jo Cavanagh

    ON July 16, 2013 08:48 PM

    What a fabulous discussion.  Can I add a view from 35-plus years’ practice in the community sector in Australia?

    Impact measurement is a by-product of evaluation.  The evaluation results tell us where we are achieving the intended change or outcomes for the beneficiaries.

    We use the research and evidence to inform the actions/activities we invest in to meet needs and support change.

    Evaluation tells us if these activities “worked” or if we need to do them differently or better, or do something else to achieve the intended change / benefits / outcomes.

    These steps - consulting the research (including RCTs), planning the activities to deliver the outcomes, evaluating the activities to see if they delivered the outcomes and if we did it well - should also see our professional ethics guiding activities to make sure these are in fact “the right” things to be doing.

    For example, you can get a result/impact of reducing homelessness in a city by moving people from one place to another.  A more ethical activity is providing housing so people are no longer homeless.  By re-defining “in work” (casual, less than 10 hours per week) you can shift people from one category to another, and our employment results just got better!

    A blunt example, I know, but it highlights the importance of ensuring that impact measurement methodologies are in fact grounded in good, ethical, evidence-informed practice and theories of change which are evaluated.

    And that of course means that our practitioners are critical as the service providers and data collectors for evaluation, ongoing learning, and evidence to inform decision making for investment of funds.  Performance measurement for impact and investment decisions, in my view, needs to follow this chain of events.  This would of course also mean that funders need to include evaluation in all grant making.

    Thanks for sharing your views and letting me join the discussion!  Look forward to any responses.

  • BY Alicia McCoy

    ON July 17, 2013 06:46 PM

    Thank you for this great article.  Nonprofit sectors and organizations around the world can learn greatly from the progress you are making in the UK.

    I would also suggest that it is critical that organizations create the enabling environment for such work to be conducted.  It is important to create a culture of evaluative inquiry and learning, in which impact measurement is viewed as just one part, albeit an important one, of the organization’s approach to quality, effective and accountable service delivery.  It is also important, despite the reality that accountability to funders is a frequent motivator for impact measurement, to see investment in evaluation as a form of stewardship, i.e., nonprofits also doing evaluation for the altruistic purpose of better serving the public.

    I am reminded of a quote from James Sanders: “Evaluation must become part of the culture, building a common language.  If these things happen, evaluation behavior will follow.  Changes in values lead to changes in behavior… we are asking people to mainstream a frame of mind, to ask good questions, be skeptical, use answers to bring about change, to continue to move toward excellence in practice.”

  • BY emergentutility

    ON July 30, 2013 11:48 AM

    @Tris, thanks for the write-up. Impact measurement, or the evaluation process (as Jo puts it), can serve as a significant tool when built into an NGO’s business/operations plan. It seems so simple, but when NGOs begin to act with a higher “business” sense they can better communicate their ROI. By no means am I saying that they need to be businesses. What I am saying is that funders will feel more confident about supporting a project if they know how the project will be internally “audited, examined, and evaluated.” A mantra we have ascribed to much NGO work is “fire, ready, aim.” This can work for a bit while passion and trust are high, but over time things can stagnate and the NGO can appear less effective.

    A plan that outlines the goal, the preparations, and the execution, with built-in checks and balances, has worked in the for-profit sector; let’s see it applied to the non-profit sector so that more funders want to contribute to significant social returns. It’s all about Aim, Ready, Fire.

  • BY Lisa Widdifield

    ON August 22, 2013 08:17 AM

    What an interesting article and ensuing discussion. I have worked with and in NFPs in Canada and believe there are many aspects to this discussion that could improve service, evaluation and reporting.
    Most NFPs cannot financially afford to engage consultants to implement SROI measures or embed appropriate/strategic data collection to improve service and reporting.
    It would be great if there were funding for these measures, and then implementation money to respond to the results!
    However, most organizations already provide data to funders for reporting purposes. What if funders assisted or funded organizations to interpret the reporting data to provide better results for people getting services? Many organizations do not have the skills to turn data into opportunities.
    Thank you for the insightful article.

  • BY Mathias Craig

    ON August 23, 2013 03:32 PM

    Thanks Tris for this article.  I applaud your work to make performance measurement more prevalent in the social sector.  Caroline’s point about the difference between evaluating ideas and monitoring execution (performance) is a key one.  Using your term “impact measurement agenda” but through the lens of her framework (impact = idea × implementation), we can see that an “impact measurement agenda” has two parts: proof that the intervention concept is effective, and then performance monitoring of the execution.

    Your closing comment about how you don’t like a paternalistic view of the world where experts provide the evidence ignores the fact that in many of the complex interventions in the social sector, experts really are required to generate the evidence that certain interventions are capable of producing certain results.  It’s expensive, complicated and requires deep expertise to reliably establish causality out in the field, where human behavior, politics, and the environment work counter to scientific study.  It took experts to prove that smoking causes cancer, and the same is true of many social interventions.  I don’t see any way around this.  I like your vision of individual organizations having ownership of their performance monitoring and see this as a nice complement to relying on experts to produce the evidence that the organization’s performance will in fact produce any meaningful impact in the end.

  • BY Rick Groves

    ON August 23, 2013 03:43 PM

    Until and unless funders decide to demand that nonprofits invest in performance management itself, including but not limited to systems development and training on data analysis and interpretation, nonprofits will continue to invest their precious dollars in the things that provide the greatest short-term benefit: heart-string-tugging stories and high-level impact data.

    The existing funding dynamics simply do not encourage nonprofits to make investments for the long haul.  Too many funders want to be shown over and over again, year after year, that a program works (and to get fresh new stories they can put on their own websites), rather than fund continuous improvement in programs that have already shown the proof points.


  • BY Terry Brown

    ON October 24, 2014 06:47 PM

    Organizations working on a nonprofit basis are entirely different from made-for-profit organizations, as the basic purpose of the former is to provide maximum value to its end users in the form of products or services. A company dealing in anti-aging skin care products can rely heavily on marketing or sales to achieve desired goals, but nonprofit organizations have limited budgets. Regards,

    T Brown

  • BY Michael Thomas

    ON February 24, 2015 12:22 AM

    There is no doubt that the key to progress for nonprofits is embedding measurement in practice. If we want impact measurement to result in improved services and increased impact, then we have to make sure it works for the nonprofit. I think a similar concept was offered by Elizabeth Byrnes a few years ago. Most organizations already provide data to funders for reporting purposes, but they don’t have the capability to turn this data into opportunities. Thanks
