When I was a charity CEO, we approached a family foundation. There was no formal application process. Instead, we had to write various emails, and I had to attend various meetings (not unusually, the foundation wanted to see only the CEO, the highest-paid staff member). A physicist by background, I kept a tally of the time all this took and the implied cost. Eventually we got a grant, of £5,000. This required that we (I) attend more meetings—for “grantee networking,” meeting the family, and so on. We noted the cost of those too. Towards the grant’s end, the foundation asked us to compile a short report on what we’d done with the grant. By then, the tally stood at £4,500. I felt like saying: “What grant? Honestly, you spent it all yourselves.”

One hears worse. A physicist at Columbia University has calculated that some grants leave him worse off. And I’ve heard of a heritage funder requiring that applications have input from consultants; this made the cost of applying £100,000, though the eventual grant was just £50,000.

Clearly it’s important for any organization to learn, adapt, and improve. Much of the discussion about how funders should do that, and of the tools available to them, revolves around “measuring impact.” But measuring impact is complicated—perhaps even impossible. I wonder whether, in our quest for the perfect measure of performance, we overlook some simpler but nonetheless useful measures, such as whether a funder is essentially spending a grant on itself. As Voltaire warned, perfect is the enemy of the good.

Let’s look at why getting a perfect measure is so hard, and then at some simpler “good” tools.

Funders: Don’t measure your impact ...


A funder’s impact is the change in the world that would not have happened without it. Making a perfect estimate of impact is difficult for two reasons.

First, most funders support work that is too diverse for its effects to be aggregated. Hence, articulating or identifying “the change that has happened” can be impossible.

Second, there normally isn’t an “otherwise” that we can compare with reality. Constructing an “otherwise,” or counterfactual, would be very difficult: it would require comparing the achievements of grantees with those of non-grantees. Ensuring that the two groups were equivalent would require the funder to choose between eligible organizations at random, which few would be willing to do. And to establish that the funder, rather than other factors (such as changes in legislation or technology), caused the change in the world, both groups would need to contain very many organizations. And again, the heterogeneity of the work may prevent comparing the two groups’ results anyway.
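To see why both groups would need to be large, here is a minimal simulation sketch in Python. The numbers are entirely made up: FUNDER_EFFECT stands in for a grant’s hypothetical contribution, and NOISE for external factors such as legislation, technology, or luck. With a handful of organizations per group, the noise swamps the signal; only with very many does the estimate settle near the true effect.

```python
import random
import statistics

random.seed(42)

FUNDER_EFFECT = 0.5   # hypothetical boost from receiving a grant (made-up units)
NOISE = 5.0           # external factors: legislation, technology, plain luck

def simulate_trial(n_per_group):
    """Randomly 'fund' one of two equivalent groups and compare mean outcomes."""
    grantees = [random.gauss(FUNDER_EFFECT, NOISE) for _ in range(n_per_group)]
    controls = [random.gauss(0.0, NOISE) for _ in range(n_per_group)]
    return statistics.mean(grantees) - statistics.mean(controls)

for n in (10, 100, 10_000):
    # Repeating the trial many times shows how unstable the estimate is at each size.
    estimates = [simulate_trial(n) for _ in range(200)]
    spread = statistics.stdev(estimates)
    print(f"n={n:>6}: estimate wanders by about ±{spread:.2f} "
          f"(true effect is {FUNDER_EFFECT})")
```

At ten organizations per group, the estimated effect swings several times wider than the true effect itself; the funder’s contribution is invisible. This is a sketch under invented assumptions, not a claim about any real grant portfolio.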

Many funders give up. A recent study found that, though 91 percent of funders think that measuring their impact can help them improve, one in five measures nothing pertaining to their impact at all.

... rather, understand your performance.

Compared to this complexity, seeing how a funder can save time and money for applicants and grantees looks like child’s play. In fact, it may be an even better thing to examine, because it shows pretty clearly what the funder might change. BBC Children in Need (a large UK grantmaker) realized that getting four applications for every grant was too many (it imposed undue cost), so it clarified its guidelines to deter applicants unlikely to succeed.

Giving Evidence has found several such tools in our work with donors (collated in a white paper released this week); each is relatively easy to apply and gives valuable insight into a funder’s performance. We make no claim that these tools provide the perfect answer, but we’ve seen that they are all good and helpful for ambitious donors wanting to improve:

  • Monitoring the “success rate”—the proportion of grants that do well, that do all right, and that fail. Though the definition of success clearly varies between grants, presumably funders craft each one with some purpose; this tool simply asks how many grants succeed on their own terms (a minimal tally sketch follows this list). Shell Foundation found that only about 20 percent of its grants were succeeding. This pretty clearly indicated that it needed to change its strategy, which it did, eventually doubling and then tripling that success rate. It’s unashamedly a basic measure, but then it’s hard to argue that a funder is doing well if barely any of its grants succeed.
  • Tracking whether “the patient is getting better”—whether that means biodiversity increasing around the lake or malaria prevalence falling. This of course indicates nothing about cause. But sometimes funders find that their target problem has gone away, or moved, or morphed, and they should morph with it.
  • Measuring the costs that funders’ application and reporting processes create for nonprofits. The prize here is huge: Avoidable costs from application and reporting processes in the UK alone are estimated at about £400 million a year.
  • Hearing what grantees think. Grantees can’t risk offending organizations that they may need in the future, so funders need to ask. Listening to beneficiaries and constituents benefits medicine, public services, and philanthropy.
  • Clarifying what you’re learning, and telling others. Engineers Without Borders finds that its annual Failure Report—a series of confessions from engineers in the field—is invaluable for internal learning and accountability. Funders pride themselves on taking risks, and many programs just don’t work out; there should be no shame in learning.
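As promised above, here is a minimal sketch of the success-rate tally, in Python. The grant records and outcome categories are invented for illustration, not drawn from any real funder’s books:

```python
from collections import Counter

# Hypothetical grant records: each grant is judged on its own terms.
# Names and outcomes below are made up purely for illustration.
grants = [
    {"name": "Grant A", "outcome": "did well"},
    {"name": "Grant B", "outcome": "did all right"},
    {"name": "Grant C", "outcome": "failed"},
    {"name": "Grant D", "outcome": "did well"},
    {"name": "Grant E", "outcome": "failed"},
]

tally = Counter(g["outcome"] for g in grants)
total = len(grants)
for outcome, count in tally.most_common():
    print(f"{outcome:>13}: {count}/{total} ({count / total:.0%})")
```

The point is not the code but the discipline: record, for every grant, whether it succeeded on its own terms, and watch the proportions over time.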

We hope that these tools are useful and that funders use them, and we welcome any discussion.

