I think it’s worthwhile to return to Alana Conner Snibbe’s evocative Stanford Social Innovation Review article “Drowning in Data,” in which she observed, “Nonprofits often are collecting heaps of dubious data at great cost to themselves and ultimately to the people they serve.”

Why all this data collection?

Here we encounter a paradox: On the one hand, very few funders or the grantees they support have worked out clear and measurable objectives and indicators for assessing success. On the other hand, nonprofit agencies and their funders are earnestly collecting and reporting on mountains of data in the pursuit of … what? A frequent reply is: accountability.

But consider the following examples of frequently collected data:

Turnstile numbers. Many organizations—with the encouragement of their funders—commit significant staff time and resources to count each and every person with whom they have any kind of contact or whom they have “touched.” They also track activities and events. But to what end?

Turnstile numbers are little more than a way of suggesting what some call “scale.” But even for this limited purpose, turnstile numbers are weak—after all, “touches” such as website visits, or registrations for programs that people may attend once or twice or not at all, rarely add up to much of value. So turnstile numbers alone most certainly don’t tell the story of what of significance an organization is doing—let alone what it is accomplishing. Lots of counting, little meaning.

Outcomes. Over the past several decades, pressured both by the federal government, through the Government Performance and Results Act (1993) and its successor, the Government Performance and Results Modernization Act (2010), and by influential funders such as the United Way, public and social sector agencies have been working frantically and at considerable cost to collect outcome data. These data too are reported proudly and publicly. More counting—but also more meaning?

Outcome data do tell more of a story than turnstile numbers and are essential to managing performance day to day. But beware. Judith Gueron’s Stanford Social Innovation Review article “Throwing Good Money After Bad” describes the outcome data for programs seeking to help single mothers get off welfare by preparing them for employment and getting them jobs. The outcome data for the program in Grand Rapids, Mich., showed 76 percent of its participants working; in Atlanta, Ga., the figure was 57 percent, and in Riverside, Calif., 46 percent. Looks good for the Grand Rapids program—until you compare these outcomes with those of a randomly selected “control group” from the same city, 70 percent of whom were working. In Atlanta, 52 percent of the control group were working, and in Riverside, 38 percent. So which program had the greatest impact? Which created the most social good? Clearly it was Riverside, which improved the employment rate of its participants by 8 percentage points (versus 6 points for Grand Rapids and 5 points for Atlanta). Collecting and reporting outcome data alone, although a triumphantly quantitative exercise, lends only a questionable precision to assertions of a program’s effectiveness and to its claims of success.
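For readers who want the arithmetic spelled out, here is a minimal sketch in Python, using the figures cited above, of how ranking the three sites by raw outcomes and ranking them by net impact (the participant outcome minus the control group’s rate) produce different winners.

```python
# A minimal sketch: raw outcomes vs. net impact, using the figures cited above.

sites = {
    # site: (percent of participants working, percent of control group working)
    "Grand Rapids": (76, 70),
    "Atlanta": (57, 52),
    "Riverside": (46, 38),
}

# Ranked by raw outcome alone, Grand Rapids looks best.
best_outcome = max(sites, key=lambda s: sites[s][0])

# Ranked by net impact (outcome minus control), Riverside leads.
best_impact = max(sites, key=lambda s: sites[s][0] - sites[s][1])

print(f"Best raw outcome: {best_outcome}")   # Grand Rapids (76%)
print(f"Greatest impact:  {best_impact}")    # Riverside (+8 percentage points)

for site, (outcome, control) in sites.items():
    print(f"{site}: {outcome}% working, impact = {outcome - control} percentage points")
```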

To be clear: For the purpose of quotidian performance management it is essential to measure and monitor activities and associated outcomes—especially short-term outcomes. But in the end, doing only this is not sufficient to produce or demonstrate the creation of social value.

So what is to be done? Is there a way to navigate between the crushing Scylla of necessary accountability and the engulfing Charybdis of mind-numbing over-counting? As I argue in Working Hard—and Working Well: A Practical Guide to Performance Management (a companion to Mario Morino’s Leap of Reason: Managing to Outcomes in an Era of Scarcity), the answer is for nonprofit organizations and funders to invest in, build, and use strong internal performance management systems to monitor, learn from, and improve the quality and effectiveness of their work.

Performance management: keep it simple

Here is where some solid, useful advice can make all the difference, starting with the military acronym KISS (keep it simple, stupid). If an organization really knows what it is doing and why—that is, if it has a theory of change that is appropriate to the nature of what the organization wants to accomplish, meaningful (to key stakeholders), plausible (in that it makes sense to both stakeholders and key experts), doable (within the resources and capacities of the organization and, perhaps, its strategic partners), and assessable (with measurable indicators of progress and success)—then designing simple, focused (and limited), useful performance metrics really isn’t forbiddingly hard.

Once practitioners adopt such ways of thinking, they can use their knowledge, experience, and skills to develop a few trackable indicators that help them assess whether their programs are working (output indicators) and succeeding (outcome indicators) as intended. Performance management within a theory of change framework is at least one way to navigate between over-measurement and the need for accountability, and to grapple with what is real and what is illusory. It also provides an operational context for funders and not-for-profit organizations to agree on simple and useful metrics that can inform evaluative learning and point to the social benefits of grantees’ efforts and funders’ investments (grants).
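As an illustration only, here is a hypothetical sketch in Python of what such a small, theory-of-change-driven indicator set might look like for an imagined job-training program. Every name and target is invented for the example, not drawn from any real organization or framework.

```python
# A hypothetical sketch of a small, theory-of-change-driven indicator set for an
# imagined job-training program. All names and targets are invented for illustration.

indicators = {
    "outputs": {   # Is the program working as intended?
        "participants_enrolled": {"target": 120, "actual": 110},
        "training_sessions_delivered": {"target": 40, "actual": 38},
    },
    "outcomes": {  # Is the program succeeding for participants?
        "completers_employed_at_90_days_pct": {"target": 60, "actual": 54},
        "still_employed_at_180_days_pct": {"target": 50, "actual": 47},
    },
}

def progress(metric):
    """Share of target achieved, so managers can see at a glance what needs attention."""
    return metric["actual"] / metric["target"]

for group, metrics in indicators.items():
    for name, metric in metrics.items():
        print(f"{group}/{name}: {progress(metric):.0%} of target")
```

The point of keeping the set this small is exactly the KISS discipline above: a handful of output and outcome indicators tied to the theory of change, reviewed regularly, rather than mountains of turnstile counts.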

But agencies should develop such operational frameworks regardless of funders’ demands. They should do so because otherwise their performance will be weak, their success questionable, and ultimately their value to the people they serve minimal.

But enough talk. In part two of this series, I offer practical advice to help organizations implement results-focused performance management.
