Foundations collect reams of data about how the issues we fund are changing, what other funders are doing, and how grantees are performing. We do this to hold our grantees and ourselves accountable for what we set out to do, and to surface opportunities for improvement and increased impact. Yet we dedicate little space to reflecting on and making sense of that data, or to evolving and adapting our strategies; instead, we limit these practices to perfunctory meetings and stuff them into already full workloads.

At the Packard Foundation, we rely on our program staff members to act as trusted and intelligent filters for many information sources. They listen closely to grantees and partners working in the same fields, and tap into a variety of inputs—both qualitative and quantitative, including third-party evaluations, external research, and site visits—to make well-reasoned decisions that help drive positive change. We studied how Packard staff take in, make sense of, and use all of this strategic information, and found that while our processes for effective and efficient grantmaking transactions are many and strong, our supports for evolving our strategies and using evaluative information are few.

The Value of Strategic Planning & Evaluation
In this ongoing series of essays, practitioners, consultants, and academics explore the value of strategy and evaluation, as well as the limits and downsides of these practices.

To be clear, we have well-established practices for tracking progress and for working with third-party evaluators to analyze and judge how we’re doing. But too often the motivation behind these activities is internal reporting requirements or a sense that we ought to engage in a systematic analysis of how we’re doing because it’s good practice, rather than a genuine learning need. As a result, the data we collect and the lessons we learn fail to deliver value to the program staff on the front lines, who are making day-to-day decisions about resource allocation. We can do better.

What would it look like if we truly used our data to generate meaningful insights, and then used those insights to drive better decisions and greater impact?

No doubt, user-friendly tools for managing and sharing data and information are part of the equation (at Packard we’re working on building such a platform with Fluxx Labs). It’s also important to have access to good data. There are a number of promising efforts underway to aggregate and improve data for social good, including the Gates Foundation’s Grand Challenge for increasing the interoperability of social good data, and Foundation Center’s IssueLab, Philanthropy In/Sight, and Reporting Commitment.

But even if we have fabulous tools and accessible, usable data, we won’t be truly data-informed unless we evolve our habits around learning from that data.

We do not need more internal requirements and processes. We need more urgency. The great organizational psychologist Edgar Schein argues that true organizational learning and change happens only when there is a real threat of pain. Schein says: “Anxiety inhibits learning, but anxiety is also necessary if learning is going to happen at all. But to understand this, we're going to have to speak about something managers don't like to discuss—the anxiety involved in motivating people to ‘unlearn’ what they know and learn something new.”

How might we generate productive anxiety and create spaces where funders are using their abundant data to generate meaningful insight to inform better action? Here are a few things I’ve learned through trial and error over the past couple of years as Packard’s evaluation and learning director:

  1. The issues we are tackling and the communities we’re working to benefit need to drive learning, not foundation bureaucracy. At Packard, I have found that the more we try to formalize and “require” learning, the less applicable insight and strategic value we produce. While leaders must be champions of organizational learning, we can’t be too prescriptive about when and how it happens.
  2. Our evaluations need to address important learning needs and surface timely insights if they are to serve as real inputs to strategy evolution. Too often, mandatory evaluations end up recapping what program staff already know and/or the results come too late to inform decision-making. The good news is that we’re seeing more emphasis on use-driven evaluation; for example, there are growing bodies of work around developmental evaluation and strategic learning.
  3. We simply need to carve out the space for reflection and learning. Program staff members interact with massive amounts of data on a daily basis—data they’re formally collecting, insights generated through their interactions in the field, and publicly available data sets that we can only expect to grow. But we aren’t reserving ample time for sense-making or prioritizing it. At Packard we’re experimenting with creating more intentional spaces so that we can learn and unlearn. The tricky part is making—not mandating—space and staying connected to the pressing (and anxiety-producing) issues in our fields.

Those are just a few starting points. I’m sure there’s a lot more we can do to provoke learning, including using what we know about cognitive biases, and thinking creatively about integration with workflow. What do you think? How else might funders generate productive anxiety? And do we need it?

