Jodi Nelson, director of Strategy, Measurement, and Evaluation at the Bill and Melinda Gates Foundation.

I recently had a chance to sit down with Jodi Nelson, director of Strategy, Measurement, and Evaluation at the Bill and Melinda Gates Foundation. Given the foundation’s size and breadth, I was eager to learn how it confronts complex choices around structuring its M&E function, working with grantees on M&E, and assessing its own impact. 

Jodi joined the foundation in 2009 and became its head of measurement roughly eighteen months ago. Previously, she was the director of Research and Evaluation at the International Rescue Committee.

Matthew Forti: What are the key responsibilities of your team at the foundation?

Jodi Nelson: I lead a six-person team at the foundation with a recently renewed mandate to strengthen measurement and evaluation (M&E). We work to create the system that enables staff and grantees to plan and measure their work effectively. Right now, we’re focused on four core objectives. First, we want to ensure that the tools, capacity, and support are in place for program officers and partners to design grants that are focused on measurable outcomes. Second, we are working with a core team of staff and leaders across the foundation to define an evaluation policy that will set common expectations and standards for how we define and conduct evaluations: When do we evaluate? Which approaches do we privilege and why? What do we do with the findings? How much money do we spend? This is my favorite piece of the work we have on our plate, as I think it will not only help to build evidence of what works, but also serve as an essential tool in strengthening our grantee relationships. Third, we convene the foundation’s M&E staff in a community of practice. My team is responsible for creating the space, time, and opportunities for us all to build a strong professional environment where people learn from each other and share innovation from inside and outside the organization. Finally, we are working with the foundation’s IT group to create the infrastructure to store, organize, and analyze contextual and results data so that leadership and program staff (and eventually grantees!) can access a common source of data for learning and decision making. Collectively, I would describe our job as solving for the “public goods” problem—we do what is inefficient for individual M&E staff to do on their own and what is necessary to ensure that the foundation institutionalizes an effective system that lasts and enables excellent M&E.


A few years ago, you co-authored a manifesto of sorts called “A Guide to Actionable Measurement,” in which you codify why and how the foundation measures. What was the most important principle to emerge from that work?

I would say there are two—one related to how to improve practice in a complex institution, and another that is substantive and related to good evaluation. Inspired by the co-chairs’ preference that the foundation measure strategically and to inform action—think of the question “What would you do differently if you had that data?”—actionable measurement privileges purpose over methodology or evaluation design. Purpose is about how you intend to use the information you gain from measurement to do something differently or effect some change. For example, you might want a government to replicate in a new context a program that worked in another. In this case, you should have pretty rigorous causal evidence from enough settings (and a diversity of settings) to demonstrate that the model works, is cost effective, and is replicable. This is where experimental design is an appropriate—or rather, necessary—evaluation design, complemented by sufficient process evaluation to understand what actually happened on the ground. Other times, your primary purpose might be to refine and improve an intervention or program and make sure you have sufficient data for management. In this case, strong performance monitoring and process evaluation would be much more appropriate. We think it’s important that rigor not be construed as “random assignment.” It’s more about being purpose driven and disciplined about how you use any evaluation design: Is it the right design for the decision you need to make? Is it telling you what you need to know to do something differently? I think we spend too much time debating philosophies. For me, the more pressing questions are about how to design development programs (or grants, or projects, or policies) so they are set up to achieve measurable outcomes that matter for people, and how to use M&E to get closer and closer to those outcomes. The genuine gold standard in this case is use, not a particular evaluation design.

The other principle I want to mention is about reporting burden. We have a real responsibility to be disciplined about what we ask our grantees to do in M&E and, in turn, what our grantees ask their beneficiaries to do. Rich or poor, no one likes to spend hours answering questions and completing surveys. This again goes back to clarity around purpose—we turn too quickly to surveys without fitting them into an appropriate evaluation design that serves a distinct purpose.

What kind of measurement most enables the Gates Foundation to continuously improve?

If you look at the most relevant data for the work we actually do as grantmakers, the variable we can most clearly control and adjust is the relationships we have with grantees and partners. This is the vehicle through which we achieve our impact. Four years ago we began surveying all of our active grantees through a process led by the Center for Effective Philanthropy, which resulted in our first organization-wide Grantee Perception Report. Our CEO has made responding to this feedback a top organizational priority—and you can really see the change that has resulted in how we communicate and work with grantees. For instance, we now have a dedicated team working on strengthening grantee relationships, senior leaders—our CEO and presidents—hold annual “grantee community calls,” and teams (for example, Agriculture, HIV, College Ready) have individualized plans to ensure that they are building solid relationships with their grantees.

With such a huge breadth of strategies and initiatives, how does the foundation assess the impact it is having as a whole?

I think it’s a fool’s errand to try to add up and compare your work when you are funding across so many different fields, where success is measured in such different ways. It may make sense to do this if you want to tell a story about your work, but who is the story for? For learning and decision-making, it’s much more pertinent to focus measurement at the level at which your work is planned (for us, initiatives or portfolios of grants, as well as particularly important individual investments that are expected to be especially catalytic). Good measurement requires clear, concrete outcomes and good planning; I’m not convinced any organization—especially a complex one—really plans to achieve outcomes that require all its pieces to line up just so.

What is the most important thing you have personally learned about measurement and evaluation over the past eighteen or so months in this role?

The biggest lesson for me is that my job is more about organizational change than it is about being an evaluation expert. An organization can have great M&E people and expertise, as we do, but that expertise won’t actually lead to anything unless there’s alignment up and down the organization around what enables success. Some examples include: leaders who continually ask their teams to define and plan toward measurable outcomes, consistent expectations for staff and partners about what constitutes credible evidence for decision making, executive leaders who understand and sponsor change that can be tough and take a long time, and tools and resources for staff and grantees to integrate rigorous planning and M&E into their day-to-day work.

