Illustration of data icons passing on a conveyor belt in front of the US Capitol (Illustration by Hugo Herrera)

One of the great—if largely unsung—bipartisan congressional acts of recent history was the passage in 2018 of the Foundations for Evidence-Based Policymaking Act. In essence, the “Evidence Act” codified the goal of using solid, consistent evidence as the basis for funding decisions on trillions of dollars of public money. Agencies use this data to decide on the most effective and most promising solutions for a vast array of issues, from early-childhood education to environmental protection.

Five years later, while most federal agencies have created fairly robust evidence bases, unlocking that evidence for practical use by decision makers remains challenging. One might argue that if Evidence 1.0 was focused on the production of evidence, then the next five years—let’s call it Evidence 2.0—will be focused on the effective use of that evidence. Now that evidence is readily available to policymakers, the question is, how can that data be standardized, aggregated, derived, applied, and used for predictive decision-making?

Introducing the Field of Impact Sciences
This series, sponsored by the Center for Impact Sciences at the University of Chicago, is an exploration of the cutting edge of data and measurement: how new tools, systems, and technologies are making it possible to look forward and predict impact.

In the following conversation, two expert leaders—Nick Hart, president of the Data Foundation, and Jason Saul, founder and executive director of the Center for Impact Sciences at the University of Chicago’s Harris School of Public Policy—share thoughts about the next phase of the evidence movement.

Q. Can you summarize for us the goal of Evidence 2.0?

Nick Hart: It’s all about using the data. Evidence 1.0 is great: we’ve generated a wealth of better knowledge, and that is fantastic. But the real point is to make all that knowledge accessible and usable, so that our policymaking is better informed. It doesn’t make any difference if you’ve got the best study in the world if nobody uses it. We want all this research and evaluation to be open and understandable to all. That’s the goal!

Q. Jason, how do we get there?

Jason Saul: The crux of the issue is unlocking the data. We’ve generated hundreds of thousands of pieces of “evidence”—evaluations, research studies, and controlled trials published as PDFs. But there’s a pretty big difference between “evidence” and actionable data. Every piece of evidence uses different terminology and data definitions. The data are not coded in any standardized way; we have no common indexing or taxonomies for impact. Look at what Google did for website indexing, look at what Westlaw did for indexing case law, look at what the Human Genome Project did for indexing genetic research. We need an “impact index” to do the same for social science research.

Q. Is the government doing that?

Nick Hart: The Evidence Act actually set the stage for this via its data governance processes. One example: Congress passed another law in 2022, the Financial Data Transparency Act, that clearly says: publish financial information as searchable data, not just as written reports. We have to do that across the board. It’s a hugely exciting opportunity for government to build public trust in its institutions, in data, in evidence, and in the ability to communicate better with the American public using tools that are available to everyone today. That’s the democratization of data. Some government agencies are doing well, but many have a long way to go. It’s like changing the course of a large ship.

At the same time, it’s critical for the government to make rapid progress because of all the rapid advances in artificial intelligence. If we don’t, AI could still mine what’s out there and produce misinformation, disinformation, confusion. That would be Evidence minus-1.0, and nobody wants that!

Q. Jason, are you similarly optimistic?

Jason Saul: I am optimistic, but impatient. The federal debt stands at $32 trillion and growing, with historic levels of investment in social programs. Yet we are not seeing the return on that investment. I would argue that we don’t have a “resource” problem, we have a “resource allocation” problem—we still don’t know what works and where to place the right bets. In a prior life I was a public finance attorney, and I always wondered why there were no bond ratings for “impact”—that is, how many units of housing, education, employment, food security, or crime reduction are generated per dollar spent? And is that a good return? What if the ratings agencies suddenly started saying, “Hey, all those municipal bonds are going to be rated based on outcomes, not just ability to repay”? That type of market driver would increase demand for evidence and data because there would be financial consequences for results. We need to find better ways to connect evidence to finances.

Q. Is that feasible?

Nick Hart: It certainly is. What we have now is an expectation of evaluation for the purposes of learning. That’s what the Evidence Act does. And it’s having a cascading effect on state, county, and local agencies. That’s exciting. But the thing I would say about Evidence 2.0 is that we don’t know all the answers. This is the beginning of a conversation, and we should invite the broader community to be part of it. Nonprofits, donors, recipients: they should all speak up about how to measure success, so we can figure out where to go next and reach that point together.

Q. Is there indeed a role for community-based organizations and other local nonprofits to play in getting to Evidence 2.0?

Jason Saul: Nonprofits are a crucial part of the ecosystem, so they must be part of the evidence conversation. Nonprofits will benefit from data standards and more rational decision making. In fact, I would argue that nonprofits have been victims of an “evaluation industrial complex” that is biased against smaller organizations that can’t afford pricey evaluators. Just because a social program doesn’t have an evaluation doesn’t mean that it’s not “evidence-based” or effective. In a way, the current evidence standard structurally marginalizes smaller, community-based organizations that may in fact be highly effective. But because the definition of “evidence-based” is so limited, we are forcing policymakers to make limited choices.

Q. Of course, evidence of past results isn’t proof of future success. How do you factor that in?

Nick Hart: That’s true, but then you might question whether evidence is even the right word here, because everybody else uses probabilistic modeling in decision-making. We look at the best data we have at hand and we forecast the probability of success—and then we adjust, hopefully quickly, if there isn’t success. No trader on Wall Street is making evidence-based decisions. They make probabilistic decisions. But they use comparisons, benchmarks, and all kinds of other measures to do so. We have the muscles to do that in government; we just haven’t been applying them systematically. That’s what the Evidence Act is all about. I want people to realize that because of the Act, we’ve become more sophisticated in measuring impact.

Q. What are some “real world” ways of making the general public aware of what works and what doesn’t?

Jason Saul: There should be a common outcomes taxonomy for tagging all federal programs. The average person should be able to look at how much we’ve spent on each outcome that matters to them, understand the “cost per outcome” for every funded program, and see how that compares to others. The “data” that we make available to the public today is of little value for discerning what works. For example, the U.S. State Department reports on over 250 “f-indicators” for foreign assistance, such as “number of first responders trained on victim identification,” “percent of audience who recall hearing or seeing a specific USG-supported family planning/reproductive health (FP/RH) message,” and “number of investments in the digital ecosystem.” It’s tough for any taxpayer (or legislator) to make sense of that kind of data.

Also, we need to engage the capital markets better, and create data that they can understand and use on Wall Street. We need to build financial incentives for impact into municipal bonds, and also enable economists to start forecasting the ROI on government spending based on outcomes and real-world impact. All of a sudden you create a whole new motivation for policymakers and legislators to care about evidence. So, you know, I'm already thinking about Evidence 3.0!

