Solving Public Problems: A Practical Guide to Fix Our Government and Change Our World

Beth Simone Noveck

448 pages, Yale University Press, 2021

Buy the book »

Bill Clinton famously said, “Nearly every problem has been solved by someone, somewhere. The challenge of the 21st century is to find out what works and scale it up.” Good evidence is the cornerstone of public problem solving, and social innovators and public servants need a systematic strategy for efficiently scavenging for solutions.

There is no shortage of information or ideas out there. This is why it is vital to have agile methods for scavenging for solutions, evaluating whether a solution worked “there,” and determining whether a solution that worked there will work “here,” in our context and community. The level of research rigor expected of a graduate student is an unaffordable luxury for an activist, social innovator, or public servant who may have only a few days, especially in a crisis, and yet she must avoid reinventing unnecessary wheels.

While we may start by looking at university-based empirical studies to de-risk the choice of solution, the effective changemaker must go beyond academic publications. In addition to formal evidence-based academic studies of interventions, there is also an exploding number of practical pilots, entrepreneurial experiments, positive deviance (people whose success against the odds makes it possible to spot unusual solutions), and open innovation challenges. Such innovations from the field merit review even when they are not the subject of academic study, because they work in the real world for the real people who use them.

Unfortunately, advocates of evidence evaluation, especially of RCTs—whom I nickname the randomistas—have diverged from the contestistas, who focus their enthusiasm and energy on the use of incentive prizes to spur social and policy innovations. Both groups are growing but growing apart. Thus, in Solving Public Problems, I offer a concise and participatory set of methods and tools for efficiently expanding our toolkit of solutions by mining the treasure trove of academic and social innovation solutions.—Beth Simone Noveck

* * *

The desire for solutions that work is driving greater use of RCTs as proof of the impact of behavioral interventions. In an RCT, one group of individuals (called the treatment group) receives an intervention, and the other (the control group) does not. People are randomly assigned into one of the two groups, and ideally, neither participants nor researchers know who is in which group.

By comparing two groups that are, on average, identical because they were chosen at random, we can control for a whole range of factors and so understand what is working and what is not. This is why many people consider RCTs to be the gold standard of evidence. The awarding of the 2019 Nobel Prize in Economics to Michael Kremer, Abhijit Banerjee, and Esther Duflo for their experimental work on alleviating poverty has brought even greater attention to RCTs.
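To make the mechanics concrete, here is a minimal, purely illustrative sketch (in Python) of the core logic of an RCT: participants are assigned to a treatment or a control group at random, and the average outcomes of the two groups are compared. The participant counts, outcome values, and size of the effect are all simulated for the example, not drawn from any real trial.

```python
import random
import statistics

# Illustrative only: simulate an RCT by randomly assigning participants
# to treatment or control and comparing average outcomes. All numbers
# here are invented for the sake of the example.

random.seed(42)

participants = list(range(200))
random.shuffle(participants)                      # random assignment
treatment_ids, control_ids = participants[:100], participants[100:]

def simulated_outcome(treated: bool) -> float:
    """Hypothetical outcome: a noisy baseline plus a small boost if treated."""
    baseline = random.gauss(50, 10)
    return baseline + (5 if treated else 0)

treatment_outcomes = [simulated_outcome(True) for _ in treatment_ids]
control_outcomes = [simulated_outcome(False) for _ in control_ids]

# The estimated effect is simply the difference in group averages.
effect = statistics.mean(treatment_outcomes) - statistics.mean(control_outcomes)
print(f"Estimated average treatment effect: {effect:.2f}")
```

Because assignment is random, any systematic difference between the two groups’ averages can, in expectation, be attributed to the intervention rather than to pre-existing differences among participants.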

Where to Find RCTs and Their Reviews: Evidence Clearinghouses

A number of open repositories have emerged to make it faster to find solutions backed up by RCTs. These evidence clearinghouses produce what are known as “systematic reviews” of primary studies. A systematic review attempts to collect all the empirical evidence that fits predefined criteria in order to answer a specific question. Systematic reviews “sit above” RCTs with regard to confidence in findings, as they consolidate multiple RCTs.

Those criteria are intended to reduce bias, systematize and organize the evidence, and provide reliable findings on which to base decisions. There are now tens of thousands of systematic reviews, and using what the evidence-based policy expert Peter Bragge calls a “chronically underused asset” as your entry point can take much less time than scouring the eighty million individual research studies published since 1665!
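As a rough illustration of how a review consolidates findings across studies, one common technique is a fixed-effect meta-analysis, which pools the effect estimates from several RCTs while weighting each study by the inverse of its variance. The sketch below uses invented study names, effect sizes, and standard errors; it is a toy example of the arithmetic, not a description of how any particular clearinghouse works.

```python
# Toy fixed-effect meta-analysis: pool several (invented) RCT effect
# estimates using inverse-variance weights, so that more precise studies
# count for more in the combined estimate.

studies = [
    {"name": "Study A", "effect": 0.30, "se": 0.10},
    {"name": "Study B", "effect": 0.12, "se": 0.08},
    {"name": "Study C", "effect": 0.22, "se": 0.15},
]

weights = [1 / s["se"] ** 2 for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect estimate: {pooled_effect:.2f} (standard error {pooled_se:.2f})")
```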

For example, the American What Works Clearinghouse (WWC) provides reviews of interventions in education and evaluates whether the research was credible and whether the policies were effective. The WWC catalogs more than ten thousand studies of educational interventions. It reviews the studies of programs (not the programs themselves). Hundreds of trained and certified reviewers evaluate these research studies according to set standards and then summarize the results. The WWC also publishes practice guides for practitioners, developed by an expert panel. The Clearinghouse for Labor Evaluation and Research (CLEAR) in the United States does the same for workforce-development topics.

In the United Kingdom, the What Works Network comprises ten similar centers that review studies of social interventions and create a systematic way to find evidence. Centers include the National Institute for Health and Care Excellence (NICE) and the Centre for Homelessness Impact. In addition to rating and ranking studies, they help UK policy makers with their searches and create toolkits and other accessible products designed to make solutions more easily findable.

In Australia, BehaviourWorks Australia at the Monash Sustainable Development Institute combines evidence reviews with searching for broad solutions and doing experiments. It conducts rapid evidence reviews in time frames of three to eight weeks by drawing on an in-house database of five thousand studies. When the Australian state of New South Wales was deciding whether to implement a container deposit law (ten cents back for every can or bottle returned), BehaviourWorks reviewed research and data from forty-seven examples of such schemes around the world. It found that, on average, the programs recovered three-quarters of drink containers. In 2017, New South Wales rolled out its Return and Earn deposit scheme.

Health Systems Evidence (HSE) provides systematic reviews and evidence briefs for Canadian policy makers, but it also contains a repository of economic evaluations and descriptions of health systems and reforms to them. HSE’s sister project, Social Systems Evidence, expands access to evidence reviews in twenty areas of government policy, including climate action, social services, economic development, education, housing, and transportation. The Minister of Health in Ontario, Canada, now requires any policy memo proposing a new intervention to include a search of one of these two databases in order to demonstrate that the proposal is grounded in evidence. (For a complete list of evidence clearinghouses, see solvingpublicproblems.org.)

Randomistas Versus Contestistas: The Limits of RCTs

Social scientists who either run experiments or conduct systematic reviews tend to be fervent proponents of the value of RCTs. But that evidentiary hierarchy—what some people call the “RCT industrial complex”—may actually lead us to discount workable solutions just because there is no accompanying RCT.

A trawl of the solution space shows that successful interventions come from far more varied places: entrepreneurs in business, philanthropy, civil society, and social enterprise, as well as business schools that promote and study open innovation, often by designing competitions to source ideas. Uncovering these exciting social innovations lays bare the limitations of confining a definition of what works only to RCTs.

Many more entrepreneurial and innovative solutions are simply not tested with an RCT and are not the subject of academic study. As one public official said to me, you cannot saddle an entrepreneur with having to run a randomized controlled trial, which they do not have the time or know-how to do. They are busy helping real people, and we have to allow them “to get on with it.”

For example, MIT Solve describes itself as a marketplace for socially impactful innovation designed to identify lasting solutions to the world’s most pressing problems. It catalogs hundreds of innovations in use around the world, like Faircap, a chemical-free water filter used in Mozambique, or WheeLog!, an application that enables individuals and local governments to share accessibility information in Tokyo.

Research funding is also too limited (and too slow) for RCTs to assess every innovation in every domain. Many effective innovators do not have the time, resources, or know-how to partner with academic researchers to conduct a study, or they evaluate projects by some other means.

There are also significant limitations to RCTs. For a start, systematic evidence reviews are quite slow, frequently taking upward of two years, and despite published standards for review, there is a lack of transparency. Faster approaches are important. In addition, many solutions that have been tested with an RCT clearly do not work. Interestingly, the first RCT in an area tends to produce an inflated effect size.

Moreover, limiting evidence to RCTs may also perpetuate systemic bias because of the underrepresentation of minority researchers and viewpoints in traditional academe and philanthropy. Limiting evidence to that which is developed by academics, rather than by communities, is likewise biased. Pushing decision-makers to rely only on RCTs could cause them to overlook important solutions developed by communities for communities.

And whereas RCTs answer the question “Did it work?” (and sometimes not even that), they do not explain how it worked or how satisfied people are. These are better measured using qualitative techniques such as structured or unstructured interviews.

An intervention might even “fail” an RCT but still be promising. In the 1990s, the US federal government ran an experiment in housing policy by giving some families a rental subsidy if they moved from a higher- to a lower-poverty neighborhood. The initial RCT found that “moving out of a disadvantaged, dangerous neighborhood into more affluent and safer areas does not have detectable impacts on economic outcomes four to seven years out.” The Moving to Opportunity project would have seemed to be a failure. In fact, subsequent research a decade later revealed that these relocation programs had profound and positive economic benefits for the children of the families that moved.

Furthermore, RCTs typically test only small, incremental interventions, like whether to give away a kilogram of lentils. Sometimes we need to be radical and try bigger things than we can measure with an RCT. For example, Professors Kevin Boudreau and Karim Lakhani, of Northeastern University and Harvard Business School, respectively, have spent the past decade trying to develop experimental approaches to empirically measuring how institutions make decisions and solve problems.

Successful RCTs, especially as evaluated through a third-party systematic evidence review, are only one way to find and evaluate solutions. In Solving Public Problems, I explore methods and tools for conducting a rapid field scan that goes beyond systematic reviews of RCTs to include experience-based learning from both documents and people.

These begin with cataloging “generic solutions.” This generic list captures the standard solutions—the acceptable Overton window of options—as well as new strategies enabled by the availability of technology and data, such as digital dashboards, advanced market mechanisms, behavioral insights, or new tools like chatbots.

Solutions surface in two ways: from documents and from people. In Solving Public Problems (and its free companion website solvingpublicproblems.org), I first outline a framework for learning from documents, including tips for how to search the academic literature, both qualitative and quantitative studies beyond RCTs, and how to take advantage of tools such as Google Scholar. I discuss a wide variety of documentary sources, from polling organizations to YouTube, and how to set up a news feed that sends you the news stories relevant to your query at the frequency you want. Examples of documentary sources include government sources like the Congressional Research Service or the OECD and nongovernmental sources like the National Academies, as well as how to make use of the grey literature coming from the 1,872 think tanks in the United States.

Second, I explore how to learn from people. The public problem solver should not just ask “What solutions are out there?” but “Who is out there?” Informed people are the fastest shortcut to learning what has been tried and what is working. Therefore, it is helpful to have a checklist for answering: Who is doing the work that we might connect with to learn about their experiences, steal a page from their playbook, and possibly collaborate?

They can help you accelerate your learning and get a handle on the solution space (including pointers to relevant RCTs and grey literature). You can save time and effort by finding people with the experience and know-how, further suggestions for what else to read and how to obtain those materials, and introductions to other people working in the field. I go into detail about how to create a map of relevant organizations and experts.

Thus, I explore categories of professionals from university experts to civil servants and clever tricks for using technology to identify them quickly, such as by subscribing to existing Twitter lists or creating new ones. Identifying relevant hashtags can be another excellent shortcut for finding people and content.

We also discuss question-and-answer sites like Quora and Reddit as places to find answers, and expert networks like the Scholars Strategy Network, set up by the Harvard professor Theda Skocpol in 2011. The Scholars Strategy Network is a nonprofit network of seventeen hundred academics interested in advising policy makers. VIVO is another easily searchable online knowledge network, in this case of scientific experts, primarily from the biomedical sciences. The Conversation offers another way to find who’s who in academe.

Using Community Engagement to Scavenge for Solutions and Evaluate What Works

In the absence of a clearinghouse or what-works database—and even with those resources—and in order to evaluate nontraditional, nonexperimental evidence, you can leverage community wisdom to conduct and evaluate your field scan, to determine what worked from the perspective not of researchers but of the people involved, and to understand whether a solution that worked “there” will work “here.”

In 2019, the United Nations’ International Labour Organization collaborated with The GovLab on a survey of successful innovations that used technology and data science to improve the regulation and enforcement of worker and workplace protections. The GovLab convened more than sixty professionals, who helped us to identify and assess dozens of promising innovations. We then further evaluated those solutions using a structured interview process.

Deliberative dialogues can be another way to engage people in mining for evidence. The health-policy expert John Lavis, creator of the Health Systems Evidence clearinghouse at McMaster University, used small groups of citizens to weigh evidence in Africa. In a pilot project in Burkina Faso, Cameroon, Ethiopia, Nigeria, Uganda, and Zambia, 530 individuals were given evidence briefs to read—systematic reviews covering interventions on seventeen topics ranging from preventing postpartum hemorrhaging to scaling up malaria control. The briefs were used to inform deliberative dialogues between researchers, policy makers, and stakeholders about the use and implementation of evidence in policy making.

When the UK College of Policing What Works Centre for Crime Reduction was established in 2013, it began by running a pilot “Evidence Boot Camp” to engage police officers and staff in crowdsourcing evidence and identifying solutions that work. Participants sifted through an average of 1,133 publications in order to arrive at a collection of about 50 relevant articles and build an initial evidence base.

For the changemaker, what is important is to have a well-worn checklist of contemporary methods for scavenging solutions from the randomistas and contestistas and to marry that process with the skill of engaging the communities we serve in finding and evaluating that evidence. Unlike the private entrepreneur who wants to invent a novel app or device, the public problem solver—whether a public servant or a social innovator—is seeking evidence of what has worked “there” and has a high likelihood of working “here.”