(Illustration by Kumé Pather)
In 2008, Cass Sunstein, a professor at Harvard Law School, and Richard Thaler, an economist at the University of Chicago, introduced the concept of “nudges,” light-touch interventions that government officials, policy makers, and public planners could use to steer people to make better decisions, such as saving for retirement or practicing healthier habits. Building on research by behavioral scientists and economists, Sunstein and Thaler argued that “choice architecture,” or how information is framed and how choices are presented, can be constructed in ways that encourage people to move in a particular direction while maintaining their freedom of choice.
Their ideas inspired the UK-based Behavioural Insights Team, which now counts some 200 “Nudge Units,” or research affiliates, around the world. Many of these teams work with government agencies, conducting randomized controlled trials (RCTs) to determine whether a nudge will effectively boost vaccination rates, for example, or encourage drivers to pay parking fines. A new paper by Stefano DellaVigna, a professor of economics at the University of California, Berkeley; Woojin Kim, a doctoral candidate in economics at the University of California, Berkeley; and Elizabeth Linos, an associate professor of public policy and management and a behavioral scientist at the Harvard Kennedy School, asks an important question: What happens to the findings after Nudge Units help cities test interventions? Does the gathering of evidence guarantee better outcomes, or are there “bottlenecks” in how cities adopt the findings?
The researchers discovered that many potentially impactful nudges were never implemented and set out to understand why. They found that organizational inertia played a critical role: Government agencies were more likely to incorporate the results of RCTs if they applied to activities they were already doing.
The Behavioural Insights Team opened a North America office five years after its founding to assist local and federal agencies with improving the delivery of government services. “These Nudge Units ran experiments with government agencies to find what was effective for their context,” Kim says, “but then what did they do with the nudge? That was the natural question that brought forth this paper.”
The researchers contacted 30 North American cities that ran 73 RCTs with a Nudge Unit across 67 city departments. Describing their outreach to city officials, Kim says, “We approached them and asked, ‘Remember the experiment you ran to test whether this nudge communication would work in your city? Well, what do you do with it now?’” Cities adopted a nudge in only 27 percent of cases. The researchers wondered what factors were to blame.
Following interviews with personnel in several cities, the researchers developed three models to explain nudge adoption (or the lack thereof). First, they tracked whether the staff who had carried out an experiment with a Nudge Unit remained in the same roles. Second, they assessed city infrastructure, such as staffing levels and resources. Finally, they examined how officials communicated the nudge: Was it delivered in a new letter to taxpayers urging compliance, for example, or folded into a preexisting form of communication? Testing these three models, the researchers found that the last was the most predictive of adoption.
“Our bottom-line finding was that nothing was conclusively predictive, except for whether the communications that delivered the nudge were preexisting or not,” Kim says. When a Nudge Unit collaborated with city officials on building behavioral science into a letter that residents already received each year, the rate of adoption shot up by more than 50 percentage points. But when a city had to send a new letter to deliver the nudge, the rate of adoption hovered around 10 percent. In other words, adding an intervention to an existing process made adoption much more likely.
For many cities, Kim says, the case for investing up front to ensure that adoption takes place is far from obvious. When an experiment produces promising evidence, researchers tend to assume that adoption will follow organically. Instead, the findings demonstrate that implementation, the work of incorporating evidence into policymaking and practice, requires deliberate thinking about routines and structures. In recent years, governments have devoted more time and money to running experiments that help officials evaluate which interventions work. Alongside generating that evidence, investigating how adoption will take place now seems just as important.
The researchers “remind us that local government agencies are organizations,” says Jonas Hjort, a professor of economics at University College London. “In doing so, they uncover overlooked barriers to evidence adoption and important new questions we now need to investigate.”
Find the full study: “Bottlenecks for Evidence Adoption” by Stefano DellaVigna, Woojin Kim, and Elizabeth Linos, Journal of Political Economy, forthcoming.
Read more stories by Daniela Blei.
