The experimental study of campaign effects began approximately a century ago when Harold Gosnell, a pioneering political scientist at the University of Chicago, conducted controlled experiments to measure the effects of get-out-the-vote (GOTV) mailings on turnout. With a few notable exceptions, however, Gosnell’s prescient research design, in which households were assigned either to receive a campaign mailing or not, and turnout effects were measured using post-election voter turnout rolls, was neglected for decades.

Instead of conducting field experiments, researchers most commonly relied on surveys to measure the effect of campaign exposure on voter attitudes and turnout. In these survey-based studies, people were asked about their campaign exposure, and that reported exposure was correlated with political outcomes such as turnout or candidate choice. The surveys were often conducted using carefully constructed national samples selected to be representative of the electorate, but the research design was poorly suited to isolating the causal effect of campaign activity such as GOTV efforts. A key problem was that those who were targeted by campaigns, and those who recalled and reported campaign exposure, tended to be the most politically engaged. Consequently, survey-based research frequently produced estimates interpreted as very large campaign effects on turnout (in the 10-percentage-point range or larger), when the large correlations between reported exposure and voting may simply have reflected whom candidates targeted, or who reported exposure, rather than the true causal effect of campaign activity.


Beginning around 2000, real-world randomized experiments, which avoid mistaking spurious correlations for causal effects, moved to the center of the study of voter mobilization. Since the revival of the field-experimental study of campaign effects, there have been hundreds of studies measuring the effects of GOTV tactics on turnout. A recent meta-analysis by Columbia University professor Donald Green and Yale University professor Alan Gerber (one of this article’s authors) summarizes the findings of approximately 200 experimental studies. There are two key takeaways from that study: It is quite challenging to increase turnout, and commonly used interventions produce effects on turnout in the low single digits.

Here are some of that study’s more interesting findings about the effectiveness of voter mobilization. Pooling the results of 51 canvassing experiments yields an estimate that contact with a canvasser increases turnout by 4.3 percentage points. In a campaign, however, not every door-knock yields an interaction with the resident. If a typical canvassing campaign manages to interact with 25 percent of its targets, then the overall effect on the target group’s turnout is roughly 1 percentage point. Further, this turnout boost from a successful contact varies across political contexts: in a very high-turnout election, such as a presidential election, the expected return to a successful contact falls even lower, whereas when turnout is expected to be 50 percent or less, the expected effect rises somewhat.
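To make the arithmetic concrete, here is a minimal sketch in Python (our own illustration; the function name and structure are not from the meta-analysis) of how a per-contact effect is diluted into the overall effect on everyone a campaign tries to reach:

```python
# Back-of-the-envelope: translating a per-contact turnout effect into the
# overall ("intent-to-treat") effect on the full list of targeted voters,
# assuming only the people actually reached are moved by the contact.

def overall_turnout_effect(effect_per_contact_pp: float, contact_rate: float) -> float:
    """Overall turnout effect, in percentage points, on the entire target list."""
    return effect_per_contact_pp * contact_rate

# A successful canvass raises turnout by ~4.3 percentage points, but a typical
# campaign reaches only ~25 percent of the households it targets.
print(overall_turnout_effect(4.3, 0.25))  # 1.075, i.e., roughly 1 point overall
```

The same dilution applies to any tactic: halve the contact rate and you halve the overall effect, which is why per-contact effects overstate what a campaign can expect across its whole target list.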

Mailings and phone calls produce even more modest returns. A typical non-partisan mailer increases turnout by less than 0.5 percentage points, and being reached by a commercial phone bank for a GOTV call boosts turnout by less than 1 percentage point. In contrast, a completed call from a volunteer phone bank appears substantially more effective, raising turnout by nearly 3 percentage points.

A question that naturally arises is whether the effect varies depending on what is communicated during the GOTV effort. It turns out that it does. Appeals that exert some degree of social pressure to participate in voting, through methods such as very strong appeals to the social norm of voting, presentation of information from the turnout rolls, information about turnout levels in the voter’s neighborhood, or mention of the practice of recording turnout information in administrative records, increase the effectiveness of GOTV efforts.

Despite the voluminous prior research, few recent studies have focused on primary elections, developed novel messages for bringing new voters to the polls, or considered alternative messaging strategies in the same political context. A recent grant from the William and Flora Hewlett Foundation supported a large-scale GOTV mail and phone field experiment that allowed us to make initial progress in addressing these gaps in the literature. Key design features of this new work include the simultaneous use of multiple messages and assessment of outcomes in many different electoral contexts.

Consistent with prior work suggesting turnout effects are somewhat elevated in low-turnout contests, a generic GOTV mailing increased turnout by slightly less than 1 percentage point. Although these findings are somewhat exploratory, some messages appear to be especially effective; for example, messages that emphasized the social norm of voting (what sort of person does or does not vote) were twice as effective as a basic GOTV mailing, and some messages that focused on why primary voting was especially important seemed more effective than standard GOTV communications.

To summarize, the most basic finding from the experimental literature on campaign activity is that interventions such as a mailing, a call, or even a visit have modest effects on turnout, and that in high-turnout elections such as presidential general elections, increasing turnout by even a few percentage points is a significant challenge.

Our deepening understanding of what sort of effects we might expect from campaigns suggests an important general lesson: it illustrates the central role of research design in measuring the impact of interventions. There is a natural tendency to try to improve measurement by gathering more data on a question, but this strategy has important limits. In the case of survey research on turnout, for example, gathering more data yields an ever more precise estimate of the correlation between the quantities the survey elicits. But if that correlation is merely an association and you are interested in a causal effect, increasing N gets you a more and more precise estimate of the wrong thing. The decision to embrace randomized field experiments is what has allowed us to overcome this central barrier to building knowledge about how to increase participation.
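A small simulation makes that argument concrete. The sketch below is our own illustration, not code from any of the studies discussed, and its parameters are invented: voters’ underlying political engagement drives both whether a campaign contacts them and whether they vote, so the survey-style comparison of contacted and uncontacted voters stays badly inflated no matter how large the sample, while randomly assigning contact recovers the true effect.

```python
import random

random.seed(0)

TRUE_EFFECT = 0.01   # contact truly raises a voter's turnout probability by 1 point
N = 500_000          # a large sample: statistical precision is not the problem

def estimated_effect(randomize: bool) -> float:
    """Naive difference in turnout rates (percentage points) between
    contacted and uncontacted voters under a given contact mechanism."""
    votes = {True: 0, False: 0}
    counts = {True: 0, False: 0}
    for _ in range(N):
        engagement = random.random()                  # latent political engagement
        if randomize:
            contacted = random.random() < 0.5         # experiment: coin-flip assignment
        else:
            contacted = random.random() < engagement  # campaigns target the engaged
        p_vote = 0.3 + 0.4 * engagement + (TRUE_EFFECT if contacted else 0.0)
        votes[contacted] += random.random() < p_vote
        counts[contacted] += 1
    return 100 * (votes[True] / counts[True] - votes[False] / counts[False])

print(f"survey-style (observational) estimate: {estimated_effect(False):.1f} pp")
print(f"randomized-experiment estimate:        {estimated_effect(True):.1f} pp")
```

With these assumed parameters, the observational comparison lands around 14 percentage points even though the true effect is 1 point; only the randomized version estimates the right quantity, and adding data merely tightens each estimate around its respective target.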

More bluntly, it is often the absence of a credible research design, rather than a lack of data, that prevents informative program evaluation. This is a lesson not just for understanding how to get people to vote, but for all sorts of programs, whether they aim to increase civic engagement, diminish polarization, improve educational performance, or address deficiencies in the criminal justice system. If researchers (and those who fund them) do not ensure that research designs that allow learning are built into these programmatic efforts from the beginning, then no amount of data will allow us to learn from them.

