What Went Wrong With 2020 Presidential Election Polls?

By Leib Litman, PhD, Shalom Jaffe, Cheskie Rosenzweig, MS, Aaron Moss, PhD & Jonathan Robinson, PhD

Shortly after it became clear that the Presidential election would be closer than polls forecast, many people began asking: how did the polls underestimate support for President Trump again? Although the undercount of Trump support came as a surprise to many, it was somewhat less surprising to us. 

For months, we’ve been studying people’s willingness to participate in phone polls and to tell pollsters truthfully whom they support. What we’ve found is a significant number of so-called “shy voters.” In online polls we conducted from August through October, covering nearly 4,000 voters in total, 12% of Republicans said they would be reluctant to share their true voting intentions in a phone poll, more than double the 5% of Democrats who expressed the same sentiment. Additionally, many voters say they simply don’t want to be bothered by pollsters. Nearly four in ten people said they would not answer the phone if they knew the call was from a pollster, with Republicans even less likely than Democrats to take the call. 

This “shy voter” effect was even greater in key battleground states, such as Pennsylvania, where 16% of Republicans and 10% of Democrats said they wouldn’t honestly answer polls conducted over the phone.

What’s behind the “shy voter” phenomenon? Many voters say it’s due, in part, to today’s political divisiveness. According to our research, the tense political climate has led a growing number of Americans to avoid polls out of fear that their political leanings may become public. Voters expressed concern that their conversations with pollsters might be recorded, opening themselves to potential harassment if their political views became widely known in their communities.


Are “Shy Voters” Affecting Election Poll Accuracy?

Pollsters have known for decades that people are increasingly unwilling to participate in polls—an issue referred to as non-response. An even more serious problem occurs when some groups of people are systematically less likely to participate than others. That’s what happened in 2016 when White voters without a college degree were significantly less likely to participate in polls, resulting in forecasts that underestimated support for President Trump. 

Even though pollsters thought they fixed this problem heading into the 2020 election, correcting for non-response in one way doesn’t protect polls from systematic non-participation among groups of voters in other ways. 

To better understand the issue, consider the following. Some people, including those in racial minority groups, those with less education and lower socio-economic standing, those who are younger, and those who are less politically and civically engaged, are less likely to participate in polls than others. These differences in participation across demographic groups present a grave problem for polls because they violate the foundational requirement of a probability poll: that every person in the population has a known, nonzero chance of being included in the sample.

To address this problem, pollsters use methodological and statistical techniques that correct for non-response bias. Those techniques are, however, not perfect, and sometimes the corrections introduce other types of bias. 


Why Weighting Isn’t Enough to Correct for Missing Data

Correcting for non-response bias is simple, in theory. Imagine that people with a college degree participate in polls at twice the rate of people without a college degree. Pollsters can correct this imbalance by counting each response from people without a college degree twice, a technique known as weighting. Polls commonly weight their samples by multiple variables so that the final sample approximates the US population. 
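
To make this concrete, here is a minimal sketch of that kind of weighting in Python. Every number is made up for illustration, and the population education split is an assumed benchmark: each education group is weighted by its population share divided by its sample share before estimating candidate support.

```python
# A toy post-stratification weighting example with invented numbers:
# college graduates participate at twice the rate of non-graduates,
# so they make up 2/3 of respondents instead of the assumed 35%.

import pandas as pd

# Hypothetical raw poll of 600 respondents.
sample = pd.DataFrame({
    "education": ["college"] * 400 + ["no_college"] * 200,
    "supports_candidate_a": [1] * 180 + [0] * 220 + [1] * 120 + [0] * 80,
})

# Assumed population benchmark (e.g., from census figures): ~35% college graduates.
population_share = {"college": 0.35, "no_college": 0.65}

# Weight = population share / sample share for each education group.
sample_share = sample["education"].value_counts(normalize=True)
sample["weight"] = sample["education"].map(
    lambda group: population_share[group] / sample_share[group]
)

unweighted = sample["supports_candidate_a"].mean()
weighted = (
    (sample["supports_candidate_a"] * sample["weight"]).sum()
    / sample["weight"].sum()
)

print(f"Unweighted support: {unweighted:.1%}")  # 50.0%, tilted toward graduates
print(f"Weighted support:   {weighted:.1%}")    # ~54.8%, adjusted to population shares
```

In this toy example, up-weighting the under-sampled group moves the estimate from 50% to roughly 55%, because the two education groups happen to differ in whom they support.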

While weighting helps correct for bias in who pollsters are able to reach, weighting cannot correct for sources of bias that are not included in a polling model. In other words, you can’t adjust for what you don’t measure or can’t anticipate. So, when a lack of participation in polls is driven by fear in a politically divisive environment or other factors pollsters don’t account for—as our data indicates happened—non-response becomes particularly difficult to correct (see here for a more in-depth discussion about problems with weighting).  
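
A small simulation makes that limitation clearer. In this hedged sketch, every parameter is invented: support is 50% overall and identical across education groups, but supporters answer the phone at a 30% rate versus 40% for everyone else. Because the “shyness” is unmeasured and unrelated to education, re-weighting by education leaves the estimate biased.

```python
# Simulated illustration (invented parameters) of non-response driven by an
# unmeasured factor. Weighting on education cannot remove the bias because
# the reluctance to respond has nothing to do with education.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical population: education is independent of candidate support.
population = pd.DataFrame({
    "education": rng.choice(["college", "no_college"], size=N, p=[0.35, 0.65]),
    "supports_candidate_a": rng.random(N) < 0.5,
})

# Unmeasured "shy voter" effect: supporters respond at 30%, others at 40%.
response_prob = np.where(population["supports_candidate_a"], 0.30, 0.40)
respondents = population[rng.random(N) < response_prob].copy()

# Re-weight respondents so their education mix matches the full population.
pop_share = population["education"].value_counts(normalize=True)
resp_share = respondents["education"].value_counts(normalize=True)
respondents["weight"] = respondents["education"].map(pop_share / resp_share)

weighted = (
    (respondents["supports_candidate_a"] * respondents["weight"]).sum()
    / respondents["weight"].sum()
)

print(f"True support:      {population['supports_candidate_a'].mean():.1%}")  # ~50%
print(f"Weighted estimate: {weighted:.1%}")                                   # ~43%, still biased
```

Even with the education weights applied, the estimate lands around 43% rather than 50%: weighting fixed the variable that was measured while the unmeasured driver of non-response went uncorrected.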


How to Improve Polling Going Forward

Correcting for the biases outlined above depends first on admitting there’s a problem, but the solution involves more than that. For polling to be more accurate and more informative, pollsters need to do a better job incorporating social science into the polling process. Polls are in the business of counting, but social scientists are in the business of understanding people’s thoughts and feelings with careful measurement. 

CloudResearch’s research into poll respondent honesty is an example of this social science-focused approach. We created measures that assessed people’s fears and anxieties, collected in-depth, open-ended essay responses to more deeply understand how people think, and designed our studies to reveal the reasons people behave the way they do. The difference between polls and social science may be subtle, but it is critical if we want to better understand human beings and predict future behavior.

Political polls have long influenced the policy positions of political candidates and elected leaders by providing information about what is important to voters. Polls today may be more important than ever as news networks and political parties have used them to decide which candidates will participate in debates and even how many questions candidates will be asked in those debates. Unless pollsters correct for recent mistakes, polls could become increasingly viewed as partisan tools used solely to boost the prospects of specific candidates. That would further erode public confidence in the polling process and perhaps even our democratic system of government.
