How Non-Response Bias Can Affect Research Surveys

Aaron Moss, PhD

An Example of Non-Response Bias in Action: 2020 Presidential Polls

After the 2020 U.S. Presidential election, an experienced pollster named David Miller thought he knew why the polls underestimated support for Donald Trump. He described his view in the Washington Post, writing:

“I conducted tracking polls in the weeks leading up to the presidential election. To complete 1,510 interviews over several weeks, we had to call 136,688 voters. In hard-to-interview Florida, only 1 in 90-odd voters would speak with our interviewers. Most calls to voters went unanswered or rolled over to answering machines or voicemail, never to be interviewed despite multiple attempts.”

In summarizing the problem, Miller wrote, “… we no longer have truly random samples that support claims that poll results accurately represent opinions of the electorate. Instead, we have samples of ‘the willing,’ what researchers call a ‘convenience sample’ of those consenting to give us their time and opinions.”


What is Non-Response Bias?

What Miller diagnosed was non-response bias: a problem that arises when the people who respond to a poll or survey differ from those who do not. There are many reasons why some people respond to a survey and others don’t, but the consequence is the same: the people researchers end up studying are systematically different from the people missing from the study.

Pollsters have known for decades that people are increasingly unwilling to participate in polls and that this reluctance creates problems. In 2016, for example, White voters without a college degree were especially unlikely to participate in polls, and many forecasts seriously underestimated support for President Trump as a result. Even though pollsters thought they had fixed this problem heading into 2020, the 2020 polls also underestimated support for President Trump, likely because voters low in social trust avoided the polls. As the last two Presidential cycles show, correcting for one form of non-response bias doesn’t necessarily protect polls from other forms of systematic non-response.


How Does Non-Response Bias Affect Online Surveys?

It’s easy to see how non-response bias affects traditional polling. Because polling relies on random sampling, anything that systematically affects who participates in the poll can lead to biased results. Online polling faces many of the same challenges.

But most behavioral research is not conducted with random sampling. In fact, with most online surveys, researchers know they are drawing from convenience samples. Studies that aim to make frequency claims about a population, like a poll, are the most sensitive to non-response bias (see an overview of issues with online polling here).

However, when researchers are interested in learning about the association between variables (are people with higher incomes interested in a new product?) or testing the effect of experimental manipulations (does webpage layout A or B cause people to spend more time on the page?), non-response bias is less of a threat. And, for any type of online research study, there are multiple ways to reduce non-response bias.


8 Methods to Avoid Non-Response Bias in Online Surveys

  1. Set Demographic Quotas: One active way to manage non-response is with quotas that control sample composition. Quotas allow researchers to ensure that participants within several important or hard-to-reach groups are represented in the research. In online polls and other research that aims to generalize to the U.S. population, it is standard practice to apply quotas matched to the U.S. Census (the first sketch after this list shows how quota targets can be computed).
  2. Use Post-Survey Weighting: Another way to address non-response bias is to weight the data after it is collected. Weighting is common when researchers struggle to sample everyone within a population but want to do as much as possible to make the results representative (the first sketch after this list also shows a simple weighting calculation).
  3. Manage Study Length: Although dropout is not strictly “non-response,” long studies lose more participants than shorter ones, increasing the possibility of bias. Making studies only as long as they need to be and minimizing the strain on participants is a good way to minimize bias.
  4. Monitor Dropout: Dropout threatens the validity of a study, especially when participants are more likely to drop out of one condition in an experiment than another. Any time you see high or uneven dropout, it is worth investigating (the second sketch after this list shows one way to check for uneven dropout).
  5. Consider Compensation: On many online platforms, participants are more willing to engage in long, complicated, or boring studies when they are compensated for doing so. Providing compensation adequate to the difficulty of the task reduces non-response bias.
  6. Avoid Leading Study Advertisements: When advertising or naming an online study, it is best to use a generic name, such as “psychology study,” so that the study’s topic does not bias who chooses to participate.
  7. Provide Study Reminders (When Possible): Some studies require participants to provide data over multiple waves or timepoints. In these longitudinal or tracking studies, reminding participants about each wave of data collection and providing as much opportunity to participate as possible helps reduce non-response across waves.
  8. Warn Participants about Downloads and Other Unusual Components: Some studies require participants to download reaction time software or to turn on their camera while completing a task. While this may be what the research requires, not all participants are willing to do it. Telling participants about these study-specific requirements before they agree to participate reduces attrition within the study. Even so, some study designs carry a significant chance of non-response.
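
As a concrete illustration of the first two methods, the sketch below computes census-matched quota targets and simple post-survey weights. It is a minimal sketch rather than a production procedure: the benchmark proportions, age groups, sample counts, and the use of pandas are all assumptions made for the example.

```python
# Minimal sketch: census-matched quotas and post-survey weighting.
# The benchmark shares, age groups, and response counts below are
# hypothetical and exist only to illustrate the calculations.
import pandas as pd

# Hypothetical population benchmarks (e.g., drawn from census tables).
benchmarks = {"18-29": 0.21, "30-44": 0.25, "45-64": 0.33, "65+": 0.21}
target_n = 1000  # desired number of completed responses

# 1. Quota targets: how many completes to collect in each group.
quota_targets = {group: round(share * target_n) for group, share in benchmarks.items()}
print("Quota targets:", quota_targets)

# 2. Post-survey weights: population share divided by achieved sample share,
#    so under-represented groups count for more and over-represented groups for less.
responses = pd.DataFrame({
    "age_group": ["18-29"] * 150 + ["30-44"] * 300 + ["45-64"] * 400 + ["65+"] * 150
})
sample_shares = responses["age_group"].value_counts(normalize=True)
responses["weight"] = responses["age_group"].map(lambda g: benchmarks[g] / sample_shares[g])
print(responses.groupby("age_group")["weight"].first())
```

In this toy sample, the 18-29 and 65+ groups are under-represented relative to the benchmarks, so their responses receive weights above 1, while the over-represented middle groups are weighted down.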
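
For the fourth method, one simple check is to compare dropout rates across experimental conditions and test whether they differ. The counts below are hypothetical, and the chi-square test from SciPy is only one reasonable way to flag differential attrition.

```python
# Minimal sketch: flagging uneven dropout across two experimental conditions.
# The started/completed counts are hypothetical.
from scipy.stats import chi2_contingency

started = {"layout_A": 250, "layout_B": 250}
completed = {"layout_A": 230, "layout_B": 190}

for cond in started:
    dropout_rate = 1 - completed[cond] / started[cond]
    print(f"{cond}: {dropout_rate:.1%} dropout")

# 2x2 table of completed vs. dropped out, one row per condition.
table = [[completed[c], started[c] - completed[c]] for c in started]
chi2, p, _, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
# A small p-value suggests dropout differs by condition, which means the
# manipulation itself may be shaping who remains in the final sample.
```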

None of the actions above is a perfect remedy for non-response bias, and some apply to certain types of research more than others. Nevertheless, it is worth taking the time to consider whether each one applies to your research project, because minimizing non-response bias leads to more accurate and precise findings.
