Survey Screening Questions: Good & Bad Examples

Aaron Moss, PhD

The idea behind survey screening questions is pretty simple: to identify people who are eligible for your study, you can… just… ask. People who qualify continue to the survey, while those who do not are directed out.

Although the idea may be simple, writing effective screening questions is more complicated. Screening questions need to be short and easy for respondents to answer. They should avoid binary response options, but they also shouldn’t include too many choices. The response options should appear in a sensible order, and the overall screener shouldn’t contain too many questions. Above all, you need to ensure you don’t lead people to guess which attributes you’re looking for.

To balance all these objectives, and to show examples of good and bad survey screening questions, we wrote this post. Let’s dig in.


Why Use Screening Questions?

As mentioned above, the purpose of survey screening questions is simple. But you might ask: why do I need these questions at all? Why can’t I just find people who I know meet my criteria?

In many cases you can. Nearly all online research platforms collect demographic data from participants in a process known as “profiling.” The data a platform gathers can then be used to “target” participants who hold certain characteristics.

The problem, however, is that platforms can never ask in advance every question researchers may want to use when sampling. Some platforms, such as Prime Panels, allow researchers to screen participants as they enter a study: those who qualify continue directly into the survey, while those who don’t are redirected away. Fortunately, we have some advice for how to construct these screening questions.

*NOTE: Platforms like Amazon Mechanical Turk and other “microtask sites” generally don’t allow survey screeners within a study. This is because participants accept tasks one at a time, and compensating people who do not qualify is difficult. Instead of a within-survey screener, you can set up a two-step screening process to identify people who meet specific characteristics on Mechanical Turk. After you identify people who meet your criteria in Step 1, you can use CloudResearch’s tools to “include” those workers in Step 2 of your study.


How To Construct Screening Questions: Do’s and Don’ts

1. Maximize Efficiency

Think of your survey screening questions like that old game Guess Who? You want to zero in on your target respondents with as few questions as possible. This means you should start broad and then get specific.

For example, imagine you want to sample people who watched some event on television the night before, say, a Presidential debate or a major sports championship.

Start with a question asking about general TV viewing habits. Then get more specific by asking people which kinds of programming they like to watch, and finish by asking if they watched the program you are interested in.

The series of questions may look something like this:

  1. How often do you watch TV?

    a. Every day
    b. A few times per week
    c. Once per week
    d. Less than once per week
    e. Never

  2. What kind of programming do you like to watch? (select all that apply)

    a. Live sports
    b. News
    c. Entertainment
    d. Soap operas
    e. Cable news/politics
    f. Comedy
    g. Documentaries

  3. Did you watch any of these shows or events last night? (select all that apply)

    a. National evening news
    b. The Presidential debate
    c. Live sports
    d. Saturday Night Live
    e. The Tonight Show
    f. Jeopardy
    g. Blue Planet

As you can see, it is okay to ask more than one question to refine your sample. But you should typically be able to qualify people for the survey with four questions or fewer.

2. Avoid “Yes” or “No” Response Options

It’s tempting to think that a series of questions with “yes or no” response options is an efficient way to zero in on your participants. However, “yes or no” questions have important limitations.

For one, these questions give respondents a 50/50 chance of qualifying for your study even if they are not paying attention! Questions with “yes or no” response options may also lead respondents to say “yes” more often than “no,” either because they want to appear agreeable or because agreeing is just a little bit easier than disagreeing with a statement. This response pattern is known as acquiescence bias.

You may occasionally need to ask a yes-or-no question, but avoid relying on them too often, and never use one as the only question that qualifies people for your study.

3. Consider the “Right” Number of Response Options

If two response options are too few, is more always better? Should questions be written with ten or fifteen response options? The answer is that it depends.

Fifteen response options is likely too many for all but the most unusual circumstances. As you add response options to a question, you decrease the odds that people will select the options that qualify them for the study. In other words, if your criteria are so refined that few people will meet them, you will have trouble filling your sample.

On the other side of the equation, you don’t want to give people too few options, because doing so makes it easier to guess or answer dishonestly and still qualify for the study. The risk of having too few answer choices is clear in a question like the one above asking which events people watched on TV the night before.

Somewhere between five and seven response options is often best, though there are times when you may scale this number up or down depending on your needs.

4. Don’t Neglect the Order of Response Options

Now that you know how many response options to shoot for, don’t forget to consider how to arrange them. 

Many questions have response options with an assumed order. For instance, a question that asks people how often they watch TV has a clear order.

How often do you watch TV?

a. Every day
b. A few times per week
c. Once per week
d. Less than once per week
e. Never

For other question types, however, there is no implied order. Take, for example, a question about how people prefer to watch TV shows and movies:

How do you prefer to watch TV shows and movies?

a. On a TV
b. On my computer (laptop or desktop)
c. On a tablet
d. On my phone
e. Through a streaming player (e.g., Roku, Chromecast)

When no order is inherent to your response options, consider randomizing or shuffling how they appear to participants. Randomizing answer options reduces order effects, such as the tendency to pick the first choice, and makes it a little harder for people to guess their way into your study.
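
Most survey platforms can randomize answer order with a built-in setting. If you are assembling a screener by hand, though, the underlying idea is just a uniform shuffle. Here is a minimal Python sketch using the options from the example above (the function name is ours):

    import random

    def present_options(options):
        # Return a shuffled copy so each respondent sees the answer
        # choices in a different order; the original list is untouched.
        shuffled = list(options)
        random.shuffle(shuffled)
        return shuffled

    viewing_options = [
        "On a TV",
        "On my computer (laptop or desktop)",
        "On a tablet",
        "On my phone",
        "Through a streaming player (e.g., Roku, Chromecast)",
    ]
    print(present_options(viewing_options))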

5. Don’t Lead Respondents

Writing good survey questions is both a science and an art. But one basic principle is that questions shouldn’t lead respondents to provide certain answers.

When writing survey screening questions, you want to ensure that your questions do not tip off respondents to the criteria that will help them qualify for the study. For example, if you are looking to recruit parents of young children, you should first establish that people have children before you ask how many or for specific ages. Questions that are loaded with assumptions may bias people’s answers in a certain direction or make it easier for fraudsters to misrepresent themselves.

Possibly Leading Question

How many children do you have?

a. None
b. One
c. Two
d. Three
e. Four or more

Better Approach

Do you have children under the age of 18?

a. Yes
b. No

(If yes) How many children under age 18 do you have?

a. One
b. Two
c. Three
d. Four or more

6. Watch for “Maximizers”

Some participants may seek to maximize their chances of qualifying for a survey by selecting several (or all) answer choices within screening questions. Keep an eye out for these maximizers, especially when the odds of someone endorsing more than half of the items in a question are low.

For example, anyone who selects more than half of the answer options in a question asking what they viewed on TV last night can probably be safely omitted from the study; it is unlikely any one person watched so many things in one evening. Combined with in-survey attention checks, disqualifying maximizers during screening can help you reduce fraud and low-quality responses.
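
If you export screener responses for review, flagging maximizers is easy to automate. Below is a minimal Python sketch of the rule of thumb described above; the function name and the 50% cutoff are our own illustrative choices, not a fixed standard:

    def flag_maximizer(selected, num_options, threshold=0.5):
        # Flag a respondent who endorsed more than `threshold` of the
        # available options in a select-all-that-apply screener.
        return len(selected) / num_options > threshold

    # Example: the TV question above listed 7 shows; this respondent checked 6.
    watched = ["National evening news", "The Presidential debate", "Live sports",
               "Saturday Night Live", "The Tonight Show", "Jeopardy"]
    if flag_maximizer(watched, num_options=7):
        print("Probable maximizer: review before admitting to the study")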


How Screening Affects Incidence Rates

Survey screening questions lead to an important concept that affects the cost of your research: incidence rate. Within market research, the incidence rate is the percentage of respondents who pass your screening questions and go on to participate in your survey.

Because low incidence rate studies require screening lots of participants to find just a few who qualify, they cost more than studies whose participants are easier to sample.
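
To make the math concrete, here is a small worked example in Python. The sample sizes and the per-respondent cost are hypothetical:

    # Hypothetical numbers: 1,000 people answer the screener and 150 qualify.
    screened = 1000
    qualified = 150

    incidence_rate = qualified / screened  # 0.15, i.e., a 15% incidence rate

    # If the panel charges, say, $1.00 per person screened, each completed
    # response effectively costs screened / qualified dollars.
    cost_per_complete = 1.00 * screened / qualified
    print(f"Incidence: {incidence_rate:.0%}; cost per complete: ${cost_per_complete:.2f}")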

The second thing to know about survey screening is what to do with participants who are ineligible for the study. These participants must be redirected to a specific URL because online panels use a standardized set of codes to understand why participants are dropped from studies. You can usually find this URL during the survey setup process.
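
In practice, this amounts to a branch at the end of your screener: qualified respondents continue into the survey, and everyone else is sent to the panel’s redirect link. A rough sketch with hypothetical URLs follows; your panel’s setup page will list the real links and their status codes:

    # Hypothetical URLs; real panels publish their own redirect links,
    # each tied to a status code such as complete, terminate, or quota full.
    SURVEY_URL = "https://survey.example.com/start"
    TERMINATE_URL = "https://panel.example.com/return?status=terminate"

    def route_respondent(qualified):
        # Qualified respondents continue into the survey itself; everyone
        # else goes back to the panel so it can record why they were dropped.
        return SURVEY_URL if qualified else TERMINATE_URL

    print(route_respondent(qualified=False))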


Finding Qualified Participants with CloudResearch

Regardless of where you recruit participants, finding people who meet your demographic qualifications is only half the battle. The other half is finding people who are willing to invest the time, effort, and attention necessary to provide quality data. At CloudResearch this is where we excel.

We improve data quality from any source with our patented Sentry system. Sentry vets participants with both technological and behavioral measures before they enter your study. Just as survey screening questions seek to identify participants with specific characteristics, Sentry identifies people who are likely to provide high quality data. People who are inattentive, show suspicious signs, or misrepresent themselves are kept from even starting your survey.

To learn more about Sentry and how CloudResearch can help you reach the participants you need, contact us today!
