Collecting quality data begins with selecting the right participants. A phrase common in many research circles is that participant selection should be “fit for purpose.” This means the participants a researcher selects for a study should be well suited for finding answers to the questions that motivated the study.
However, data quality does not end with participant selection. As the study progresses, researchers also need to be aware of different threats to data quality and should know how to manage these threats through data analysis. So let’s look at some threats to data quality and practical ways to minimize their impact on your data.
Ensuring data quality is a continuous process that evolves as a research project progresses. After researchers select the right participants and the right measures for their study, there are several ways to ensure participants are attentive, honest and engaged with the study during data collection.
Selecting the right participants for a research project is a complicated decision. Often, the decision is influenced by the research question and by convenience. But when researchers are selecting among different sources of online participants, it is worth considering how different platforms may help or hinder the pursuit of quality data.
Some platforms, like Mechanical Turk (MTurk), allow researchers to obtain greater participant engagement than other online panels do. Researchers should therefore consider whether the length of their study, the nature of the task and the level of commitment they are seeking from participants are well suited to the recruitment platform they choose.
How can researchers ensure participants remain attentive during online studies? One way is to design interesting studies. However, even if the content of a study isn’t naturally engaging, researchers can ensure the study is well designed, instructions are clear, and that participants understand the importance of paying attention and responding honestly.
Beyond study design, researchers can include multiple brief attention check questions. During data analysis, participants' performance on these questions can help researchers evaluate the quality of the data and decide whether to exclude participants from the dataset.
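The exclusion step described above can be scripted during analysis. The sketch below is a minimal, hypothetical example (column names, pass/fail coding, and the two-of-three threshold are all assumptions, not a prescribed standard); the key point is that the exclusion rule should be decided before looking at the data.

```python
import pandas as pd

# Hypothetical dataset: one row per participant, with pass/fail
# results (1 = passed, 0 = failed) for three attention check items.
df = pd.DataFrame({
    "participant_id": ["p1", "p2", "p3", "p4"],
    "check_1": [1, 1, 0, 1],
    "check_2": [1, 0, 0, 1],
    "check_3": [1, 1, 0, 1],
})

# Apply a pre-specified exclusion rule: keep participants who
# passed at least 2 of the 3 attention checks.
checks = ["check_1", "check_2", "check_3"]
df["checks_passed"] = df[checks].sum(axis=1)
clean = df[df["checks_passed"] >= 2].copy()

print(clean["participant_id"].tolist())  # participants retained for analysis
```

Keeping the rule in code also makes it easy to report how many participants were excluded and to rerun the analysis with and without the exclusions as a robustness check.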
Participants in online panels sometimes falsify demographic information, especially when doing so allows them to qualify for studies with high rewards. To discourage this practice, researchers can selectively target participants who have been previously profiled, either by the panel provider or, if using MTurk, by the researcher.
Dissociating the demographic screener from the study removes the benefit to the participant of providing false information and increases the likelihood that researchers will receive accurate demographic information. Researchers should always verify demographic information by also asking participants to report important demographics within the study.
People participating in online studies are human; given the impersonal nature of online data collection, some may be untruthful or attempt to deceive researchers.
Potential forms of deception in online studies include participants looking up the answers to knowledge-based questions online and participants misrepresenting their demographic information in order to qualify for studies recruiting specific populations.
Some online platforms, like MTurk, allow researchers to selectively sample participants with specific levels of experience on the platform. Even when this is not possible, researchers can ask participants whether they have previously encountered manipulations and measures similar to those used in the study, even if the wording is not identical.
Some online platforms allow people from all over the world to participate in studies. Although researchers can sometimes selectively recruit participants from specific countries, ensuring language comprehension is another effective way to improve data quality. On platforms like MTurk, researchers can create a qualification for people who pass a language comprehension test. In other online platforms, researchers may set up a screener to determine participants’ language comprehension before participants may enter the survey.
At CloudResearch, we regularly conduct research to understand the dynamics of different participant recruitment platforms. Our research is published in leading academic journals and can be used to help researchers make informed decisions about the platform best suited to the needs of their research project.
The validity of attention check questions is a rapidly evolving area of research and one where some assumptions are being overturned by evidence. At CloudResearch, we help researchers make sense of this changing information. We regularly evaluate different methods of assessing participant attention and look for ways to combine traditional assessments of attention with technology that measures what participants are doing while taking studies.
Verifying online respondents’ demographic information is difficult because researchers do not interact with participants. Nevertheless, researchers can use certain methods to establish confidence in people’s demographic information. For example, at CloudResearch, we profile workers on MTurk by randomly asking demographic questions over time and examining the consistency of each person’s responses. People who consistently provide the same demographic information — when there is no incentive to not be truthful — are likely telling the truth.
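The consistency idea described above can be illustrated with a small sketch. This is a hypothetical example of the general approach, not CloudResearch's actual implementation: the data, the helper function, and the agreement threshold are all assumptions for illustration.

```python
from collections import Counter

# Hypothetical profiling records: the same demographic question asked
# to the same worker at several points in time.
responses = {
    "worker_a": ["female", "female", "female"],   # identical answers
    "worker_b": ["25-34", "35-44", "25-34"],      # conflicting answers
}

def is_consistent(answers, threshold=1.0):
    """Treat a worker as consistent if the share of answers matching
    their most common response meets the threshold."""
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / len(answers) >= threshold

print(is_consistent(responses["worker_a"]))  # consistent over time
print(is_consistent(responses["worker_b"]))  # inconsistent over time
```

In practice a threshold below 1.0 might be used for attributes that can legitimately change over time (such as age bracket or employment status), while stable attributes would be held to stricter agreement.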
On other online platforms, participants are profiled when they join the platform. Then, over time, people are regularly asked follow-up questions to verify their demographic information.
Take the difficulty out of ensuring data quality by working with the experts at CloudResearch. We understand that maintaining high-quality data in online studies can be a difficult task, so we do much of the work for you. Before your study launches, CloudResearch engages with every participant using a peer-reviewed, patent-pending methodology that prevents inattentive participants from entering your study in the first place. The result is a cleaner dataset with fewer data quality problems. Because we pioneered this approach and our platform is the only one in the industry using it, our data quality is unparalleled.