Study: Amazon MTurk Remains a High-Quality Source of Research Participants

By Aaron Moss, PhD, Cheskie Rosenzweig, MS, & Leib Litman, PhD

During the last decade, academic researchers have increasingly turned to the Internet as a fast and efficient way to recruit research participants. The most commonly used platform, by far, has been Amazon’s Mechanical Turk (MTurk). MTurk’s popularity with researchers across several disciplines has raised concern that the platform may be oversaturated. Specifically, researchers worry that the average MTurk participant has been exposed to common measures and manipulations hundreds or thousands of times, making people on MTurk less naive, and therefore less than ideal, as research participants.


By the Numbers: Tapped Out, or Barely Tapped?

In a new study published in PLOS ONE, our team at CloudResearch challenges the idea that MTurk is no longer useful for academic research. Specifically, we used metadata from thousands of studies run on the CloudResearch platform and conducted two experiments to show that:

  • more than 50,000 new workers join MTurk each year
  • new workers provide quality data
  • researchers can reach new and inexperienced workers by changing their sampling practices

Despite MTurk’s popularity, one reason researchers worry about overusing participants is that it’s unclear how many people are on MTurk in the first place. Using data from multiple years, our paper shows there are roughly 86,000 U.S. workers on MTurk each year, and that more than half of these participants are new to the platform in any given year. The existence of so many new participants poses a bit of a puzzle: how can most studies re-sample the same participants if thousands of new participants join the platform every month?

Although several factors contribute to a study’s sample composition, one overlooked factor is the way researchers sample from MTurk. Our research shows that the worker qualifications commonly used by researchers almost guarantee they will sample experienced workers because inexperienced workers do not meet these qualifications. To test whether the standard qualifications are necessary to ensure data quality, we conducted two experiments comparing inexperienced workers to workers sampled with standard qualifications.
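To make the mechanism concrete, below is a minimal sketch of how a researcher might invert the usual experience filter when posting a HIT through Amazon’s MTurk API (via the boto3 Python client). This is an illustration rather than the paper’s procedure: the HIT parameters are placeholders, the cutoff of fewer than 100 approved HITs is an assumed threshold that flips the common “100+ HITs approved” qualification, and the long numeric IDs are Amazon’s built-in system qualification types.

```python
import boto3

# Sketch: post a HIT that screens FOR inexperience by inverting the usual
# "100+ approved HITs" requirement. The qualification type IDs are Amazon's
# built-in system qualifications; all other values are placeholders.
mturk = boto3.client("mturk", region_name="us-east-1")

qualification_requirements = [
    {
        # "Number of HITs Approved" system qualification: fewer than
        # 100 approved HITs, i.e., target inexperienced workers.
        "QualificationTypeId": "00000000000000000040",
        "Comparator": "LessThan",
        "IntegerValues": [100],
        "ActionsGuarded": "Accept",
    },
    {
        # Locale system qualification: restrict the HIT to U.S. workers.
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
        "ActionsGuarded": "Accept",
    },
]

response = mturk.create_hit(
    Title="Short research survey",            # placeholder
    Description="A 10-minute academic survey.",
    Keywords="survey, research",
    Reward="1.50",                             # USD, passed as a string
    MaxAssignments=100,
    AssignmentDurationInSeconds=1800,
    LifetimeInSeconds=86400,
    Question=open("external_question.xml").read(),  # ExternalQuestion XML
    QualificationRequirements=qualification_requirements,
)
print("Created HIT:", response["HIT"]["HITId"])
```

The only substantive change from a conventional setup is the comparator on the “Number of HITs Approved” qualification: LessThan instead of GreaterThanOrEqualTo, which opens the HIT to the workers that standard filters exclude.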


What Does This Mean for Researchers?

Across both studies, we found that inexperienced workers provided data quality similar to that of the most experienced workers. In addition, even though both groups yielded large effect sizes on several classic experiments, inexperienced workers reported significantly less prior exposure to those experiments. Our findings show that sampling workers who are new to the platform is a way to gather quality data from naive research participants.

In the paper, we recommend that researchers use sampling practices suited to the purposes of their study, and we conclude: “Targeting inexperienced workers will significantly increase the available pool of MTurk workers, mitigate the super-worker problem, and help solve the problem of non-naivete all while allowing researchers to benefit from the advantages that originally made MTurk an attractive source of research participants.”
