In the world of human subjects research, Institutional Review Boards (IRBs) often conduct a risk-benefit analysis to assess whether a study is ethical. A standard applied across these assessments is how much risk participants will be exposed to compared with what they encounter in everyday life.
Amazon Mechanical Turk (MTurk) is a microtask platform that has been used for social science research for nearly a decade.
Early March 2020 marked a historic turning point in the United States. Despite pockets of outbreak on the West Coast and a clear acceleration of infection across Western Europe, most Americans still regarded the novel coronavirus as a distant threat. The threat abruptly became proximal when, on March 2, the first community-acquired case of COVID-19 was confirmed in New York State.
Amazon’s Mechanical Turk (MTurk) is a microtask platform launched in 2005 to help computer scientists and tech companies solve problems that lacked computer-based solutions (Pontin, 2007). Yet, as MTurk grew, behavioral researchers realized they could use it to access tens of thousands of people from around the world.
When requesters post tasks on Mechanical Turk (MTurk), workers complete them on a first-come, first-served basis. This method of task distribution has important implications for research, including the potential to introduce a form of sampling bias known as the superworker problem.
Online data collection has become standard practice even for major institutions like the CDC, but unless care is taken to ensure that subjects are honest and attentive, the results can be seriously misleading.