Early March marked a historic turning point in the United States. Despite pockets of outbreak on the West Coast and a clear acceleration of infection across Western Europe, most Americans still regarded the novel coronavirus as a distant threat. The threat abruptly became proximal when, on March 2, the first community-acquired case of COVID-19 was confirmed in New York State.
Amazon’s Mechanical Turk (MTurk) is a microtask platform that was launched in 2005 to help computer scientists and tech companies solve problems that do not have computer-based solutions (Pontin, 2007). Yet, as MTurk grew, behavioral researchers realized they could use it to access tens of thousands of people from across the world.
When requesters post tasks on MTurk, workers complete those tasks on a first-come, first-served basis. This method of task distribution has important implications for research, including the potential to introduce a form of sampling bias known as the superworker problem: a small group of highly active workers ends up completing a disproportionate share of available tasks.
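To make the mechanism concrete, the toy simulation below (a sketch, not Amazon's actual allocation logic) shows how first-come, first-served distribution lets a small group of highly active workers absorb most HITs. The worker counts and activity rates are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical worker pool: 5% "superworkers" check for new HITs 20x
# as often as casual workers. These numbers are invented for
# illustration, not measured from MTurk.
workers = [{"rate": 20.0 if i < 50 else 1.0, "completed": 0} for i in range(1000)]
weights = [w["rate"] for w in workers]

# Post 10,000 HITs one at a time. On a first-come, first-served
# platform, the worker who sees a HIT first takes it, which we
# approximate by sampling workers in proportion to how often they check.
N_HITS = 10_000
for _ in range(N_HITS):
    random.choices(workers, weights=weights)[0]["completed"] += 1

super_share = sum(w["completed"] for w in workers if w["rate"] > 1) / N_HITS
print(f"Share of HITs taken by the most active 5% of workers: {super_share:.0%}")
# Typically prints ~51%: half the sample comes from 5% of the pool.
```

Because those frequent workers also accumulate experience with common research paradigms, samples drawn this way can be systematically less naive than the broader worker population.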
Data collection online has become standard practice, even for major institutions like the CDC, but unless care is taken to ensure subjects are honest and attentive, the results can be seriously misleading.
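One common way researchers take that care is to embed instructed-response attention checks and drop respondents who fail them before analysis. The snippet below is a minimal sketch of that screening step; the column names, instructed answers, and pass rule are assumptions for illustration, not a standard prescribed by the CDC or any particular platform.

```python
import pandas as pd

# Toy survey export; in practice this would come from a file, e.g.
# df = pd.read_csv("responses.csv")
df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "attn_check_1":  ["agree", "agree", "disagree", "agree"],  # instructed answer: "agree"
    "attn_check_2":  ["blue", "blue", "blue", "red"],          # instructed answer: "blue"
    "outcome":       [4.2, 3.9, 4.8, 2.1],
})

# Keep only respondents who answered both instructed-response items correctly.
passed = (df["attn_check_1"] == "agree") & (df["attn_check_2"] == "blue")
clean = df[passed]

print(f"Dropped {len(df) - len(clean)} of {len(df)} respondents for failing checks")
print(f"Mean outcome among attentive respondents: {clean['outcome'].mean():.2f}")
```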
MTurk is a dynamic, ever-changing platform. Each month, new people sign up as “requesters” and “workers,” and people who once used the platform regularly sometimes stop. This means that any report about MTurk—from how much people make, to data quality, to demographics—is subject to change over time.
To successfully conduct online studies, researchers need to understand the differences among participant platforms, how to maintain data quality, how to recruit the right participants, and how to carry out complex projects efficiently. The new book Conducting Online Research...