In the world of human subjects research, Institutional Review Boards (IRBs) often conduct a risk-benefit analysis to assess whether a study is ethical. A standard question in these assessments is how much risk participants will be exposed to compared with what they encounter in everyday life.
When requesters post tasks on Mechanical Turk (MTurk), workers complete those tasks on a first-come, first-served basis. This method of task distribution has important implications for research, including the potential to introduce a form of sampling bias known as the superworker problem.
Amazon Mechanical Turk (MTurk) is a dynamic, ever-changing platform. Each month, new people sign up as “requesters” and “workers,” while people who once used the platform regularly sometimes stop. This means that any report about MTurk—from how much people make, to data quality, to demographics—is subject to change with time.
To successfully conduct online studies, researchers need to understand the differences between participant platforms, how to maintain data quality, how to recruit the right participants, and how to most efficiently carry out complex projects. The new book Conducting Online Research...
During the last decade, academic researchers have increasingly turned to the Internet as a fast and efficient way to recruit research participants. The most commonly used platform, by far, has been Amazon’s Mechanical Turk (MTurk). The popularity of MTurk among researchers from several disciplines has caused concern that MTurk may be oversaturated.
The American workplace has changed a lot since the 1960s, but at least one thing remains the same: men often earn more money than women. This gender wage gap has lingered for decades despite increased public attention and legislative focus. Today, women earn about 20% less than men (1,2).