More than people realize, science is often shaped by the tools at hand. For those who study human behavior, these tools have changed rapidly in the last ten years. Social scientists today are as likely to recruit participants for their studies on the Internet as anywhere else, and the effects of this change have been profound.
Conducting Online Research on Amazon Mechanical Turk and Beyond—written by Leib Litman and Jonathan Robinson, with contributions from Jesse Chandler, Gabriele Paolacci, David Hauser, Michael Hall, and Neil Lewis, Jr.—is part of SAGE’s Innovations in Research Methods series and aims to help researchers navigate the new online environment. Equal parts guide for beginners and resource for experts, the book contains an overview of the online participant platforms most commonly used by social scientists. In addition, there are guides for getting started with online research and several detailed discussions of the factors that determine the success of online research projects.
Using CloudResearch data from tens of thousands of online studies conducted over several years, the book explores topics like data quality, the demographics of MTurk participants, sources of sampling bias in online studies, the generalizability of findings from MTurk, and the ethics of online studies. A primary aim of the book is to help researchers understand how market research panels differ from Mechanical Turk and how various sources of online participants can complement one another.
With its combination of practical advice and discussion of important theoretical concepts, Conducting Online Research on Amazon Mechanical Turk and Beyond will help students and those new to online research understand the landscape and get started running their first projects. It will also interest and engage even the most experienced online researchers, helping everyone conduct more effective online research projects.
You can order the book, read a preview, or view a detailed Table of Contents below. As the book’s content is updated, we will make those updates available here as well.
The first chapter provides a historical overview of online platforms, focusing on Mechanical Turk and its popularity among academic researchers. The chapter also introduces market research panels and explains how these panels are becoming increasingly accessible to academic researchers.
Chapter 2 provides an overview of Mechanical Turk, describing the MTurk ecosystem from the perspective of both requesters and workers. The chapter outlines features the MTurk ecosystem has developed to help workers and requesters get the most out of their MTurk experience, covering topics such as MTurk’s reputation mechanism, worker forums, and worker scripts. An invaluable aspect of the chapter is the voice of MTurk workers, who were asked to read the chapter and comment on the MTurk experience.
Chapter 3 provides an introductory discussion of platforms researchers can use to develop surveys, experiments, and other online studies. The chapter also serves as a guide, showing researchers how to conduct a study on Mechanical Turk step-by-step. Readers can follow along as the authors replicate a previously published study. Finally, the chapter discusses several best practices for MTurk studies.
Chapter 4 introduces Mechanical Turk’s application programming interface (API) and explains how third-party apps can make Mechanical Turk research more effective by accessing the API. Much of the chapter focuses on CloudResearch (formerly TurkPrime) and how CloudResearch enables social scientists to get more out of Mechanical Turk.
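For readers curious what programmatic access to Mechanical Turk looks like in practice, the sketch below builds the parameters for posting a HIT with Amazon’s boto3 SDK. It is a minimal illustration, not an example from the book: the task title, reward, and question XML are placeholder values, and posting against the sandbox endpoint requires your own AWS credentials.

```python
# Minimal sketch of creating a HIT through the MTurk API using boto3.
# All parameter values are illustrative placeholders.

# MTurk's sandbox endpoint lets requesters test HITs without paying workers.
SANDBOX_URL = "https://mturk-requester-sandbox.us-east-1.amazonaws.com"

def build_hit_params(title, reward_usd, max_assignments, question_xml):
    """Assemble the keyword arguments for an MTurk create_hit call."""
    return {
        "Title": title,
        "Description": "A short academic survey.",   # placeholder
        "Keywords": "survey, research",
        "Reward": f"{reward_usd:.2f}",               # Reward is a string in USD
        "MaxAssignments": max_assignments,           # number of unique workers
        "AssignmentDurationInSeconds": 1800,         # 30 minutes per worker
        "LifetimeInSeconds": 86400,                  # HIT expires after one day
        "Question": question_xml,
    }

def post_hit(params, endpoint_url=SANDBOX_URL):
    """Create the HIT on MTurk; requires boto3 and valid AWS credentials."""
    import boto3  # third-party package: pip install boto3
    client = boto3.client("mturk", region_name="us-east-1",
                          endpoint_url=endpoint_url)
    response = client.create_hit(**params)
    return response["HIT"]["HITId"]

params = build_hit_params(
    title="Decision-making survey",
    reward_usd=0.50,
    max_assignments=100,
    question_xml="<ExternalQuestion>...</ExternalQuestion>",  # placeholder XML
)
```

Third-party tools such as CloudResearch layer features on top of API calls like these, which is why they can offer capabilities that the standard requester interface does not.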
Chapter 5 deals with data quality in online platforms. Adopting a theoretical perspective, the chapter asks what makes data high or low in quality. Then, the chapter discusses specific threats to data quality in online studies and the best practices for mitigating these threats.
Chapter 6 draws on data from CloudResearch to present a comprehensive picture of the demographic composition of Mechanical Turk. The chapter covers topics such as the overall size of the MTurk population, how quickly workers turn over on the platform, and who is responsible for completing most HITs. In addition, the chapter discusses the geographic distribution of workers and how well workers’ basic demographics line up with the U.S. population. The chapter also deals with the reliability and validity of MTurk’s demographic data.
Chapter 7 discusses sampling, focusing on standard MTurk practices, the bias that such practices engender, and ways to avoid such bias. Topics covered include time-of-day bias, bias based on participant experience, and how the pay offered for a study can bias a sample.
Chapter 8 discusses the representativeness of data collected from non-probability samples, including Mechanical Turk. How well do results obtained on MTurk replicate with other samples? Which studies are well suited to Mechanical Turk, and which are not? The focus of this chapter is on applying the “fit-for-purpose” framework to online research and helping researchers think through issues of online sampling.
Chapter 9 describes best practices for conducting longitudinal research, drawing on multiple case studies. Platforms like Mechanical Turk make longitudinal research more feasible by removing several burdens inherent in face-to-face studies. The chapter describes how to conduct a longitudinal study and minimize attrition.
Chapter 10 provides an overview of market research platforms and discusses the advantages and disadvantages of Mechanical Turk relative to other platforms. Once again, part of this chapter focuses on helping researchers understand when a study is fit for one platform or another.
Finally, Chapter 11 discusses the ethics of conducting research on Mechanical Turk and other online platforms. The chapter situates online research within the historical context of human subjects research and presents data to inform the ethical practices of researchers using Mechanical Turk.