By Shalom Jaffe, Rachel Hartman & Leib Litman, PhD
Writing academic papers is hard—you need to explain what motivated your study, what question you tried to answer, how you tested your ideas, what you found, and what it all means. Inevitably, you will send your paper out for review, wait months for the response, and then receive comments that may or may not look so kindly on the work you did.
Well before you get to that point, however, you have to find a way to begin writing. Many people like to start by drafting the methods and results sections. Because these sections are relatively straightforward and describe what was already done, drafting them first is an easier way to notch a “win” before tackling the introduction and discussion.
But easier still isn’t easy! To help anyone starting a methods section, we’ve put together this blog highlighting the types of samples you can gather from CloudResearch and how you might accurately describe them within your paper. Hopefully, a quick read of this blog gets you ready to start writing.
You can access participants from two sources when using CloudResearch: Mechanical Turk (MTurk) and Prime Panels.
MTurk is a microtask platform that connects “requesters” (people who need tasks completed) with “workers” (people willing to do the tasks). Academics have been using MTurk for research since about 2010.
CloudResearch connects to MTurk through API integration. With this integration, researchers can use CloudResearch to set up and manage studies but still draw participants directly from MTurk. The suite of tools that CloudResearch makes available for using MTurk is called the MTurk Toolkit. The Toolkit is built on top of the Mechanical Turk platform.
With CloudResearch’s MTurk Toolkit, there are two ways to sample from MTurk: you can open your study to all eligible workers, or you can target a specific group of workers.
How you describe your MTurk sample in your Methods section is important. Like with any other sample, there are some details you should be sure to describe and some you can safely omit. The examples below illustrate what an accurate sample description may look like.
This example is for a fictional study, but it contains all the important elements of a sample description. It reports the important demographic information about participants, describes the sampling procedure and settings, and tells readers when the data were collected and how participants were compensated.
The second example is from a paper by Gratz et al. (2020) published in Suicide and Life‐Threatening Behavior.
Give some background about MTurk: Even though online participant recruitment is becoming the new norm, some reviewers are unfamiliar with the literature that documents the rise of online research. If you’re looking for sources to support your use of MTurk or your approach to sampling, our book is a comprehensive guide to online sampling and data collection, with a focus on MTurk. You might also consult our papers describing CloudResearch’s MTurk Toolkit and how to sample naïve people on MTurk.
For a review of common concerns and the evidence that does or doesn’t exist to support these concerns, see Hauser et al. (2019).
Mention if you used the Approved List: A major change to CloudResearch in the last year has been the introduction of our Approved List. If you used the Approved List to gather data, you should mention it, because this group of participants can only be accessed via CloudResearch. The Approved List vastly improves data quality over open MTurk samples, consists largely of naïve participants, and has overall demographics similar to those of the broader MTurk population. See here for more details.
Don’t confuse CloudResearch with Mechanical Turk: CloudResearch and Amazon Mechanical Turk are independent companies. As described above, CloudResearch’s MTurk Toolkit sits on top of MTurk and enables researchers to run flexible studies while drawing participants from MTurk.
Don’t confuse panels with MTurk: CloudResearch profiles workers on MTurk by gathering voluntary demographic data. A group of people who meet specific demographic criteria is sometimes called “a panel.” A panel of MTurk workers, however, is not the same as sampling from Prime Panels (see more below). The language here can get confusing, so it’s best to just describe specific demographic filters that you used.
Prime Panels is a participant recruitment platform that aggregates several market research panels, all integrated with SENTRY®, the highest quality control system in the industry. Prime Panels offers access to tens of millions of people worldwide and is especially useful for gathering samples that are more representative of the US population (similar to Qualtrics Panels, Dynata, or Lucid) or that include participants not available on microtask sites like MTurk. For example, with Prime Panels it is easy to gather data from participants matched to the US Census, participants within specific US zip codes, or members of minority or hard-to-reach groups.
Here are two examples of how to accurately describe participants recruited from Prime Panels in a Methods section. The first example is, again, a fictional paper sampling adults across several age ranges.
The second example is from a paper by Kroshus et al. (2020) published in JAMA Pediatrics:
Give background on Prime Panels: If MTurk sounds like a novel way to collect data to some reviewers, market research panels are even newer. Therefore, it is important to provide some background about market research panels and how they are used in academic research. Good sources for this information are the tenth chapter of our book and Chandler et al. (2019).
Don’t report the cost per participant as compensation: Compensation details are much more complicated for market research panels than for MTurk. You can find more information on how participants are compensated here. Reporting the cost per participant as the compensation participants receive is not accurate.
It’s always rewarding to see others find our resources useful. As a token of our appreciation, when you publish an article using CloudResearch samples and cite us you’ll be eligible for a $50 lab credit toward your next study! To qualify for this offer you can cite any of our publications about our tools, but these two tend to be the most useful.
When using our MTurk Toolkit:
Litman, L., Robinson, J., & Abberbock, T. (2017). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2), 433-442. https://doi.org/10.3758/s13428-016-0727-z
When using Prime Panels:
Chandler, J., Rosenzweig, C., Moss, A. J., Robinson, J., & Litman, L. (2019). Online panels in social science research: Expanding sampling methods beyond Mechanical Turk. Behavior Research Methods, 51(5), 2022-2038. https://doi.org/10.3758/s13428-019-01273-7