How to Cite CloudResearch in Your Journal Articles

By Shalom Jaffe, Rachel Hartman & Leib Litman, PhD

Writing academic papers is hard—you need to explain what motivated your study, what question you tried to answer, how you tested your ideas, what you found, and what it all means. Inevitably, you’ll send your paper out for review, wait months for the response, and receive comments that may or may not look kindly on the work you did.  

Before you get to that point, however, you have to find a way to begin writing. Many people like to start by drafting the methods and results sections. Because these sections are relatively straightforward and provide a description of what was already done, it’s an easier way to notch a “win” before tackling the introduction and discussion.

But easier still isn’t easy! To help anyone starting a methods section, we’ve put together this blog highlighting the types of samples you can gather from CloudResearch and how you might accurately describe them within your paper. Hopefully, a quick read of this blog gets you ready to start writing.


You can access participants from three sources when using CloudResearch: Connect, Mechanical Turk (MTurk), and Prime Panels.

How to Describe Your Connect Sample

Connect is CloudResearch’s premier platform for online participant recruitment. Unlike our other products, Connect sources participants directly through CloudResearch, without any third party, ensuring that only people who provide high-quality data participate in your tasks. To learn more about Connect, see our FAQ.

You can describe samples on Connect just as you would any other source of online participants. Make sure to include any settings you may have used to target your sample and information about the study’s length and payment. Here’s a fictional example:

  • We recruited 1,500 adults from Connect, an online source of high-quality participants run by CloudResearch. We paid respondents $2 for a survey we expected to take about 15 minutes. Respondents were eligible for a bonus of up to 50 cents if they provided additional text responses in the survey. All respondents were 18 years of age or older and were U.S. residents.

This next example is from a paper by Moss et al. (2023), published in the Journal of Experimental Social Psychology:

  • A total of 384 U.S. adults (Mage = 41.28, SD = 12.72, 141 men, 239 women, 4 non-binary/other) from CloudResearch’s Connect platform participated in an experiment we expected to take five minutes. Participants were paid $1.25. We excluded 29 participants who reported their race as something other than monoracial White or Black, leaving 177 White people and 178 Black people in the final sample.

For additional articles using Connect, see here, here, here, here, here, and here.


How to Describe Your MTurk Sample

MTurk is a microtask platform that connects “requesters” (people who need tasks completed) with “workers” (people willing to do the tasks). Academics have been using MTurk for research since about 2010.

CloudResearch connects to MTurk through an API integration. With this integration, researchers can use CloudResearch to set up and manage studies while still drawing participants directly from MTurk. The suite of tools CloudResearch makes available for working with MTurk is called the MTurk Toolkit, and it is built on top of the Mechanical Turk platform.

With CloudResearch’s MTurk Toolkit, there are two ways to sample from MTurk. Your study may be:

  1. Open to everyone – If you don’t use CloudResearch’s data quality tools, your study is open to everyone on MTurk. The sample you access is identical to what you would get through MTurk itself.
  2. Open to the CloudResearch Approved List – To improve data quality on MTurk, CloudResearch has created our Approved Participants group, a high-quality subsample of the MTurk population that can only be accessed via CloudResearch’s MTurk Toolkit. Sampling from the Approved Participants group significantly improves data quality over sampling from MTurk alone and does so without harming demographic representation; it’s recommended, and it’s on by default.

Examples of MTurk Sample Descriptions

How you describe your MTurk sample in your Methods section is important. As with any other sample, there are some details you should be sure to report and some you can safely omit. The examples below illustrate what an accurate sample description might look like.

  • We recruited a sample of 500 U.S. women (Mage = 41.85, SD = 8.13) from Amazon Mechanical Turk using CloudResearch (formerly TurkPrime; see Hauser et al., 2022; Litman et al., 2017; Litman & Robinson, 2020). We used CloudResearch’s Approved Participants (Hauser et al., 2022) to ensure high data quality, targeted women with CloudResearch’s demographic options, and did not include any MTurk worker qualifications (e.g., approval rating, previous HITs completed). Respondents completed the survey between November 13th and November 15th, 2022. We paid people $1.50 for a study we expected to take 10 minutes.

This example is for a fictional study, but it contains all the important elements of a sample description. It reports the key demographic information about participants, describes the sampling procedure and settings, and tells readers when the data were collected and how participants were compensated.

The second example is from a paper by Gratz et al. (2020) published in Suicide and Life‐Threatening Behavior.

  • “Participants included a nationwide community sample of 500 adults from 45 states in the United States who completed online measures through an Internet‐based platform (Amazon’s Mechanical Turk; MTurk) from March 27, 2020, through April 5, 2020. The study was posted to MTurk via CloudResearch (cloudresearch.com), an online crowdsourcing platform linked to MTurk that provides additional data collection features (e.g., creating selection criteria; Chandler, Rosenzweig, Moss, Robinson, & Litman, 2019). MTurk is an online labor market that provides “workers” with the opportunity to complete different tasks in exchange for monetary compensation, such as completing questionnaires for research. Data provided by MTurk‐recruited participants have been found to be as reliable as data collected through more traditional methods (Buhrmester, Kwang, & Gosling, 2011). Likewise, MTurk‐recruited participants have been found to perform better on attention check items than college student samples (Hauser & Schwarz, 2016) and comparably to participants completing the same tasks in a laboratory setting (Casler, Bickel, & Hackett, 2013). Studies also show that MTurk samples have the advantage of being more diverse than other Internet‐recruited or college student samples (Buhrmester et al., 2011; Casler et al., 2013). For the present study, inclusion criteria included (a) U.S. resident, (b) at least a 95% approval rating as an MTurk worker, (c) completion of at least 5,000 previous MTurk tasks (referred to as Human Intelligence Tasks), and (d) valid responses on questionnaires (i.e., assessed by accurate completion of multiple attention check items)… Participants who failed one or more attention check items were removed from the study (n = 53 of 553 completers). Workers who completed the study and whose data were considered valid (based on attention check items and geolocations; N = 500) were compensated $3.00 for their participation.”

Sample Description Guidelines for MTurk

Give some background about MTurk: Even though online participant recruitment is becoming the new norm, some reviewers are unfamiliar with the literature that documents the rise of online research. If you’re looking for sources to support your use of MTurk or approach to sampling, our book is a comprehensive guide to online sampling and data collection, with a focus on MTurk. You might also consult our papers describing CloudResearch’s MTurk Toolkit and how to sample naive people on MTurk.

For a review of common concerns and the evidence that does or doesn’t exist to support these concerns, see Hauser et al. (2019).

Mention if you used CloudResearch’s Approved Participants: If you use our Approved Participants to gather data, you should mention it because this group of participants can only be accessed via CloudResearch. The Approved Participants group vastly improves data quality over open MTurk samples, consists largely of naïve participants, and has overall demographics that are similar to the broader MTurk population. See here for more details.

Don’t confuse CloudResearch with Mechanical Turk: CloudResearch and Amazon Mechanical Turk are independent companies. As described above, CloudResearch’s MTurk Toolkit sits on top of MTurk and enables researchers to run flexible studies while drawing participants from MTurk.

Don’t confuse panels with MTurk: CloudResearch profiles participants on MTurk by gathering voluntary demographic data. A group of people who meet specific demographic criteria is sometimes called “a panel.” A panel of MTurk participants, however, is not the same as sampling from Prime Panels (see more below). The language here can get confusing, so it’s best to just describe specific demographic filters that you used.


How to Describe Your Prime Panels Sample

Prime Panels is a participant recruitment platform that aggregates several market research panels and integrates them with the highest quality control system in the industry, Sentry®. Prime Panels offers access to tens of millions of people worldwide and is especially useful for gathering samples that are more representative of the US population (similar to Qualtrics Panels, Dynata, or Lucid) or that are not available on microtask sites like MTurk. For example, with Prime Panels it is easy to gather data from participants matched to the US Census, those within specific US regions, or those in minority or hard-to-reach groups.

Examples of Prime Panels Sample Descriptions

Here are two examples of how to accurately describe participants recruited from Prime Panels in a Methods section. The first example is, again, from a fictional study sampling adults across several age ranges.

  • We recruited 350 adults of various ages from Prime Panels. Prime Panels aggregates several market research panels to enable data collection with large samples that are more representative of the US population than microtask sites like MTurk (Chandler et al., 2019). Past research has used Prime Panels to gather samples matched to the US Census (Malik et al., 2020) and stratified to the US income distribution (e.g., Davidai, 2018), among many other things. We used Prime Panels because of its ability to sample adults in their 60s and 70s (e.g., Chandler et al., 2019) and because of CloudResearch’s method for improving data quality. Data quality is a known concern on market research panels (e.g., Kees et al., 2017). Prime Panels employs a vetting procedure to prevent problematic respondents who provide low-quality data from entering a study (e.g., Chandler et al., 2019; Litman et al., 2020).

    Similar to prior studies, we split our sample into six groups of approximately 50 participants each, with each group corresponding to a different decade of age (20s through 70s). We expected the study to take 10 minutes. Participants were compensated based on the platform they were recruited through. All data were gathered in April 2019; data collection closed after three hours.

The second example is from a paper by Kroshus et al. (2020) published in JAMA Pediatrics:

  • “Participants were a demographically stratified convenience sample of 730 parents in the United States who have at least 1 child between the ages of 5 and 17 years. Recruitment was facilitated by Prime Panel[s]23 an online survey recruitment platform that sources participants across multiple nonprobability survey panels applying investigator-determined eligibility criteria and demographic quotas. Consistent with American Association for Public Opinion Research reporting guidelines for survey recruitment using an opt-in nonprobability panel, the participation rate was not reported because the sampling frame was unknown.24,25 Participants accessed the survey online using their own computer or mobile device and received a small monetary incentive for completion. Individuals without computer or internet access were not eligible to participate. Black and Hispanic parents were oversampled relative to 2018 US Census demographic distributions. To meet sample size goals for Black and Hispanic parents, data collection was supplemented by nonprobability survey panel recruitment with increased compensation for Black and Hispanic parents. Race and ethnicity were measured because of emergent evidence and discourse about how due to systemic racism, Black and Hispanic families are being disproportionately negatively affected by COVID-19.26-28 Eligible participants completed an English-language anonymous online survey hosted on the Qualtrics platform, with data collection occurring June 2, 2020, through June 5, 2020.”

Sample Description Guidelines for Prime Panels

Give background on Prime Panels: If MTurk still sounds to some reviewers like a novel way to collect data, market research panels are even newer. Therefore, it is important to provide some background about market research panels and how they are used in academic research. Good sources for this information are the tenth chapter of our book and Chandler et al. (2019).

Don’t report the cost per participant as compensation: Compensation details are much more complicated for market research panels than for MTurk. You can find more information on how participants are compensated here. Reporting the cost per participant as the compensation participants receive is not accurate.


Credit Offer

It’s always rewarding to see others find our resources useful. As a token of our appreciation, when you publish an article using CloudResearch samples and cite us, you’ll be eligible for a $10 lab credit toward your next study! To qualify for this offer, cite the appropriate reference for the platform you used:

When using our Connect platform:

Hartman, R., Moss, A. J., Jaffe, S. N., Rosenzweig, C., Litman, L., & Robinson, J. (2023). Introducing Connect by CloudResearch: Advancing Online Participant Recruitment in the Digital Age. https://doi.org/10.31234/osf.io/ksgyr. Retrieved [Date].

When using our MTurk Toolkit, you can cite any of our publications about our tools, but these two tend to be the most useful.

Litman, L., Robinson, J., & Abberbock, T. (2017). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2), 433-442. https://doi.org/10.3758/s13428-016-0727-z   

Hauser, D. J., Moss, A. J., Rosenzweig, C., et al. (2022). Evaluating CloudResearch’s Approved Group as a solution for problematic data quality on MTurk. Behavior Research Methods. https://doi.org/10.3758/s13428-022-01999-x

When using Prime Panels:

Chandler, J., Rosenzweig, C., Moss, A. J., Robinson, J., & Litman, L. (2019). Online panels in social science research: Expanding sampling methods beyond Mechanical Turk. Behavior Research Methods, 51(5), 2022-2038. https://doi.org/10.3758/s13428-019-01273-7
