Books:

Moss, A., Hartman, R., Litman, L., & Robinson, J. (2023). Research in the Cloud: A Guide to Online Behavioral Science. Forthcoming. 
A lab manual for students and researchers learning to conduct research online. Contact us to receive an advance copy.  

Litman, L., & Robinson, J. (2020). Conducting Online Research on Amazon Mechanical Turk and Beyond. SAGE Publications. https://us.sagepub.com/en-us/nam/conducting-online-research-on-amazon-mechanical-turk-and-beyond/book257367 
A guide to the world of online research and how to optimally carry out research projects with online samples. 

Pre-Prints and Peer-Reviewed Publications:

Frequently Cited and Noteworthy Papers  

Hartman, R., Moss, A. J., Jaffe, S. N., Rosenzweig, C., Litman, L., & Robinson, J. (2023). Introducing Connect by CloudResearch: Advancing Online Participant Recruitment in the Digital Age. https://osf.io/preprints/psyarxiv/ksgyr/ 
A white paper introducing Connect, CloudResearch’s innovative platform designed to revolutionize online participant recruitment in social and behavioral science research.  

Litman, L., Rosen, Z., Hartman, R., Rosenzweig, C., Weinberger-Litman, S. L., Moss, A. J., & Robinson, J. (2023). Did people really drink bleach to prevent COVID-19? A guide for protecting survey data against problematic respondents. PLoS ONE, 18(7). https://doi.org/10.1371/journal.pone.0287837 
An investigation showing that problematic survey respondents accounted for 100% of reported incidents of household cleaner ingestion, with implications for online survey research practices.  

Moss, A. J., Hauser, D. J., Rosenzweig, C., Jaffe, S., Robinson, J., & Litman, L. (2023). Using Market-Research Panels for Behavioral Science: An Overview and Tutorial. Advances in Methods and Practices in Psychological Science, 6(2). https://doi.org/10.1177/25152459221140388 
An overview of market-research panels and considerations for using such panels for behavioral research.  

Hauser, D. J., Moss, A. J., Rosenzweig, C., Jaffe, S. N., Robinson, J., & Litman, L. (2022). Evaluating CloudResearch’s Approved Group as a solution for problematic data quality on MTurk. Behavior Research Methods, 1-12. https://link.springer.com/article/10.3758/s13428-022-01999-x 
A pre-registered study comparing CloudResearch’s Approved list and Blocked list to a Standard MTurk sample, demonstrating superior data quality among Approved list participants.  

Chandler, J., Rosenzweig, C., Moss, A. J., Robinson, J., & Litman, L. (2019). Online panels in social science research: Expanding sampling methods beyond Mechanical Turk. Behavior Research Methods, 51(5), 2022-2038. https://link.springer.com/article/10.3758/s13428-019-01273-7 
A study examining data quality and participant representativeness on Prime Panels as a participant recruitment platform. 

Robinson, J., Rosenzweig, C., Moss, A.J., Litman, L. (2019). Tapped out or barely tapped? Recommendations for how to harness the vast and largely unused potential of the Mechanical Turk participant pool. PLoS ONE 14(12): e0226394. https://doi.org/10.1371/journal.pone.0226394 
Analysis of the size of the MTurk participant pool, with suggestions for sampling strategies that reach less experienced, high-quality participants. 

Litman, L., Robinson, J., & Abberbock, T. (2017). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2), 433-442. https://link.springer.com/article/10.3758/s13428-016-0727-z 
A description of the purposes and features of the CloudResearch MTurk Toolkit. 

Litman, L., Robinson, J., & Rosenzweig, C. (2015). The relationship between motivation, monetary compensation, and data quality among US-and India-based workers on Mechanical Turk. Behavior Research Methods, 47(2), 519-528. https://link.springer.com/article/10.3758/s13428-014-0483-x 
Examination of the impacts of compensation on data quality on MTurk. 

Connect Papers 

Hartman, R. (2023). The Implications of Viewing Political Opponents as Sheeple (Doctoral dissertation, The University of North Carolina at Chapel Hill). https://cdr.lib.unc.edu/concern/dissertations/gt54kz60c 
One of the first dissertation projects that recruited a sample from Connect, CloudResearch’s new platform for online participant recruitment.  

Hartman, R., Moss, A. J., Jaffe, S. N., Rosenzweig, C., Litman, L., & Robinson, J. (2023). Introducing Connect by CloudResearch: Advancing Online Participant Recruitment in the Digital Age. https://osf.io/preprints/psyarxiv/ksgyr/ 
A white paper introducing Connect, CloudResearch’s innovative platform designed to revolutionize online participant recruitment in social and behavioral science research.  

Moss, A. J., Budd, R. D., Blanchard, M. A., & O’Brien, L. T. (2023). The Upside of Acknowledging Prejudiced Behavior. Journal of Experimental Social Psychology, 104, 104401. https://doi.org/10.1016/j.jesp.2022.104401 
One of the first published papers that recruited a sample from Connect, CloudResearch’s new platform for online participant recruitment. 

Prime Panels Papers

Moss, A. J., Hauser, D. J., Rosenzweig, C., Jaffe, S., Robinson, J., & Litman, L. (2023). Using Market-Research Panels for Behavioral Science: An Overview and Tutorial. Advances in Methods and Practices in Psychological Science, 6(2). https://doi.org/10.1177/25152459221140388 
An overview of market-research panels and considerations for using such panels for behavioral research. 

Chandler, J., Rosenzweig, C., Moss, A. J., Robinson, J., & Litman, L. (2019). Online panels in social science research: Expanding sampling methods beyond Mechanical Turk. Behavior Research Methods, 51(5), 2022-2038. https://link.springer.com/article/10.3758/s13428-019-01273-7 
A study examining data quality and participant representativeness on Prime Panels as a participant recruitment platform. 

Litman, L., Hartman, R., Jaffe, S. N., & Robinson, J. (2020). County-level recruitment in online samples: Applications to COVID-19 and beyond. https://doi.org/10.31234/osf.io/g3xw7   
Describes a methodology for county-level sampling of online participants. 

MTurk Toolkit Papers 

Moss, A. J., Rosenzweig, C., Robinson, J., Jaffe, S. N., & Litman, L. (2023). Is it ethical to use Mechanical Turk for behavioral research? Relevant data from a representative survey of MTurk participants and wages. Behavior Research Methods, 1-20. https://link.springer.com/article/10.3758/s13428-022-02005-0 
Exploration of MTurk workers’ views of MTurk, satisfaction with requesters, and hourly wages.  

Hauser, D. J., Moss, A. J., Rosenzweig, C., Jaffe, S. N., Robinson, J., & Litman, L. (2022). Evaluating CloudResearch’s Approved Group as a solution for problematic data quality on MTurk. Behavior Research Methods, 1-12. https://link.springer.com/article/10.3758/s13428-022-01999-x 
A pre-registered study comparing CloudResearch’s Approved list and Blocked list to a Standard MTurk sample, demonstrating superior data quality among Approved list participants. 

Rivera, E. D., Wilkowski, B. M., Moss, A. J., Rosenzweig, C., & Litman, L. (2022). Assessing the Efficacy of a Participant-Vetting Procedure to Improve Data-Quality on Amazon’s Mechanical Turk. Methodology, 18(2), 126-143. https://doi.org/10.5964/meth.8331 
Tests the efficacy of CloudResearch’s pre-screening procedure for ensuring high-quality data from MTurk participants. 

Williams, M. T., Osman, M., Gallo, J., Pereira, D. P., Gran-Ruaz, S., Strauss, D., George, J. R., Edelman, J., & Litman, L. (2022). A clinical scale for the assessment of racial trauma. Practice Innovations, 7(3), 223. https://doi.org/10.1037/pri0000178 
Used CloudResearch’s Mechanical Turk Toolkit to recruit 941 diverse US participants to help validate the Racial Trauma Scale. 

Litman, L., Moss, A. J., Rosenzweig, C., & Robinson, J. (2021). Reply to MTurk, Prolific or panels? Choosing the right audience for online research. SSRN. http://dx.doi.org/10.2139/ssrn.3775075 
A replication of Peer et al.’s (2021) original findings, demonstrating that data quality from CloudResearch was superior to that from Prolific. 

Manzi, F., Rosen, Z., Rosenzweig, C., Jaffe, S. N., Robinson, J., & Litman, L. (2021). New job economies and old pay gaps: Pay expectations explain the gender pay gap in gender-blind workplaces. https://doi.org/10.31234/osf.io/rdmte 
Examines pay expectations among men and women on MTurk.  

Moss, A. J., Rosenzweig, C., Jaffe, S. N., Gautam, R., Robinson, J., & Litman, L. (2021). Bots or inattentive humans? Identifying sources of low-quality data in online platforms. https://doi.org/10.31234/osf.io/wr8ds 
Provides evidence that data quality problems on MTurk are largely tied to fraudulent users outside of the US rather than to bots. 

Suthaharan, P., Reed, E. J., Leptourgos, P., Kenney, J. G., Uddenberg, S., Mathys, C. D., Litman, L., Robinson, J., Moss, A. J., Taylor, J. R., Groman, S. M., & Corlett, P. R. (2021). Paranoia and belief updating during the COVID-19 crisis. Nature Human Behaviour, 5(9), 1190-1202. https://doi.org/10.1038/s41562-021-01176-8 
Leverages data from CloudResearch to ensure a representative global sample. 

Litman, L., Robinson, J., Rosen, Z., Rosenzweig, C., Waxman, J., & Bates, L. M. (2020). The persistence of pay inequality: The gender wage gap in an anonymous online labor market. PLoS ONE 15(2): e0229383. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0229383 
Description of the gender pay gap on MTurk and suggestions for how to pay equally. 

Moss, A. J., Rosenzweig, C., Robinson, J., & Litman, L. (2020). Demographic Stability on Mechanical Turk Despite COVID-19. Trends in Cognitive Sciences. https://www.cell.com/trends/cognitive-sciences/pdf/S1364-6613(20)30138-8.pdf 
Commentary on the stability of participant demographics on MTurk from Jan. 2019 through May 2020, a time frame that includes the first three months of the COVID-19 pandemic. 

Fordsham, N., Moss, A. J., Krumholtz, S., Roggina, T., Robinson, J., & Litman, L. (2019). Variation among Mechanical Turk Workers Across Time of Day Presents an Opportunity and a Challenge for Research. https://psyarxiv.com/p8bns/ 
How participant demographics and clinical symptomatology differ across time of day, and best practices for recruitment.  

Litman, L., Robinson, J., Weinberger-Litman, S. L., & Finkelstein, R. (2019). Both intrinsic and extrinsic religious orientation are positively associated with attitudes toward cleanliness: Exploring multiple routes from godliness to cleanliness. Journal of Religion and Health, 58(1), 41-52. https://link.springer.com/article/10.1007/s10943-017-0460-7 
Best practices for sampling specific groups of individuals on MTurk.  

Robinson, J., Rosenzweig, C., Moss, A.J., Litman, L. (2019). Tapped out or barely tapped? Recommendations for how to harness the vast and largely unused potential of the Mechanical Turk participant pool. PLoS ONE 14(12): e0226394. https://doi.org/10.1371/journal.pone.0226394 
Analysis of the size of the MTurk participant pool, with suggestions for sampling strategies that reach less experienced, high-quality participants. 

Litman, L., Williams, M. T., Rosen, Z., Weinberger-Litman, S. L., & Robinson, J. (2018). Racial disparities in cleanliness attitudes mediate purchasing attitudes toward cleaning products: A serial mediation model. Journal of Racial and Ethnic Health Disparities, 5(4), 838-846. https://link.springer.com/article/10.1007/s40615-017-0429-y 
Best practices for accurately sampling minorities.  

Litman, L., Robinson, J., & Abberbock, T. (2017). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2), 433-442. https://link.springer.com/article/10.3758/s13428-016-0727-z 
A description of the purposes and features of the CloudResearch MTurk Toolkit. 

Schnur, J. B., Chaplin, W. F., Khurshid, K., Mogavero, J. N., Goldsmith, R. E., Lee, Y. S., Litman, L. & Montgomery, G. H. (2017). Development of the Healthcare Triggering Questionnaire in adult sexual abuse survivors. Psychological Trauma: Theory, Research, Practice, and Policy, 9(6), 714. https://europepmc.org/article/PMC/5659978 
Collecting open-ended data for scale development using online participants. 

Litman, L., Robinson, J., & Rosenzweig, C. (2015). The relationship between motivation, monetary compensation, and data quality among US-and India-based workers on Mechanical Turk. Behavior Research Methods, 47(2), 519-528. https://link.springer.com/article/10.3758/s13428-014-0483-x 
Examination of the impacts of compensation on data quality on MTurk. 

Litman, L., Rosen, Z., Spierer, D., Weinberger-Litman, S., Goldschein, A., & Robinson, J. (2015). Mobile exercise apps and increased leisure time exercise activity: A moderated mediation analysis of the role of self-efficacy and barriers. Journal of Medical Internet Research, 17(8), e195. https://www.jmir.org/2015/8/e195/ 
Explored how the health behaviors of MTurkers conform to expected results, and tested new behavioral health models of the relationship between exercise app use and health. 

General/Other Papers

Litman, L., Rosen, Z., Hartman, R., Rosenzweig, C., Weinberger-Litman, S. L., Moss, A. J., & Robinson, J. (2023). Did people really drink bleach to prevent COVID-19? A guide for protecting survey data against problematic respondents. PLoS ONE, 18(7). https://doi.org/10.1371/journal.pone.0287837 
An investigation showing that problematic survey respondents accounted for 100% of reported incidents of household cleaner ingestion, with implications for online survey research practices. 

Hartman, R., Moss, A. J., Rabinowitz, I., Bahn, N., Rosenzweig, C., Robinson, J., & Litman, L. (2022). Do you know the Wooly Bully? Testing era-based knowledge to verify participant age online. Behavior research methods, 1-13. https://link.springer.com/article/10.3758/s13428-022-01944-y 
Study evaluating a way to verify the ages of online respondents through a test of era-based knowledge. 

Weinberger-Litman, S. L., Rosen, Z., Rosenzweig, C., Rosmarin, D. H., Muennig, P., Carmody, E. R., … & Litman, L. (2022). Psychological distress among the first quarantined community in the United States: Initial observations from the early days of the COVID-19 crisis. Journal of Cognitive Psychotherapy, 35(4), 255-267. https://doi.org/10.1891/jcpsy-d-20-00039   
Used a historically significant sample of participants who were directly or peripherally related to “patient 1” — the first confirmed community-acquired case of COVID-19 in the New York area.  

Weinberger-Litman, S. L., Litman, L., Rosen, Z., Rosmarin, D. H., & Rosenzweig, C. (2020). A look at the first quarantined community in the USA: Response of religious communal organizations and implications for public health during the COVID-19 pandemic. Journal of Religion and Health, 59, 2269-2282. https://doi.org/10.1007/s10943-020-01064-x 
Collected data from a sample of Modern Orthodox Jewish participants, some of the first people quarantined in the US due to the COVID-19 pandemic. 

CloudResearch Blogs and Other Resources:

How to Use CloudResearch Products

General Guides for Online Research