Rachel Hartman, Ph.D.1, Aaron J. Moss, Ph.D.1,2, Shalom N. Jaffe1,3, Cheskie Rosenzweig1,4, Jonathan Robinson, Ph.D.1,5, & Leib Litman, Ph.D.1,5

1CloudResearch, 2Siena College, 3Fairleigh Dickinson University, 4Columbia University, 5Lander College

9.15.2023

 


Abstract

This paper introduces Connect, CloudResearch’s platform designed to advance online participant recruitment in social and behavioral science research. Operating as a marketplace, Connect facilitates interactions between researchers and participants, enabling the deployment of surveys and experiments built with third-party tools such as Qualtrics, SurveyMonkey, and Google Forms. With its current focus on U.S. participants aged 18 and above (and plans for future expansion to other English-speaking countries), Connect accommodates a diverse range of studies, including academic, market, and user experience research, as well as machine learning. Connect’s uniqueness lies in its threefold emphasis on advanced features, data quality, and affordability. Advanced features include collaborative tools, a flexible API, and capabilities supporting intricate study designs. To ensure high data quality, Connect incorporates Sentry®, CloudResearch’s proprietary participant vetting system, coupled with stringent technical checks. Despite these advancements, Connect remains cost-effective, charging researchers the lowest service fee in the online recruitment sector. This paper delves deeper into the platform’s attributes, with particular attention to the participant experience, advanced functionality, and the data quality assurance methods in place.


I. Overview of CloudResearch’s Connect Platform

To meet the demands of the ever-evolving landscape of social and behavioral science research, CloudResearch has developed Connect, its flagship platform for online participant recruitment. Connect is a marketplace where researchers and participants can find each other. Researchers create surveys and experiments via third-party apps such as Qualtrics, SurveyMonkey, or Google Forms, then post them on the platform along with a brief description and the offered pay. Participants receive email and/or text message notifications when new studies are launched and can see all studies available to them on their participant dashboard. Participants can then preview the studies and choose whether or not to participate. Based on each participant’s performance, researchers can then choose whether to approve or reject the submission.

Participants on Connect are currently restricted to the U.S. (though we will be expanding to other English-speaking countries by the end of 2023), and must be at least 18 years old. Connect can be used for social and behavioral research, market research, polling, user experience research, machine learning, and anything in between.

How is Connect Unique?

Connect stands apart from other online participant recruitment platforms due to its focus on three core elements: advanced features, data quality, and affordability. These elements are not just add-ons but are deeply ingrained in the platform’s design and functionality.

Advanced Features. Connect offers a suite of advanced features, including the option to easily collaborate and share funds within and between labs, the ability to easily set quotas within a study, the option to send text messages for instant study notifications, advanced researcher ratings, a built-in communication system, and more. Connect supports complex study designs with features such as survey platform integration, support for longitudinal studies, bonusing options, and an API for more flexibility. These features make it easier for researchers to implement sophisticated research protocols and complex studies. We discuss these advanced features in more detail in Section II.

Data Quality. Data quality is a fundamental aspect of Connect. We screen participants using Sentry®, CloudResearch’s patented[1] participant vetting system, to ensure they are attentive, engaged, and provide high-quality responses. The CloudResearch team regularly monitors the Connect participant pool data quality and reviews researcher feedback to identify and quarantine bad actors. Connect also includes various technical checks to ensure that participants are using a single account, are not submitting multiple responses from one IP address, and are located in the United States. We discuss data quality in more detail in Section III.

Affordability. Despite offering superior features and higher-quality data, Connect charges the lowest service fee of any online recruitment platform, at just 25% of what researchers pay participants. As of the time of writing, there are no additional fees for demographic targeting or Census-matched samples, offering significant cost savings over alternative platforms.

In This Paper

In the sections below, we delve into Connect’s features in more detail, beginning with our emphasis on the participant experience, from our dedicated subreddit community to the built-in communication tools and the fair compensation standards set on Connect. We then turn to the Connect features that support collaborative, representative, replicable, and complex studies. Following the discussion of Connect’s main features, we dedicate a section of this paper to data quality, providing detailed information about the methods we use to obtain high-quality data and the monitoring system we use to continuously measure and maintain it.


II. Connect Features for Facilitating Responsible Research

Participants as the Heart of Online Research

In designing Connect, we wanted to place research participants front and center. That is because we view participants as active partners in the research process, and too often on online research platforms, participants are an afterthought. After listening to participants and learning from the shortcomings of other platforms, we built several features, both on and off Connect, that improve the participant experience. These include our Connect subreddit for community engagement, a system for participants to provide project feedback and rate researchers, a built-in messaging system that allows participants and researchers to communicate while protecting participants’ anonymity, the ability for participants to provide real-time feedback on studies and easily report researchers who are not treating them fairly, and minimum requirements for project compensation.

Connect Subreddit. Early in the development of Connect, we created a subreddit where participants could provide feedback, request features, and discuss issues among themselves as well as with CloudResearch administrators. This Connect community has helped us stay Connected (pardon the pun) to the people without whom human subjects research would not be possible. Though the subreddit is not strictly part of the Connect platform, it has been, and continues to be, integral to the development and maintenance of Connect, and we would like to acknowledge the contributions of this community.

Project Feedback and Researcher Ratings. One thing we learned from our Connect community is that participants like to have their voices heard. On most other online platforms, however, there is no mechanism for participants to tell researchers about an issue with a project or to report when projects or researchers violate a site’s terms of service. To give participants a voice on Connect, we created a system for real-time project feedback and a separate system for rating a project and researcher after the project is complete.

The feedback participants provide begins while the project is live. If there are technical issues or other errors, participants can report them while working on the project. If several participants report technical issues, the project is automatically paused to prevent others from encountering the same problems. At the completion of a project, participants are asked to provide ratings. In addition to an overall project rating, they can also rate specific aspects of the project: User Experience, Fairness, Time, and Compensation. There is also an optional text box for written feedback, which can be shared with the researcher or only with the CloudResearch team.

Ratings across all four categories (the user experience, the fairness of the project, the accuracy of the time estimate, and the quality of the compensation) contribute to a researcher’s reputation.

The feedback participants give is used in three ways:

  1. Other participants can see the project ratings, and use them to inform their decision to participate in the project or not.
  2. The researcher can view the ratings and feedback and use the data to make adjustments if needed. For example, they might realize that they had misestimated the completion time, and should raise the pay. Or there might have been a typo or programming error in the survey that participants flagged.
  3. The CloudResearch team can review the ratings and use them to flag any inappropriate or potentially fraudulent behavior.

Beyond the project level, these ratings are also aggregated into a holistic Researcher Rating, which is visible to participants as well as to the researcher.

An example reputation summary for a researcher. This shows average ratings provided by participants across four categories.

Built-in Messaging. On Connect, we wanted to make communication between participants and researchers as seamless as possible while maintaining participants’ anonymity; anonymity is often a requirement of Institutional Review Boards but is lacking on many online platforms. Our built-in messaging system allows researchers and participants to send each other messages without needing to include any identifying information. Recipients receive a notification on Connect as well as by email. This messaging system also comes in handy when researchers want to inform participants about their eligibility for follow-up studies or ask for more detail about their survey responses.

Connect’s in-app messaging system allows researchers and participants to communicate efficiently and securely. 

Reporting Issues and Banning Researchers. Fraud and abuse do not happen very often, but the potential exists on any online platform, and at CloudResearch we take this concern very seriously. That is why participants on Connect can easily report an issue if they suspect wrongdoing. We review every report, reaching out to the researcher for further details and banning them from using Connect if we find evidence of fraud.

Participants can easily report an issue, and choose if they want to share the report with the researcher.

 

Participants also have the option of blocking researchers. Blocking does not prevent the researcher from conducting studies, but it prevents the participant from seeing any future studies by that researcher. We examine these blocks as well, following up when needed to ensure the best experience for both participants and researchers.

Participants have a host of options including contacting researchers, providing feedback, reporting issues, returning projects, and blocking researchers.

Fair Compensation. Last but not least, placing participants at the heart of everything we do means making sure they are compensated fairly for their time and effort. Though most participants view survey-taking as a form of paid leisure, there is still an expectation of fair compensation. On Connect, we require researchers to pay participants a minimum of $6 per hour, but we recommend at least $7.50 as a starting point. For more complex studies, such as those requiring participants to download an app, write lengthy responses, or participate in multiple waves, we recommend boosting the pay to around $10 per hour. On average, participants earn $9-$10 per hour on Connect. Participants can choose to be paid through a variety of methods, including PayPal, bank transfer, and Amazon gift cards.
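To make the math concrete, here is a minimal Python sketch of how a researcher might convert a target hourly rate and an estimated completion time into a per-study payment. The 12-minute study length is a hypothetical example; the $7.50 and $10 rates are the recommendations above.

```python
import math

def per_study_payment(hourly_rate_usd: float, est_minutes: float) -> float:
    """Convert a target hourly rate into a per-study payment, rounded up to the cent."""
    return math.ceil(hourly_rate_usd / 60 * est_minutes * 100) / 100

# A hypothetical 12-minute survey at the recommended rates above
print(per_study_payment(7.50, 12))   # 1.5
print(per_study_payment(10.00, 12))  # 2.0
```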

Collaborative Science

Modern science isn’t about a lone scientist in a quiet lab anymore. Nowadays, it’s all about teamwork. Researchers from all kinds of backgrounds, from undergrads to experts with years of experience, work together. And these teams don’t just stick to their own labs; they connect with other institutions too, sharing data and ideas and making science a worldwide effort. However, this collaboration doesn’t happen seamlessly. Especially when it comes to running studies, most platforms don’t offer an easy way for researchers to join efforts. Some researchers resort to using a joint account, which can be a security concern and makes it difficult to track who is doing what. Others simply have very limited visibility into their colleagues’ contributions and need to manage complicated logistics for funding joint projects or paying for their students’ studies.

Connect Teams. Our ‘Teams’ feature revolutionizes collaborative research. Sharing passwords and accounts is not just unwieldy but also a security risk. Teams lets you share and manage projects with colleagues while everyone works from their individual accounts. Connect’s intuitive interface makes it easy to switch into Teams mode, where you can create and manage your teams and team projects. Teams also lets you pool resources and share funds without sharing your credentials, ensuring efficiency and a heightened level of security. Lab managers can conveniently fund their teams without compromising their login information and can monitor expenditures to maintain accountability.

Connect Teams allows you to share projects with others, and use a shared wallet.

Representative, Replicable, Reliable Research

We may enjoy a good alliteration, but our true passion is the essence of these four ‘R’s: Representative, Replicable, Reliable Research. Connect’s tools allow researchers to easily access a broad, diverse population, including setting quotas for various demographics and applying census-matching to ensure the research is more representative, replicable, and reliable.

Targeting Demographics. The number of demographics and other qualifiers on Connect is vast and continuously growing. At the time of writing, there were over 120 questions that could be used to target participants. These include basic demographics such as age, race, gender, education, income, and employment status, as well as many other targeting options, such as whether participants listen to podcasts, their level of programming ability, whether they live with roommates, whether they own a gun or invest in cryptocurrencies, and many more.

Furthermore, there is a quick and simple process for requesting additional demographics or targeting criteria, in case we don’t already have what you’re looking for. You simply need to log in, create a project, and select “Demographic Targeting.” On the pop-up page, select “Request a Demographic,” then fill out the prompts. It’s that easy! Our team will then review your request and add the questions to the participant dashboard, where participants will be able to respond to them. Within a week or two, you will be able to target based on the requested criteria.

Connect allows you to request any demographic or targeting criteria you want by simply filling out a short form.  

Census Matching. One of the key factors in improving the reliability and replicability of one’s study is the range of people participating. Studies that only include certain demographics may be difficult to replicate in the broader population (though this depends on the type of study). This is why we made census matching as easy as clicking a button, and 100% free!

Census matching with a click of a button on Connect

Setting Quotas. Imagine you wanted to collect a sample of Democrats, Republicans, and Independents. How would you go about doing that? On other platforms, you would need to create three separate studies. This is time-consuming and error-prone. Connect solves this problem with built-in quotas. All you need to do is select the demographics you’re interested in (e.g., political identity), and then on the next page, toggle the “Quota Targeting” button. From there, it is easy to select which options you want to include, and what percent of participants you want to fall under each category. Our system intelligently sends the survey out to participants who are more difficult to target first, for faster data collection.

Setting quotas on Connect is quick and easy, with no need to create separate surveys for different groups.
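To illustrate how quota targets relate to the data a researcher ultimately collects, here is a minimal sketch (using pandas) that compares the realized share of each group in an exported dataset against the quota percentages set on Connect. The column name political_identity and the 34/33/33 split are hypothetical, not values produced by the platform.

```python
import pandas as pd

# Hypothetical quota targets set when launching the project (percent of sample)
quota_targets = {"Democrat": 34, "Republican": 33, "Independent": 33}

# Exported responses; in practice this would be the CSV downloaded from your survey tool
df = pd.DataFrame({"political_identity": ["Democrat", "Republican", "Independent",
                                          "Democrat", "Republican", "Independent"]})

# Compare realized percentages with the quota targets
realized = df["political_identity"].value_counts(normalize=True).mul(100).round(1)
for group, target in quota_targets.items():
    print(f"{group}: target {target}%, realized {realized.get(group, 0.0)}%")
```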

Beyond basic demographics: Recruiting participants based on validated scales  

Researchers often need to recruit participants based on their psychological profiles rather than simple demographics. For example, researchers might be interested in participants who are low on conscientiousness, high on depression, or those who exhibit elevated levels of prejudicial attitudes. These are sometimes referred to as psychographic characteristics.

Most psychographic characteristics cannot be measured with a single pre-screen question; they require validated, multi-question instruments to be assessed accurately. For this reason, Connect created a unique system for recruiting participants based on validated scales, including custom scale instruments that researchers can upload themselves. This is made possible by our system of collecting scale data from participants when they sign up, and periodically over time, separately from researchers’ studies.

For example, researchers can target based on specific scales, such as the Patient Health Questionnaire (PHQ-9), the Eating Disorder Examination Questionnaire-Short (EDE-QS), or the Generalized Anxiety Disorder scale (GAD-7).

Further, researchers can target different points along a scale, enabling them to, for example, set quotas for participants with high, moderate, or low anxiety. The scale targeting feature has vast potential, enabling personality psychologists to differentiate individuals based on the Big Five Inventory, health professionals to assess physical activity or dietary habits with the International Physical Activity Questionnaire (IPAQ) or the Food Frequency Questionnaire (FFQ), educators to understand student motivations using the Learning Strategies Scales, and mental health experts to classify emotional well-being or stress levels through instruments like the Generalized Anxiety Disorder 7-item (GAD-7) scale or the Perceived Stress Scale (PSS). Some of these scales are already available on Connect; others will be added according to demand.

Researchers can target any point along the scale, and set quotas for each targeted group.
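As an illustration of how scale-based targeting maps onto score ranges, the sketch below scores GAD-7 responses (seven items, each scored 0-3) and buckets respondents using the conventional published cutoffs of 5, 10, and 15. How Connect computes and stores these scores internally is not documented here, so treat this only as a conceptual example.

```python
def score_gad7(item_responses):
    """Sum seven GAD-7 items, each scored 0-3, into a total score (0-21)."""
    assert len(item_responses) == 7 and all(0 <= r <= 3 for r in item_responses)
    return sum(item_responses)

def gad7_severity(total):
    """Bucket a total score using the conventional GAD-7 cutoffs (5, 10, 15)."""
    if total >= 15:
        return "severe"
    if total >= 10:
        return "moderate"
    if total >= 5:
        return "mild"
    return "minimal"

# Hypothetical respondent
responses = [2, 1, 3, 2, 1, 2, 1]
total = score_gad7(responses)        # 12
print(total, gad7_severity(total))   # 12 moderate
```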

Platform Level Targeting. Not only is it easy to target a vast range of demographics, it is also easy to select particular participants to include or exclude, based on participant IDs, participant groups, or specific projects they’ve participated in.

Targeting specific participants for inclusion or exclusion has never been easier!

Supporting Complex Study Designs

Modern research frequently demands innovative and intricate study designs that go beyond the standard cross-sectional survey. To successfully conduct these studies, researchers often need greater access to participants along with real-time notifications, communication, and data collection. We have therefore designed Connect with these dynamic needs in mind, offering a host of features that streamline the implementation of intricate research protocols and complex studies. These include integration with any survey platform for seamless data collection, text notifications to support real-time data collection, support for longitudinal studies and experience sampling (including participant groups and preprogrammed launch times), bonusing options, and an API for greater flexibility.

Survey Platform Integration. For seamless data collection, researchers can use any survey platform or experimental tool of their choice, including popular options such as Qualtrics, SurveyMonkey, and Google Forms. This allows researchers to easily launch studies created on these platforms on Connect and gather responses in real time.
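Integrations of this kind typically work by passing a participant identifier from the recruitment platform to the external survey through the study URL, and then redirecting the participant back with a completion code at the end. The sketch below shows only the general pattern; the participant_id parameter name and the URLs are hypothetical, so check Connect’s own integration instructions for the exact fields it uses.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical Qualtrics-style survey link supplied by the researcher
SURVEY_URL = "https://example.qualtrics.com/jfe/form/SV_abc123"

def build_study_link(participant_id: str) -> str:
    """Append a (hypothetical) participant identifier to the survey URL as a query parameter."""
    return f"{SURVEY_URL}?{urlencode({'participant_id': participant_id})}"

def extract_participant_id(url: str) -> str:
    """Recover the identifier inside the survey, e.g., to store it alongside responses."""
    return parse_qs(urlparse(url).query)["participant_id"][0]

link = build_study_link("ABC123XYZ")
print(link)
print(extract_participant_id(link))  # ABC123XYZ
```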

Text Notifications. With the rise of Ecological Momentary Assessment (EMA) studies, real-time data collection has become increasingly vital. The capacity to collect data in real time, such as immediate reactions to certain stimuli or time-stamped behavior logs, provides a powerful tool for researchers. To cater to this demand, we have introduced Text Notifications on Connect. This feature enables researchers to send SMS notifications to participants as reminders for study sessions, or as triggers for EMA surveys, improving the convenience and accuracy of the data collected.

Longitudinal Studies. We also understand the power and importance of longitudinal research in understanding change over time. To facilitate this, we’ve designed features to make managing longitudinal studies easier. One such feature allows the creation of Participant Groups, providing an easy way to invite the same group of participants to multiple study sessions. Additionally, you can pre-program launch times for different waves of your study, ensuring participants receive invitations at the right moment. This removes the hassle of manual scheduling, letting researchers focus on the research itself.

Bonusing Options. To incentivize participant engagement and completion, we provide a simple option for issuing bonuses. These bonuses can be given for a variety of reasons: completion of a particularly complex task, adherence to a multi-session study, or rewarding quality input, among others. By allowing researchers the flexibility to customize their bonus strategies, we provide them with a powerful tool for improving response rates and data quality.

API. Our application programming interface (API) offers further flexibility for researchers. The API allows Connect to be integrated with external systems, opening a world of possibilities for researchers who want to automate study management or create more complex designs. This feature is particularly useful for researchers who want to automate the recruitment process, or want to integrate Connect with other platforms, applications, or data analysis software.
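As a conceptual example of the kind of automation the API makes possible, the sketch below uses Python’s requests library to create a project via a REST call. The endpoint path, payload fields, and authentication header are placeholders rather than the actual Connect API specification; consult the API documentation for the real interface.

```python
import requests

API_BASE = "https://connect.example.com/api"   # placeholder base URL; see Connect's API docs
API_KEY = "YOUR_API_KEY"                       # issued from your researcher account

def create_project(name: str, participants: int, payment_usd: float) -> dict:
    """Sketch of launching a study programmatically; field names are illustrative only."""
    payload = {
        "name": name,
        "participants": participants,
        "payment": payment_usd,
        "survey_url": "https://example.qualtrics.com/jfe/form/SV_abc123",
    }
    response = requests.post(
        f"{API_BASE}/projects",
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Example (not run here because the endpoint above is a placeholder):
# project = create_project("Decision-making study, wave 1", participants=200, payment_usd=2.00)
```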


III. Data Quality and Fraud Prevention

The Importance of Data Quality

Every researcher knows data quality is essential. What is less well known, however, is exactly how bad problems with data quality have gotten in online research. Across platforms, between 30 and 40 percent of data are fraudulent (Berry et al., 2022; Chandler et al., 2019; Litman et al., 2023). This fraud may cost companies billions of dollars annually, mislead researchers, and make it harder to advance scientific knowledge (Dixit & Bai, 2023; Litman, 2023).

Preventing Data Quality Issues on Connect

Protecting data quality on Connect begins with deterring fraud during the sign-up process. To do that, we combine technical and behavioral measures. On the technical side, we ensure that each participant can create only one account, that their IP address matches their reported location, and that the bank or PayPal accounts people use to cash out their earnings are unique across participants. We also look at the device people use to take projects and ensure that the same device is not associated with multiple participant accounts or used to submit the same project more than once.
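To give a flavor of the kind of technical check described above, the sketch below flags responses that share an IP address within a single dataset. This is illustrative of the logic only, not CloudResearch’s actual implementation, and researchers can run a similar check on their own exported data; the column names are hypothetical.

```python
import pandas as pd

# Hypothetical export with one row per submission
df = pd.DataFrame({
    "participant_id": ["P1", "P2", "P3", "P4"],
    "ip_address": ["203.0.113.5", "198.51.100.7", "203.0.113.5", "192.0.2.44"],
})

# Mark every row whose IP address appears more than once in the dataset
df["duplicate_ip"] = df.duplicated(subset="ip_address", keep=False)
print(df[df["duplicate_ip"]])  # shows P1 and P3, which share an IP
```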

On the behavioral side, we require each participant to complete an onboarding process that includes screening with our Sentry system. As part of onboarding, we assess attention, honesty, language comprehension, willingness to follow instructions, and much more. During this assessment, our system looks for red flags such as translating the survey out of the language it is written in, disregarding instructions, copying and pasting answers, and the use of automation. Whenever red flags appear, we scrutinize the participant further to decide whether they should be on Connect or not.

An especially important part of our screening process is our assessment of open-ended responses provided during onboarding. We use a combination of human and automated evaluation to score each and every open-ended response. We then combine that assessment with the behavioral and technical data we’ve gathered and make a decision about whether the participant should be allowed onto Connect. Together, these measures allow us to control quality from the sign-up process forward, and each month we block thousands of people who would otherwise provide low-quality data.

Continuous Data Quality Monitoring

Our focus on quality does not end after participants are allowed onto Connect. We run periodic surveys to check data quality from specific participants, and we track quality each month on the platform as a whole. Whenever we find instances of questionable quality, we investigate and assess whether that participant should continue on the site. In addition to these active measures, we continuously monitor activity behind the scenes to look for suspicious financial transactions, the reemergence of previously blocked accounts, and other behaviors that would warrant further scrutiny.

Monthly Data Quality Tracker. Every month, our team randomly samples about 10% of the Connect population. We conduct surveys on a rotating set of topics and include three attention check questions, among other measures of quality. When we analyze the data, we are interested in what percentage of participants pass all of our attention checks. As shown in the figure below, that percentage is often in the upper nineties. In fact, since we started running these surveys in November of 2022, the average percentage of participants passing three out of three attention checks has been 98.1%.
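For readers who want to run the same calculation on their own data, here is a minimal sketch of the pass-rate computation described above: the share of respondents who answer all three attention checks correctly. The column names and correct answers are hypothetical.

```python
import pandas as pd

# Hypothetical tracker data: one row per respondent, one column per attention check
df = pd.DataFrame({
    "ac1": ["blue", "blue", "blue", "red"],
    "ac2": ["agree", "agree", "agree", "agree"],
    "ac3": ["7", "7", "2", "7"],
})
correct = {"ac1": "blue", "ac2": "agree", "ac3": "7"}

# A respondent "passes" only if all three checks match the expected answers
passed_all = (df[list(correct)] == pd.Series(correct)).all(axis=1)
print(f"{passed_all.mean() * 100:.1f}% passed all three checks")  # 50.0% in this toy example
```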

Researcher Input. Another way our team monitors ongoing data quality is with feedback from researchers. Several tools within Connect allow researchers to provide data that we can use to monitor quality. For example, similar to other research sites, researchers can reject participants who provide poor-quality data. Rejecting people who do not follow project instructions or who provide subpar data allows the researcher to save the cost of paying that participant and helps maintain the overall health of the platform. If a participant continually receives rejections from different researchers, we will investigate and assess whether the participant should continue on the site.

Similar to issuing rejections, researchers can “flag” a participant. Flagging does not prevent the participant from getting paid, but it does serve as notice to CloudResearch that a participant may need to be investigated further. Flagging is a good option for researchers who want to help maintain the overall health of the platform but who are unable to reject participants because of institutional review policies or personal preferences. We collect data on flags and assess the quality of participants’ data with further surveys and additional scrutiny. If quality does not improve, the participant may eventually be removed from the site.

Finally, researchers can control quality by placing participants onto a Universal Exclude list. This list is specific to each researcher account, and whenever a participant is placed onto the list, that person can no longer participate in any studies launched by the researcher. In other words, the Universal Exclude list is a way to banish a participant from all of your future studies. In assessing quality, we also use data from researchers’ exclude lists: when the same participant ID is excluded by multiple researchers, we look into the participant further, removing them if necessary.


IV. Conclusion

In the rapidly changing landscape of behavioral science research, the need for efficient and affordable online participant recruitment platforms has never been more evident. CloudResearch’s Connect platform addresses this need, offering a comprehensive solution tailored to meet the diverse needs of researchers.

Connect’s commitment to ensuring data quality is unparalleled. By integrating advanced screening methods, continuous monitoring, and feedback mechanisms, the platform guarantees that researchers receive attentive, engaged, and high-quality responses from participants.

Further, Connect’s emphasis on the ethical treatment of participants sets it apart. By fostering a community through the Connect subreddit, ensuring fair compensation, and providing a platform for feedback and communication, Connect places participants at the heart of the research process. This approach not only ensures ethical standards are met but also enhances the quality and reliability of data collected.

The full suite of advanced features on Connect, from collaboration tools to sophisticated study design support, showcases the platform’s adaptability to modern research requirements. These features not only simplify the research process but also enhance its efficiency and effectiveness.

Despite its advanced features and commitment to data quality, Connect remains one of the most affordable online recruitment platforms. This affordability, combined with its superior features, positions Connect as a preferred choice for researchers seeking quality data without compromising on cost.

The Future of Connect

With plans to expand to other English-speaking countries by the end of 2023 and continuous improvements based on user feedback, Connect is poised to lead the way in online research platforms.

For Researchers, Connect is set to enhance its messaging capabilities, streamline project management, and introduce innovative tools to support diverse study types. The platform is also focusing on refining data quality measures and expanding its suite of automation and analysis features.

For Participants, Connect aims to elevate the user experience by refining notification systems, offering more insights into participation benefits, and expanding accessibility through new platforms. The introduction of loyalty and referral programs will further incentivize participation.

These upcoming enhancements underscore Connect’s commitment to innovation, user-centric design, and its dedication to setting industry standards. By continuously evolving, Connect aims to cater to the dynamic needs of the research community while safeguarding participant interests.

References

Berry, C., Kees, J., & Burton, S. (2022). Drivers of data quality in advertising research: Differences across MTurk and professional panel samples. Journal of Advertising, 51(4), 515-529. https://doi.org/10.1080/00913367.2022.2079026

Chandler, J., Rosenzweig, C., Moss, A. J., Robinson, J., & Litman, L. (2019). Online panels in social science research: Expanding sampling methods beyond Mechanical Turk. Behavior Research Methods, 51, 2022-2038. https://doi.org/10.3758/s13428-019-01273-7

Coppock, A., Leeper, T. J., & Mullinix, K. J. (2018). Generalizability of heterogeneous treatment effect estimates across samples. Proceedings of the National Academy of Sciences, 115(49), 12441-12446.

Dixit, N., & Bai, A. (2023, May 1). Businesses and investors are losing billions to fraudulent market research data. Here’s how to fix it. Nasdaq. https://www.nasdaq.com/articles/businesses-and-investors-are-losing-billions-to-fraudulent-market-research-data.-heres-how?amp

Lease, M., Hullman, J., Bigham, J., Bernstein, M., Kim, J., Lasecki, W., Bakhshi, S., Mitra, T., & Miller, R. (2013, March 6). Mechanical Turk is not anonymous. SSRN. https://ssrn.com/abstract=2228728 or http://dx.doi.org/10.2139/ssrn.2228728

Litman, L. (2023, July 14). Are businesses losing billions to market research fraud? https://www.cloudresearch.com/resources/blog/are-businesses-losing-billions-to-market-research-fraud/

Litman, L., Rosen, Z., Hartman, R., Rosenzweig, C., Weinberger-Litman, S. L., Moss, A. J., & Robinson, J. (2023). Did people really drink bleach to prevent COVID-19? A guide for protecting survey data against problematic respondents. PLOS ONE. https://doi.org/10.1371/journal.pone.0287837

Moss, A. J., Rosenzweig, C., Robinson, J., et al. (2023). Is it ethical to use Mechanical Turk for behavioral research? Relevant data from a representative survey of MTurk participants and wages. Behavior Research Methods. https://doi.org/10.3758/s13428-022-02005-0

Mullinix, K., Leeper, T., Druckman, J., & Freese, J. (2015). The generalizability of survey experiments. Journal of Experimental Political Science, 2(2), 109-138. https://doi.org/10.1017/XPS.2015.19


[1] U.S. Patents 10,572,778, 11,080,656, and 11,227,298