How CloudResearch and IARPA Completed the Largest Longitudinal Online Research Project Ever

Aaron Moss, PhD

Within the behavioral sciences, the tools researchers rely on have been rapidly refashioned over the last 10 to 20 years. Advances in technology and increased access to high-speed internet have given researchers ways to reach more diverse groups of people faster, cheaper, and with less burden than ever before. The result has been a research revolution.

High on the list of research techniques that have benefited from technology is longitudinal research. In longitudinal or tracking studies, researchers follow the same group of participants over time. Without the internet and modern technology, following the same group of people over time is obviously burdensome. Technology simplifies nearly every aspect of longitudinal research, allowing researchers to understand how phenomena unfold over time.

In the past, we have shared how to conduct longitudinal studies, how best to retain participants, and even what kind of retention rates researchers can expect. Here, we report how well these methods can work when employed to maximum effect. To do so, we describe how CloudResearch carried out the largest longitudinal study ever conducted online.


Why Did We Run This Large Longitudinal Study?

Within the last few years, the Intelligence Advanced Research Projects Activity (IARPA), the research arm of the Office of the Director of National Intelligence, sponsored an incredibly ambitious research project with the MITRE Corporation. The aim of the project was to examine how people make forecasts about future political events. To carry out the project on the scale IARPA imagined, the project team contracted with CloudResearch and several other organizations to run a large longitudinal study online.

In Part 1, the portion of the project CloudResearch was responsible for required 2,000 participants to complete a two- to three-hour forecasting task each week for 16 consecutive weeks. In Part 2, a smaller group of 1,200 people needed to complete the same forecasting task each week for 35 weeks.

CloudResearch was well positioned to carry out this study because our team consists of experts in technology and online research. In addition, our company has some of the most detailed data available on Amazon Mechanical Turk (MTurk) workers and how the platform operates. The combination of our research experience and MTurk knowledge gave us the best shot at succeeding with this difficult data collection.


How Did Our Longitudinal Sampling Strategy Go?

To say the study was a challenge would be an understatement. Managing thousands of participants, troubleshooting study-related issues, sending reminder messages, posting study materials, and ensuring we hit our weekly targets required multiple people on our team to focus on the project full-time.

However, after several rounds of testing and establishing a consistent workflow, we began to see real progress toward the study’s objectives. Each week thousands of participants on MTurk logged into the study website, spent several hours reading and integrating political news, and then constructed detailed forecasts about how future events might unfold.

After the forecasts were submitted each week, our team read each forecast to ensure quality and paid participants for their time. Early on, we had the sense that we were retaining lots of participants from week to week, but it wasn’t until the end of the study that we understood just how good our retention rates were.

In Part 1 of the study, our aim was to retain as many participants as possible while learning what did and didn't work within the study. In the end, 2,785 people completed at least one session (some people who dropped out were replaced). More importantly, 62.37% (1,737 of 2,785) completed at least 15 of the 16 weeks, and 60.10% (1,674 of 2,785) completed all 16 weeks of the study!

In Part 2, our aim was to apply what we learned in Part 1 to improve retention rates. The objective we promised to meet was having 80% of participants complete 80% of the weekly sessions. Overall, 1,559 people completed at least one session. Of the 1,295 people who started week 1 of the study, 89.7% completed at least 28 of the 35 weeks, exceeding the 80/80 threshold we promised. Even more impressively, 85.25% of people who started week 1 completed at least 32 of the 35 weeks, and 57.14% completed all 35 weeks of the study!
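
For readers who want to check the arithmetic, the short Python sketch below reproduces the retention figures from the counts reported above. The retention_rate helper is defined here purely for illustration; only the participant counts come from the study.

    # Reproduce the retention percentages from the counts reported above.
    def retention_rate(completers: int, total: int) -> float:
        """Return the percentage of `total` participants who hit a milestone."""
        return 100 * completers / total

    # Part 1: 2,785 people completed at least one session.
    print(f"{retention_rate(1737, 2785):.2f}%")  # 62.37% completed 15+ of 16 weeks
    print(f"{retention_rate(1674, 2785):.2f}%")  # 60.11% (reported as 60.10%) completed all 16

    # Part 2's 80/80 target: 80% of participants completing at least 80% of
    # the 35 weekly sessions, i.e., 0.8 * 35 = 28 sessions.
    print(f"{0.8 * 35:.0f} sessions")  # 28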


What Made this Longitudinal Research Successful?

This study was successful for a variety of reasons, most of which are outlined in Hall et al., 2020 (a chapter in this book). It is, however, also worth highlighting the role that participants on MTurk played. People on MTurk are among the most intelligent, creative, and diligent participants across all online sources. Their diligence helped the CloudResearch portion of this study succeed, while participants from other online platforms, managed by other entities, provided subpar results or failed to meet the objectives altogether.

  1. Selecting Participants

    To select participants willing to take part in a long, demanding study, we combined CloudResearch data with surveys assessing participants' interest in the study. Based on this data, we carefully selected whom we invited to participate. By recruiting motivated and active participants at the start, we improved our chances of retaining people for over eight months and minimized attrition.

  2. Study Transparency

    We told participants what would be required of them and how they would be compensated before they enrolled in the study. Being transparent about exactly what we were asking people to do, and what they would earn for doing it, was a major part of our strategy for selecting the right set of participants.

    Our study paid participants well and included incentives for accuracy in the forecasting task. Given the time demanded of participants, we compensated people at $8 to $10 per hour, on average.

  3. Bonus Payments

    Participants were entered into a lottery to win an extra $100 at both the midway point and the end of the project, based on the quality of their forecasts. This means people could earn around $560 in base pay (roughly two hours per week at $8 per hour over the 35 weeks) plus a $100 bonus if they won the lottery.

  4. Making Participation Easy

    A final piece of the puzzle, crucial to our study's success, was making it easy for participants to complete the study.

    Based on pilot testing, we decided to adopt an “anytime” or “drop-in” mode of data collection. Each week, when the study opened, we sent participants a reminder and let them complete the forecasting task at any time that worked for them. As long as a participant’s weekly forecast was submitted by the deadline, it counted. This drop-in mode of data collection allowed participants to log into the study at any time to make new forecasts, update old ones, or just check in.

    Another part of making the study as easy as possible for participants involved managing within- and between-session dropouts with timely communication. Members of our team were always available while the study was live to manage technical issues, answer participant questions, and troubleshoot problems. Clear communication in both emails and the study instructions was an important contributor to our overall success.


Online tools allow researchers to investigate more phenomena longitudinally than ever before. If you’re interested in running a longitudinal study, our MTurk Toolkit makes doing so easy and efficient. Furthermore, given the engagement of participants on MTurk, you can expect high retention rates and low attrition. If, on the other hand, you’re interested in conducting a large or complex project but lack the time, team, or expertise to do so, our managed research services can help. Just as we helped MITRE and IARPA successfully complete one of the most intense longitudinal studies ever conducted, we can help make your project a success. Contact us at support@cloudresearch.com today!
