In early January 2020, we used CloudResearch to launch a study on Amazon Mechanical Turk – a platform that hosts studies online for workers (aka participants). Our study was investigating how uncertainty affects the development of paranoia, and whether paranoia is more affected by social uncertainty or general uncertainty in the environment.
A few months into 2020, the COVID-19 pandemic introduced global uncertainty. As we went into lockdown, we saw an opportunity to extend our study – what if we examined the development of paranoia as the pandemic unfolded?
In a paper recently published in Nature Human Behaviour1, we tracked paranoia and belief-updating as the pandemic progressed – prior to lockdown, during lockdown, and into reopening.
For a lab like ours, the pandemic presented an unprecedented opportunity to study human decision-making and belief formation under uncertainty.
Although people naturally experience some uncertainty in their lives, there are also occasional periods of great historical uncertainty (e.g., plagues, terrorist attacks, natural disasters). Psychologists are seldom able to gather data as these events unfold. But, thanks to online platforms for collecting data, we were able to safely study people as the pandemic unfolded.
We measured people’s choices and how they changed as payoffs varied (what we call belief-updating) in an uncertain environment. Using two versions of the Probabilistic Reversal Learning (PRL) task2, a social and a non-social one, we aimed to see whether paranoia (the idea that others are out to get you) was more related to social or non-social uncertainty.
In the non-social PRL game, people had to choose between decks of cards that had different probabilities of drawing a winning card and earning points. After a few trials in which the participant chose a winning card, the best deck changed and players had to find the new best deck (i.e., the task required participants to update their prior beliefs).
The social version of the game was similar except that people had to choose between partners to work with on a project. The partners people could choose had different probabilities of helping the participant succeed or fail on a project.
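The task structure described above can be sketched in a few lines of code. This is a minimal, illustrative simulation of the non-social version; the deck probabilities, number of trials, and reversal rule are stand-ins, not the parameters of the published task:

```python
import random

def simulate_prl(n_trials=40, reward_probs=(0.9, 0.5, 0.1),
                 reversal_after=4, seed=0):
    """Simulate a simple probabilistic reversal learning (PRL) task.

    Decks have different win probabilities. After the player picks the
    best deck `reversal_after` times in a row, the probabilities rotate
    so a different deck becomes best (illustrative rule only).
    Returns a list of (choice, win) tuples.
    """
    rng = random.Random(seed)
    probs = list(reward_probs)
    best_streak = 0
    history = []
    for _ in range(n_trials):
        choice = rng.randrange(len(probs))   # stand-in for a player's choice
        win = rng.random() < probs[choice]
        history.append((choice, win))
        if choice == probs.index(max(probs)):
            best_streak += 1
        else:
            best_streak = 0
        if best_streak >= reversal_after:    # contingency reversal
            probs = probs[1:] + probs[:1]
            best_streak = 0
    return history
```

The social version has the same underlying structure, with partners in place of decks and "helped/hindered" in place of "won/lost".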
Altogether we gathered data from 1,010 participants (including some pre-pandemic data2) over the course of 11 months.
Overall, we found no differences in decision-making or belief-updating in paranoid individuals between the non-social and social PRL versions. What we did find was that paranoia increased from March 2020 – when the World Health Organization (WHO) declared COVID-19 a pandemic – to July 2020 – when more and more states slowly began to reopen.
We found that states that were more proactive with their lockdown policies – closing early and reopening later – had lower levels of self-reported paranoia. As states began to reopen, we saw elevated levels of paranoia in states that required the use of masks in public – especially in those states where people typically follow the rules and where people perceived others were not following mask rules – compared to states that only recommended mask covering.
As paranoia increased, individual decision-making became more erratic in our task – more paranoid people switched between decks or partners more often, even after selecting a winning card or reliable partner. We also modeled belief-updating using individual choices and rewards obtained from the games and found that paranoid individuals seem to hold an elevated initial belief of task volatility – anticipating more changes in the probabilities of choosing a winning card in a deck or choosing a reliable partner. Critically, these effects were identical in our two PRL task versions – the social version was not preferentially related to paranoia which suggests paranoia may be explained through domain-general mechanisms.
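One simple way to quantify the erratic switching described above is a "win-switch rate": how often a player abandons an option on the trial immediately after it paid off. This is a simplified proxy, not the paper's exact metric or computational model:

```python
def win_switch_rate(choices, wins):
    """Fraction of winning trials followed by a switch to a different
    option on the next trial. Higher values indicate more erratic,
    post-reward switching (simplified proxy; the published analysis
    uses its own definitions and a full belief-updating model)."""
    switches_after_win = 0
    wins_with_next_trial = 0
    for t in range(len(choices) - 1):
        if wins[t]:
            wins_with_next_trial += 1
            if choices[t + 1] != choices[t]:
                switches_after_win += 1
    if wins_with_next_trial == 0:
        return 0.0
    return switches_after_win / wins_with_next_trial
```

For example, a player who wins on deck 0 and then jumps to deck 1 anyway contributes to the numerator, even though staying would have been the better strategy.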
Although the internet provided us with an opportunity to study people during a natural period of uncertainty, online research presents a number of challenges not present with in-person studies. Chief among them is identifying participants who yield quality data we can be confident in.
Early in our data collection, we pursued a one-pronged (slow but effective) approach to finding quality participants: we opened our study to participants on MTurk with a good reputation (>90% approval rate and >100 studies taken) and filtered participants based on (1) three cognitive reflection questions used to measure mental acuity and (2) a few free-response questions about their performance on the behavioral task.
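The screening criteria above can be expressed as a simple filter. The approval-rate and study-count thresholds come from the description above; the cognitive-reflection pass mark (`min_crt`) and the free-response check are hypothetical stand-ins for the study's actual rules:

```python
def passes_screening(approval_rate, studies_taken, crt_correct, free_text_ok,
                     min_approval=0.90, min_studies=100, min_crt=2):
    """Illustrative participant screen: reputation thresholds mirror
    the study's criteria (>90% approval, >100 studies taken); the CRT
    threshold and free-response flag are hypothetical placeholders."""
    return (approval_rate > min_approval
            and studies_taken > min_studies
            and crt_correct >= min_crt
            and free_text_ok)
```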
Despite these validation questions, we were still receiving bot-like and nonsensical responses. This changed dramatically with the introduction of CloudResearch’s Data Quality feature.
After adopting CloudResearch’s Approved Group, we had a two-pronged approach to recruitment: we vetted high-quality respondents via CloudResearch and then validated responses with our validation measures. We found that this approach significantly improved the quality of our data.
Another feature we heavily relied on was the US Region Targeting tool. One of our analyses involved studying the impact of state mask policies on individual paranoia levels. The ability to target recruitment in specific regions of the US allowed us to improve the representativeness of our sample. We found that participants hailing from states that mandated mask-wearing were more paranoid than those in states that merely recommended masking. Leveraging a technique from econometrics – Difference in Differences3 – we were able to infer that the mandate policy caused the increase in mistrust.
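The core of a two-group, two-period difference-in-differences estimate is simple arithmetic: compare the pre-to-post change in the treated group (here, mandate states) against the same change in the control group (recommend-only states). This sketch shows the basic estimator only; the published analysis is more involved (covariates, multiple time points), and the group labels here are illustrative:

```python
def diff_in_differences(treat_pre, treat_post, control_pre, control_post):
    """Two-group, two-period difference-in-differences estimate.

    Each argument is a list of outcome scores (e.g., paranoia scores).
    The control group's pre-to-post change is subtracted out as an
    estimate of what would have happened to the treated group anyway.
    """
    def mean(xs):
        return sum(xs) / len(xs)

    treated_change = mean(treat_post) - mean(treat_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change
```

For example, if paranoia rose by 2 points in mandate states but by 1 point in recommend-only states over the same window, the DiD estimate attributes the remaining 1-point rise to the mandate, under the usual parallel-trends assumption.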
Our online study offered an unprecedented opportunity to study people and paranoia as a worldwide pandemic progressed. Nevertheless, our study was not an experiment. When making claims about causality it’s important to take into consideration unexpected changes in our samples and other threats to the validity of our data.
Some have posited that the MTurk pool changed because of the pandemic – for example, because more people were home during lockdown. At least some data shows there was a greater proportion of Republicans participating4 on MTurk than in the past. This is important for us to understand since we found that participants who identified as Republican were more paranoid.
Using data from CloudResearch, we found that the demographic and geographic data of participants within our samples were consistent across pandemic periods. Furthermore, our samples were consistent with the demographics of available people in other CloudResearch samples at the time. Pooling data across 7,293 experiments comprising 2.5 million participants, we found that our participants did not differ from those available on the platform at the time of our studies.
Perhaps the data quality filters of CloudResearch (based on approval rate and studies completed) rendered our data relatively immune to influxes of new participants that may have coincided with our periods of interest. Furthermore, scrutinizing the data that shows changes in the participant pool more carefully reveals that the influxes of Republican participants occurred during lockdown, whilst our paranoia peak (and causal claims) were at reopening.
Despite the challenges of understanding how people responded to the pandemic, our study demonstrates some important points about how humans respond to uncertainty in their environment. In particular, our study shows that at times of great uncertainty, some people may grow less trusting of others and more erratic in their ability to update prior beliefs. If this is true, then some people may ironically become less willing to work with others at the time when we need others the most.