In this blog, we outline the history of our Block Duplicate Geolocations tool, provide an overview of what geolocations are and the information they convey, and present the results of a study that examined the quality of data obtained from repeated geolocations that are not linked to server farms. We conclude by outlining the steps we are taking to change the default options on our Block Duplicate Geolocations feature.
Most social science research relies on convenience sampling of participants, meaning few samples resemble, let alone represent, the general population. For many research questions, convenience samples are not a problem. For other questions, however, being able to capture and represent the opinions of people from different groups is essential. Because most researchers do not routinely gather these kinds of samples, knowing where to find one when it’s needed can be difficult.
People of different ages vary greatly in their beliefs and behaviors. For example, a recent Pew report outlines wide generational gaps in people’s opinions on several political issues like presidential job approval, perceptions of racism, views on immigration, and political ideology (Pew Research Center, 2018). Furthermore, some issues, like the use of Medicare, depend on age and therefore are more relevant to older adults than younger ones.
At CloudResearch, we advocate for requesters to treat workers fairly when posting HITs on Amazon’s Mechanical Turk (MTurk). Workers are, after all, the people who make the research possible. Sometimes, however, an MTurk worker is unable to receive payment despite having completed a survey. Below are two common scenarios in which this can happen:
Studying pairs of people (e.g., married couples, friends, coworkers) is becoming increasingly common in the social and behavioral sciences. Online participant populations, such as Mechanical Turk and other online panels, can potentially serve as a rich source of dyadic...
About a month ago, we published our After the Bot Scare blog post on workers providing poor-quality data on Amazon’s Mechanical Turk. This month, we followed up with our “farmers” to assess the effectiveness of the tools we created to deal with the problem. In this blog, we present data from our follow-up study and evidence suggesting our tools are working.