When it comes to online research, few topics loom as large as data quality. With recent well-documented cases of problematic respondents and fraud (Moss & Litman, 2018), data quality is often one of the key considerations when researchers choose a platform for online data collection.
Over the last decade or so, there has been a boom in the U.S. beer industry. In 2010 there were about 1,800 breweries nationwide; today, there are more than 8,300. With that kind of growth comes a lot of competition. Getting people to buy one beer over another depends not only on having a tasty product, but also on having a brand that sticks in people’s minds.
Shortly after it became clear that the Presidential election would be closer than polls forecast, many people began asking: how did the polls underestimate support for President Trump again?
As in 2016, the 2020 Presidential election was much closer than polls forecast, particularly in battleground states. The disconnect between how the media covered Presidential polls in both 2016 and 2020 and the actual outcomes of those elections left many people feeling misled.
Research is, by definition, unpredictable, but there is one inevitable annual occurrence across the insights industry: the end-of-year rush. As project deadlines and budget expirations approach, online research increases dramatically each December.
Attention check questions are frequently used by researchers to measure data quality. The goal of such checks is to differentiate between people who provide high-quality responses and those who provide low-quality or unreliable data. Although attention checks are an important component of measuring data quality, they are only one of several data points researchers should use to isolate and quarantine bad data. Others include measuring inconsistency in responses, nonsensical open-ended responses, and extreme speeding.
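The screening logic described above can be sketched in code. The following is a minimal illustration, not any platform's actual method: the column names (`passed_attention_check`, `open_ended`, `duration_seconds`), the paired-item inconsistency rule, and all thresholds are hypothetical assumptions chosen for the example.

```python
# Illustrative sketch: combining several data-quality signals to flag a
# respondent. All field names and cutoffs below are hypothetical.

def flag_respondent(r, min_seconds=120):
    """Return a list of quality flags for one respondent record (a dict)."""
    flags = []
    # Attention check: did the respondent follow an explicit instruction?
    if not r.get("passed_attention_check", True):
        flags.append("failed_attention_check")
    # Inconsistency: contradictory answers to related items
    if r.get("age_group") == "18-24" and r.get("years_experience", 0) > 20:
        flags.append("inconsistent_responses")
    # Nonsense open-ended data: far too short to be a real answer
    if len(r.get("open_ended", "").split()) < 2:
        flags.append("nonsense_open_end")
    # Extreme speeding: completion far below a plausible minimum duration
    if r.get("duration_seconds", min_seconds) < min_seconds:
        flags.append("extreme_speeding")
    return flags

respondent = {
    "passed_attention_check": False,
    "age_group": "18-24",
    "years_experience": 25,
    "open_ended": "asdf",
    "duration_seconds": 45,
}
print(flag_respondent(respondent))
```

The point of combining signals is that no single check is decisive on its own; a respondent accumulating several flags is a much stronger candidate for quarantine than one who merely misses a single attention check.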