Sunsetting the MTurk Toolkit

We’re retiring the Mechanical Turk (MTurk) Toolkit at CloudResearch. By the end of 2026, the product will no longer be available.
The MTurk Toolkit was our first product. It’s the thing that turned us into a company. Before the MTurk Toolkit, if you wanted to run a study, you were chasing students across a college campus, paying a pollster to walk around a mall, or grinding through cold calls until somebody picked up. We made all of that obsolete. Studies that used to take months and cost thousands of dollars suddenly took minutes and cost a fraction of the price. That’s what democratizing research actually looked like, and we did it.
At its peak, thousands of researchers were running daily studies through the MTurk Toolkit. Today that number is in the single digits per day. So this is less a decision than an acknowledgment: the research community has already moved on.
Starting in 2018, MTurk began suffering a quality crisis it has never adequately addressed. To Amazon’s credit, they reached out to us repeatedly about working on a solution together. To my disappointment, those conversations never went anywhere. Every time they said “we’ll speak to you in a week,” it was more than a year before we’d hear from them again. The honest read on the past seven or eight years is that MTurk has not been managed in a way that lived up to its hope and potential. There is no question it was a trailblazer and a leader at one point. But Amazon ceded that ground to others, including us, and the result is a story about mismanaging something that could have been a major disruptor. You don’t keep excellence unless you actively manage it.
For years, we held the line on what we could control. We invested heavily in finding the right participants, curating the pool, identifying low-attention behaviors, and giving the research community a slice of MTurk we could stand behind. We documented what we were seeing and how we were dealing with it in pieces like After the Bot Scare and our more recent work on AI agent detection. Eventually, even that work wasn’t enough. The fraud reached a level where I could not in good faith certify MTurk as reliable for research, and I wasn’t going to pretend otherwise.
By now, almost all of our customers have migrated to the tools we’ve built since: Connect for samples, Engage for surveys, and Sentry working in the background to keep the data clean. Both Connect and Engage have data quality controls built into them, not bolted on after the fact. Independent researchers, and even our competitors, will tell you the same.
So my head has been clear on this for a while. My heart has had a harder time with it.
I built the MTurk Toolkit from the ground up with Leib Litman and Tzvi Abberbock when this company wasn’t even called CloudResearch. Anyone remember TurkPrime? That product is the reason any of this exists. The team that built it deserves enormous credit, especially Leib and Tzvi, my partners in that work from the very beginning.
I’m proud of what the MTurk Toolkit made possible for the research community, and I’m even prouder of what we’ve built to replace it. The mission hasn’t changed — only the tools.
Jonathan Robinson
Co-CEO and CTO, CloudResearch