Recruiting study participants online using Amazon's Mechanical Turk


Crowdsourcing has had a dramatic impact on the speed and scale at which scientific research can be conducted. Clinical scientists have particularly benefited from readily available research study participants and streamlined recruiting and payment systems afforded by Amazon Mechanical Turk (MTurk), a popular labor market for crowdsourcing workers. MTurk has been used in this capacity for more than five years. The popularity and novelty of the platform have spurred numerous methodological investigations, making it the most studied nonprobability sample available to researchers. This article summarizes what is known about MTurk sample composition and data quality with an emphasis on findings relevant to clinical psychological research. It then addresses methodological issues with using MTurk, many of which are common to other nonprobability samples but unfamiliar to clinical science researchers, and suggests concrete steps to avoid these issues or minimize their impact.


Speaker

Jesse Chandler, PhD, a survey researcher at Mathematica Policy Research and an Adjunct Faculty Associate at the Institute for Social Research at the University of Michigan


Learning objectives

  • Describe the potential and strengths of Mechanical Turk as a complementary participant recruitment tool for clinical and translational studies
  • Identify the types of studies for which MTurk is appropriate
  • Describe the basic features of MTurk and how they are used (a brief illustrative sketch follows this list)
  • Describe the potential weaknesses of using MTurk (e.g., data quality and validity, external validity of results)
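
The sketch below is a minimal, hedged illustration of how a study is typically posted to MTurk for recruitment: it creates a HIT (Human Intelligence Task) that links out to an externally hosted survey, using the boto3 Python library and the MTurk Requester API. The survey URL, reward, worker counts, and sandbox endpoint are illustrative assumptions, not details from the webinar; a real study would substitute its own values and consult Amazon's current documentation.

    import boto3

    # Connect to the MTurk *sandbox* (no real payments are made); the endpoint
    # URL and region are illustrative assumptions.
    mturk = boto3.client(
        "mturk",
        region_name="us-east-1",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    # An ExternalQuestion sends workers to a survey hosted elsewhere
    # (the URL here is hypothetical).
    question_xml = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.org/my-study-survey</ExternalURL>
      <FrameHeight>600</FrameHeight>
    </ExternalQuestion>"""

    # Post the study as a HIT; reward, counts, and durations are placeholders.
    response = mturk.create_hit(
        Title="15-minute research survey about everyday experiences",
        Description="Answer a brief questionnaire for a university research study.",
        Keywords="survey, research, questionnaire",
        Reward="2.50",                             # USD paid per completed assignment
        MaxAssignments=100,                        # number of unique workers (participants)
        LifetimeInSeconds=7 * 24 * 3600,           # how long the HIT stays visible
        AssignmentDurationInSeconds=3600,          # time allowed per worker
        AutoApprovalDelayInSeconds=3 * 24 * 3600,  # auto-pay if not reviewed sooner
        Question=question_xml,
        # Restrict the sample, e.g., to US-based workers via the built-in
        # locale qualification.
        QualificationRequirements=[
            {
                "QualificationTypeId": "00000000000000000071",
                "Comparator": "EqualTo",
                "LocaleValues": [{"Country": "US"}],
            }
        ],
    )

    print("Posted HIT:", response["HIT"]["HITId"])

Payment is handled through the same API: approving a worker's submitted assignment (for example with mturk.approve_assignment) releases the reward, and switching endpoint_url to the production endpoint is what makes the HIT visible to real workers and charges the requester account.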

Webinar

Slides

NIH Funding Acknowledgment (important): All publications resulting from the use of SC CTSI resources must credit the SC CTSI grant by including the NIH funding acknowledgment and must comply with the NIH Public Access Policy.