A substantial fraction of the observational and experimental data collected in the social sciences now comes from crowdsourcing platforms. The steady shift toward crowdsourced samples over the last decade means that it is more important than ever to be acquainted with the characteristics and dynamics of such samples. This chapter aims to familiarize computational social scientists with crowdsourced samples, and in particular with what has become the most popular source of such samples across the social sciences: Amazon Mechanical Turk (MTurk). The chapter answers several questions: Who joins the MTurk platform as a worker, and why? How do MTurk samples compare to traditional samples in survey and experimental research, such as undergraduate students? What challenges can researchers expect to face on MTurk? What ethical concerns do these online workplaces bring to the surface? We hope the answers to these questions will be useful to a variety of computational social scientists: aspiring survey and experimental researchers who want to familiarize themselves with the main characteristics of crowdsourcing; experienced researchers who seek a deeper understanding of the implications of their methodological choices; and outsiders who want a glimpse of how social science is conducted in the present and will be conducted in the future.
ZALLOT, C., PAOLACCI, G., CHANDLER, J., and SISSO, I. (2021). Crowdsourcing in observational and experimental research. In: Uwe Engel, Anabel Quan-Haase, Sunny Xun Liu, and Lars Lyberg (eds.), Handbook of Computational Social Science, Volume 2. 1st ed. Routledge, pp. 140-157.