Crowd IQ: Measuring the intelligence of crowdsourcing platforms

Michal Kosinski, Yoram Bachrach, Gjergji Kasneci, Jurgen Van-Gael, Thore Graepel

Publication: Chapter in Book/Report/Conference proceedings › Conference contribution › peer-reviewed

42 citations (Scopus)

Abstract

We measure crowdsourcing performance based on a standard IQ questionnaire, and examine Amazon's Mechanical Turk (AMT) performance under different conditions. These include variations in the payment amount offered, the way incorrect responses affect workers' reputations, threshold reputation scores of participating AMT workers, and the number of workers per task. We show that crowds composed of high-reputation workers achieve higher performance than low-reputation crowds, and that the effect of the payment amount is non-monotone: both paying too much and paying too little reduce performance. Furthermore, higher performance is achieved when the task is designed so that incorrect responses can decrease workers' reputation scores. Using majority vote to aggregate multiple responses to the same task can significantly improve performance, which can be further boosted by dynamically allocating workers to tasks in order to break ties.
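The aggregation scheme described above can be sketched as follows: take the majority answer across workers, and when the vote is tied, dynamically request one more response before deciding. This is a minimal illustration only; the function names and the callable-worker interface are hypothetical, not the authors' implementation.

```python
from collections import Counter

def majority_vote(responses):
    """Return the majority answer among worker responses, or None on a tie.
    (Hypothetical sketch of majority-vote aggregation, not the paper's code.)"""
    counts = Counter(responses).most_common(2)
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie between the two most frequent answers
    return counts[0][0]

def aggregate_with_tiebreak(responses, request_extra_response):
    """Aggregate by majority vote; on a tie, dynamically allocate one more
    worker (here modeled as a callable returning an answer) and re-vote."""
    responses = list(responses)
    answer = majority_vote(responses)
    while answer is None:
        responses.append(request_extra_response())
        answer = majority_vote(responses)
    return answer
```

For example, `aggregate_with_tiebreak(["B", "C"], lambda: "B")` resolves the initial tie by adding one extra response and returns `"B"`.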

Original language: English
Title: Proceedings of the 4th Annual ACM Web Science Conference, WebSci'12
Publisher: Association for Computing Machinery
Pages: 151-160
Number of pages: 10
ISBN (Print): 9781450312288
DOIs
Publication status: Published - 2012
Published externally: Yes
Event: 4th Annual ACM Web Science Conference, WebSci 2012 - Evanston, IL, United States
Duration: 22 June 2012 - 24 June 2012

Publication series

Name: Proceedings of the 4th Annual ACM Web Science Conference, WebSci'12
Volume

Conference

Conference: 4th Annual ACM Web Science Conference, WebSci 2012
Country/Territory: United States
City: Evanston, IL
Period: 22/06/12 - 24/06/12
