5.31.2011

Incentives work for online labor, too


A nice experimental paper on online labor market incentives just got presented at the ACM Conference on Computer Supported Cooperative Work (CSCW). It's by John Horton, Daniel Chen, and Aaron Shaw, and it covers the impact of different incentives on success rates for a set of simple Mechanical Turk questions. Shaw summarizes it nicely on his blog:
The results surprised us. They suggest that workers perform most accurately when the task design credibly links payoffs to a worker’s ability to think about the answers that their peers are likely to provide.
That priming technique goes by the catchy name "Bayesian Truth Serum," and its setup (from the paper online) merits a little extra attention:
Bayesian Truth Serum or BTS (financial) “For the following five questions, we will also ask you to predict the responses of other workers who complete this task. There is no incentive to misreport what you truly believe to be your answers as well as others’ answers. You will have a higher probability of winning a lottery (bonus payment) if you submit answers that are more surprisingly common than collectively predicted.”
BTS was designed by Drazen Prelec at MIT, and you can get the original paper (I believe) here.
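For the formula-minded: as I read Prelec's paper, the scoring rule combines an "information score," which rewards answers that turn out to be more common than the crowd collectively predicted, with a "prediction score," which rewards accurate forecasts of everyone else's answers. Here is a rough Python sketch of that idea; the variable names, the alpha weight, and the toy data are mine, not the paper's:

    import numpy as np

    # Rough sketch of a BTS-style score (my reading of Prelec's rule).
    # Each of n respondents picks one of K answers (x) and predicts the
    # fraction of others choosing each answer (y, an n-by-K matrix).
    def bts_scores(x, y, alpha=1.0):
        x = np.asarray(x)
        y = np.asarray(y, dtype=float)
        n, K = y.shape
        eps = 1e-9  # avoids log(0)

        # Actual endorsement frequencies and the geometric mean of
        # everyone's predicted frequencies for each answer.
        x_bar = np.bincount(x, minlength=K) / n + eps
        y_bar = np.exp(np.log(y + eps).mean(axis=0))

        # Information score: reward answers that are "surprisingly common,"
        # i.e. more frequent than collectively predicted.
        info = np.log(x_bar[x] / y_bar[x])

        # Prediction score: reward accurate forecasts of others' answers.
        pred = alpha * (x_bar * np.log((y + eps) / x_bar)).sum(axis=1)

        return info + pred

    # Toy example: 5 workers answering a binary question (0 or 1).
    answers = [1, 1, 0, 1, 1]
    predictions = [[0.6, 0.4], [0.5, 0.5], [0.7, 0.3],
                   [0.4, 0.6], [0.5, 0.5]]
    print(bts_scores(answers, predictions))

The neat part is that the experimenter never needs to know the right answer; truthful reporting is rewarded purely through the relationship between stated answers and predicted answers.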

In short, the setup exemplifies what I think is particularly great about data analysis these days: Shaw, Horton, and Chen manage to put together a very tight, deeply informative, and even slightly controversial paper (the psychological and sociological priming treatments turn out not to be so effective) that required nothing more than an internet connection, some computational power for the stats, and a clever eye. Go check it out.

