Panos Ipeirotis has been posting some very interesting information on his blog regarding a study of users on Amazon's Mechanical Turk (MT). For those not in the know, MT is basically a crowd-sourcing system, where "requesters" define small tasks that can be performed online (like adding text labels to a group of images) and offer a certain amount of money per task. "Workers" then choose which jobs they want to perform and receive small payments (in their US bank accounts) whenever they complete them. Most of the jobs are pretty menial, so the payments are quite low (a few cents per task) and workers are unlikely to get rich doing them. But the point is that all the tasks require humans to perform them, i.e. they can't be automated well by a computer program. The trick for the requesters is to design jobs in such a way that noisy results are removed automatically (e.g. by comparing results from different workers), since workers may be motivated more by speed than quality when completing tasks.
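The simplest version of that comparison trick is to assign each task to several workers and keep only answers that a majority agree on. Here's a minimal sketch of that idea (the function name and data are illustrative, not part of the Mechanical Turk API):

```python
from collections import Counter

def aggregate_labels(worker_labels):
    """Keep a label only when a strict majority of workers agree.

    worker_labels: dict mapping an item id to the list of labels
    submitted by different workers for that item.
    """
    consensus = {}
    for item, labels in worker_labels.items():
        label, votes = Counter(labels).most_common(1)[0]
        if votes > len(labels) / 2:  # strict majority wins
            consensus[item] = label
    return consensus

# Three workers label two images; img2 has no majority and is dropped.
labels = {
    "img1": ["cat", "cat", "dog"],
    "img2": ["red", "blue", "green"],
}
print(aggregate_labels(labels))  # {'img1': 'cat'}
```

In practice requesters often go further (weighting workers by their past accuracy, or inserting "gold" questions with known answers), but redundancy plus voting is the basic building block.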
Anyway, Panos has started using MT to help him with his research. While contracting out the research itself doesn't make sense (of course!), there are a number of very interesting ways different research-related tasks can be done using the Mechanical Turk. The most obvious is to get MT users to label training data for experiments in Machine Learning. But MT can also be used to set up cheap user studies relatively easily, such as those required for assessing result quality and user satisfaction in Information Retrieval. I'm not exactly sure what the ethical issues are for the latter use, but it does sound like a very good idea to take advantage of all the (bored) Internet users out there. Here is a simple example of a user study, in which the authors ask MT users to name different colors that they present to them [via Anonymous Prof].