Abstract
The 2013 MediaEval Crowdsourcing task addressed the problem of working with noisy crowdsourced annotations of image data. The aim of the task was to investigate techniques for estimating the true label of an image from a set of noisy crowdsourced labels, optionally supplemented by content and metadata from the image itself. For the runs in this paper, we applied a shotgun approach and tried a number of existing techniques, including generative probabilistic models and further crowdsourcing.
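The abstract does not name the specific generative probabilistic models used; a classic model for aggregating noisy crowd labels in this setting is Dawid-Skene style EM over per-worker confusion matrices. The sketch below is a minimal illustration under that assumption; the function name `dawid_skene`, its parameters, and the toy data are hypothetical and not taken from the paper.

```python
import numpy as np

def dawid_skene(labels, n_classes, n_iter=50):
    """Estimate true item labels from noisy worker votes via EM.

    labels: (n_items, n_workers) int array of class ids, -1 where a
    worker did not label an item. Assumes every item has >= 1 label.
    Returns (hard_labels, posterior) over the n_classes classes.
    """
    n_items, n_workers = labels.shape
    mask = labels >= 0

    # Initialise posteriors with per-item vote fractions (majority vote).
    post = np.zeros((n_items, n_classes))
    for k in range(n_classes):
        post[:, k] = (labels == k).sum(axis=1)
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class priors and per-worker confusion matrices
        # conf[w, true, observed], with small additive smoothing.
        prior = post.mean(axis=0) + 1e-6
        prior /= prior.sum()
        conf = np.full((n_workers, n_classes, n_classes), 1e-6)
        for w in range(n_workers):
            seen = mask[:, w]
            for k in range(n_classes):
                conf[w, :, k] += post[seen][labels[seen, w] == k].sum(axis=0)
        conf /= conf.sum(axis=2, keepdims=True)

        # E-step: posterior over true labels given the worker responses.
        log_post = np.tile(np.log(prior), (n_items, 1))
        for w in range(n_workers):
            seen = mask[:, w]
            log_post[seen] += np.log(conf[w, :, labels[seen, w]])
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)

    return post.argmax(axis=1), post

if __name__ == "__main__":
    # Three workers label four items with binary classes; -1 = missing.
    votes = np.array([[0, 0, 1],
                      [1, 1, -1],
                      [0, 1, 0],
                      [-1, 1, 1]])
    hard, soft = dawid_skene(votes, n_classes=2)
    print(hard, soft.round(3))
```

Simple majority voting is the degenerate case of this model with identical, perfectly diagonal confusion matrices; the EM refinement lets reliable workers outvote unreliable ones.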
| Original language | English |
| --- | --- |
| Journal | CEUR Workshop Proceedings |
| Volume | 1043 |
| State | Published - 2013 |
| Externally published | Yes |
| Event | 2013 Multimedia Benchmark Workshop, MediaEval 2013 - Barcelona, Spain |
| Duration | 18 Oct 2013 → 19 Oct 2013 |