Abstract
This article addresses the problem of retrieval result diversification in the context of social image retrieval and discusses the results achieved during the MediaEval 2013 benchmarking campaign. The 38 submitted runs and their results are described and analyzed. A comparison of expert versus crowdsourcing annotations shows that crowdsourcing results differ slightly and exhibit higher inter-observer disagreement, but are comparable at a lower cost. Multimodal approaches achieve the best results in terms of cluster recall, while manual approaches can yield high precision but often lower diversity. This detailed analysis of the results provides insights for future work on this problem.
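The abstract refers to precision and cluster recall as the evaluation criteria. As a point of reference, below is a minimal Python sketch of how these cutoff-based metrics are typically computed for a diversification task, assuming per-query ground-truth relevance labels and cluster assignments; the function names, variables, and toy data are illustrative, not the benchmark's actual evaluation code:

```python
def precision_at(ranked, relevant, n):
    """P@n: fraction of the top-n ranked images that are relevant."""
    return sum(1 for img in ranked[:n] if img in relevant) / n

def cluster_recall_at(ranked, cluster_of, n):
    """CR@n: fraction of ground-truth clusters represented in the top-n results.

    `cluster_of` maps each relevant image to its ground-truth cluster id;
    images without a cluster assignment (irrelevant ones) do not contribute.
    """
    seen = {cluster_of[img] for img in ranked[:n] if img in cluster_of}
    return len(seen) / len(set(cluster_of.values()))

# Toy example with hypothetical image ids and two ground-truth clusters:
ranked = ["a", "b", "c", "d"]
relevant = {"a", "b", "d"}
cluster_of = {"a": 1, "b": 1, "d": 2}
print(precision_at(ranked, relevant, 4))         # 0.75 (3 of 4 relevant)
print(cluster_recall_at(ranked, cluster_of, 4))  # 1.0  (both clusters covered)
```

Under this formulation, a run can trade precision for diversity: repeating images from one cluster keeps P@n high but leaves CR@n low, which is the tension the abstract describes between manual and multimodal approaches.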
| Original language | English |
|---|---|
| Title | IEEE International Conference on Image Processing (ICIP) |
| Publication date | 2014 |
| DOI | |
| Status | Published - 2014 |
| Published externally | Yes |