Abstract
Divergence From Randomness (DFR) ranking models assume that informative terms are distributed in a corpus differently from non-informative terms. Different statistical models (e.g., Poisson, geometric) are used to model the distribution of non-informative terms, each yielding a different DFR model. An informative term is then detected by measuring the divergence of its distribution from the distribution of non-informative terms. However, there is little empirical evidence that the distributions of non-informative terms used in DFR actually fit current datasets. In practice, this risks a poor separation between informative and non-informative terms, compromising the discriminative power of the ranking model. We present a novel extension to DFR, which first detects the best-fitting distribution of non-informative terms in a collection and then adapts the ranking computation to this best-fitting distribution. We call this model Adaptive Distributional Ranking (ADR) because it adapts the ranking to the statistics of the specific dataset being processed. Experiments on TREC data show that ADR outperforms DFR models (and their extensions) and is comparable in performance to a query likelihood language model (LM).
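To make the adaptive step concrete, below is a minimal Python sketch of the idea as the abstract describes it: fit several candidate non-informative distributions to a term's per-document frequencies, keep the best-fitting one, and score by the DFR-style informativeness -log2 P(tf | model). The candidate set, the log-likelihood selection rule, and all function names (`best_fitting_model`, `informativeness`) are illustrative assumptions, not the paper's actual goodness-of-fit procedure or full DFR normalization.

```python
import math

def poisson_logpmf(tf: int, lam: float) -> float:
    """log P(tf) under a Poisson with mean lam (lam > 0 assumed)."""
    return -lam + tf * math.log(lam) - math.lgamma(tf + 1)

def geometric_logpmf(tf: int, lam: float) -> float:
    """log P(tf) under a geometric on {0, 1, 2, ...} with mean lam."""
    p = 1.0 / (1.0 + lam)  # success probability that gives mean lam
    return math.log(p) + tf * math.log(1.0 - p)

CANDIDATES = {"poisson": poisson_logpmf, "geometric": geometric_logpmf}

def best_fitting_model(doc_freqs: list, num_docs: int):
    """Pick the candidate with the highest log-likelihood on the term's
    observed within-document frequencies (documents without the term
    contribute a frequency of zero)."""
    lam = sum(doc_freqs) / num_docs  # mean frequency per document
    observed = doc_freqs + [0] * (num_docs - len(doc_freqs))
    best = max(CANDIDATES,
               key=lambda m: sum(CANDIDATES[m](tf, lam) for tf in observed))
    return best, lam

def informativeness(tf: int, model: str, lam: float) -> float:
    """DFR first component: Inf1(tf) = -log2 P(tf | non-informative model).
    The more surprising tf is under the best-fitting non-informative
    model, the more informative the term is in that document."""
    return -CANDIDATES[model](tf, lam) / math.log(2)

# Example: a term occurring in 3 of 10 documents with frequencies 2, 1, 5.
model, lam = best_fitting_model([2, 1, 5], num_docs=10)
print(model, informativeness(5, model, lam))
```

The key design point this sketch captures is that the non-informative model is no longer fixed in advance: it is selected per collection (here, per term, for simplicity) from the data, which is what distinguishes ADR from a standard DFR model.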
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 25th ACM International Conference on Information and Knowledge Management |
| Number of pages | 4 |
| Publisher | Association for Computing Machinery |
| Publication date | 2016 |
| Pages | 2005-2008 |
| ISBN (Electronic) | 978-1-4503-4073-1 |
| DOIs | |
| Publication status | Published - 2016 |
| Event | 25th ACM International Conference on Information and Knowledge Management, Indianapolis, United States. Duration: 24 Oct 2016 → 28 Oct 2016. Conference number: 25 |
Conference
| Conference | 25th ACM International Conference on Information and Knowledge Management |
| --- | --- |
| Number | 25 |
| Country/Territory | United States |
| City | Indianapolis |
| Period | 24/10/2016 → 28/10/2016 |