Evaluating Information Retrieval Metrics Based on Bootstrap Hypothesis Tests
Abstract
This paper describes how the bootstrap approach to statistics can be applied to the evaluation of IR effectiveness metrics. More specifically, we describe straightforward methods for comparing the discriminative power of IR metrics based on Bootstrap Hypothesis Tests. Unlike the somewhat ad hoc Swap Method proposed by Voorhees and Buckley, our Bootstrap Sensitivity Methods estimate the overall performance difference required to achieve a given confidence level directly from Bootstrap Hypothesis Test results. We demonstrate the usefulness of our methods using four different data sets (i.e., test collections and submitted runs) from the NTCIR CLIR track series for comparing seven IR metrics, including those that can handle graded relevance and those based on the Geometric Mean. We also show that the Bootstrap Sensitivity results are generally consistent with those based on the more ad hoc methods.
- Published: 2007-09-15
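The abstract describes the procedure only at a high level, so the following is a minimal sketch of how a paired bootstrap hypothesis test and the resulting discriminative-power count might look in practice. This is not the paper's code; the function names, the default of B = 1000 bootstrap samples, and the studentized test statistic with mean-shifted differences are illustrative assumptions drawn from the standard bootstrap testing recipe.

```python
import numpy as np
from itertools import combinations

def bootstrap_asl(x, y, b=1000, rng=None):
    """Achieved significance level (ASL) of a two-sided paired bootstrap test.

    x, y: per-topic scores of two runs under one IR metric.
    Assumes the runs are not identical on every topic (nonzero variance).
    """
    rng = rng or np.random.default_rng(0)
    z = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = len(z)
    # Studentized observed statistic for the per-topic differences.
    t_obs = z.mean() / (z.std(ddof=1) / np.sqrt(n))
    # Shift the differences so the null hypothesis (zero mean) holds.
    w = z - z.mean()
    count = 0
    for _ in range(b):
        ws = rng.choice(w, size=n, replace=True)  # resample topics with replacement
        t_star = ws.mean() / (ws.std(ddof=1) / np.sqrt(n))
        if abs(t_star) >= abs(t_obs):
            count += 1
    return count / b

def discriminative_power(runs, alpha=0.05, b=1000):
    """Fraction of run pairs the test separates at significance level alpha.

    runs: dict mapping run name -> per-topic score vector.
    """
    pairs = list(combinations(runs, 2))
    significant = sum(bootstrap_asl(runs[a], runs[c], b=b) < alpha
                      for a, c in pairs)
    return significant / len(pairs)
```

A metric with higher discriminative power separates more run pairs at the chosen significance level. The paper's Bootstrap Sensitivity methods go a step further than this sketch, estimating from the test results the overall performance difference a metric needs in order to reach a given confidence level.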
Authors
Related papers
- Evaluating Information Retrieval Metrics Based on Bootstrap Hypothesis Tests
- A further note on alternatives to Bpref (Fundamental Informatics)
- On the Task of Finding One Highly Relevant Document with High Precision
- Comparing Metrics across TREC and NTCIR: The Robustness to System Bias
- Comparing metrics across TREC and NTCIR: the robustness to system bias (Database Systems / Fundamental Informatics)