In Information Retrieval evaluation, pooling is a well-known technique
for selecting a sample of documents to be assessed for relevance.
Given the pooled documents, a number of studies have proposed prioritization methods that adjudicate documents for judgment, each following a different strategy to reduce the assessment effort. However, there is no clear guidance on how many relevance judgments
are required to create a reliable test collection.
In this paper we investigate and further develop methods to determine when to stop making
relevance judgments.
We propose a highly diversified set of stopping methods and
provide a comprehensive analysis of the usefulness of the resulting test collections.
Some of the stopping methods introduced here combine novel estimates of recall with time-series
models used in financial trading.
Experimental results on several representative collections show that some stopping methods can reduce
assessment effort by up to 95% while still producing a robust test collection. We demonstrate that the reduced set of judgments
can be reliably employed to compare search systems under disparate effectiveness metrics such as Average Precision, NDCG, P@100, and Rank-Biased Precision. Under all of these measures, the correlations
between full-pool rankings and reduced-pool rankings are very high.
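To make the final claim concrete, the following Python snippet is a minimal sketch (not the authors' code) of how such ranking agreement is commonly quantified, using Kendall's tau from scipy; the system scores are invented for illustration.

    # Sketch: rank correlation between full-pool and reduced-pool system
    # orderings, measured with Kendall's tau. Scores below are invented.
    from scipy.stats import kendalltau

    # Hypothetical effectiveness scores (e.g., Average Precision) for five
    # systems, evaluated with the full pool of relevance judgments ...
    full_pool = [0.312, 0.287, 0.265, 0.244, 0.198]
    # ... and with the reduced set of judgments after early stopping.
    reduced_pool = [0.305, 0.291, 0.259, 0.248, 0.190]

    # kendalltau is rank-based, so raw scores can be passed directly;
    # tau near 1.0 means the two pools order the systems almost identically.
    tau, p = kendalltau(full_pool, reduced_pool)
    print(f"Kendall's tau = {tau:.3f} (p = {p:.3f})")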
Keywords: Information Retrieval, Evaluation, Pooling, Relevance Judgments, Stopping Methods