PI: Javed A. Aslam
The project instead investigates a new information retrieval evaluation paradigm based on nuggets. The thesis is that while it is likely impossible to find all relevant documents for a query in web-scale and/or dynamic collections, it is far more tractable to find all or nearly all relevant information, with which one can then perform effective, reusable evaluation at scale and with ease. These atomic units of relevant information are referred to as "nuggets", and one instantiation of a nugget is simply the sentence or short passage that causes a judge to deem a document relevant at assessment time. At evaluation time, relevance assessments are created dynamically for each retrieved document based on the quantity and quality of the relevant information it contains. This evaluation paradigm is inherently scalable and supports all standard measures of retrieval performance, including those involving graded relevance judgments, novelty, and diversity; it further permits new kinds of evaluations not heretofore possible.
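The core mechanism described above, deriving a document's relevance judgment dynamically from the nuggets it contains, can be sketched in a few lines. The document names, nugget sets, and scoring functions below are illustrative assumptions, not the project's actual implementation; they show only how nugget containment could yield graded relevance and novelty-aware gains.

```python
# Hypothetical nugget-based evaluation sketch. Each nugget is an atomic
# unit of relevant information (e.g. the sentence that makes a document
# relevant). We map documents to the nuggets they contain; a document
# with no nuggets is non-relevant.
DOC_NUGGETS = {
    "d1": {"n1", "n2"},
    "d2": {"n2"},
    "d3": set(),        # non-relevant: contains no nuggets
    "d4": {"n3"},
}

def graded_relevance(doc):
    """Dynamic graded judgment: the quantity of relevant info in doc."""
    return len(DOC_NUGGETS.get(doc, set()))

def novelty_gains(ranking):
    """Novelty-aware gains: a nugget counts only the first time it is seen."""
    seen = set()
    gains = []
    for doc in ranking:
        new = DOC_NUGGETS.get(doc, set()) - seen
        gains.append(len(new))
        seen |= new
    return gains

# d2 contributes no novelty gain because its only nugget, n2,
# was already covered by d1 earlier in the ranking.
print(graded_relevance("d1"))            # 2
print(novelty_gains(["d1", "d2", "d4"])) # [2, 0, 1]
```

Because the judgments are computed from nugget containment rather than fixed per-document labels, the same nugget set can score any ranked list, including lists over documents never judged directly, which is what makes the paradigm reusable at scale.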