Choppy: Cut Transformer For Ranked List Truncation
- URL: http://arxiv.org/abs/2004.13012v1
- Date: Sun, 26 Apr 2020 00:52:49 GMT
- Title: Choppy: Cut Transformer For Ranked List Truncation
- Authors: Dara Bahri, Yi Tay, Che Zheng, Donald Metzler, Andrew Tomkins
- Abstract summary: Choppy is an assumption-free model based on the widely successful Transformer architecture.
We show Choppy improves upon recent state-of-the-art methods.
- Score: 92.58177016973421
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Work in information retrieval has traditionally focused on ranking and
relevance: given a query, return some number of results ordered by relevance to
the user. However, the problem of determining how many results to return, i.e.
how to optimally truncate the ranked result list, has received less attention
despite being of critical importance in a range of applications. Such
truncation is a balancing act between the overall relevance, or usefulness of
the results, with the user cost of processing more results. In this work, we
propose Choppy, an assumption-free model based on the widely successful
Transformer architecture, for the ranked list truncation problem. Needing
nothing more than the relevance scores of the results, the model uses a
powerful multi-head attention mechanism to directly optimize any user-defined
IR metric. We show Choppy improves upon recent state-of-the-art methods.
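The truncation objective can be made concrete with a small example: given per-result relevance labels, the best cut point under F1 is found by scanning every prefix length and keeping the one with the highest score. The sketch below (hypothetical helper names; not the Choppy model itself, which predicts the cut from scores with a Transformer) illustrates the metric that such a model is trained to optimize:

```python
def f1_at_cut(labels, k, total_relevant):
    """F1 of returning the top-k results; labels[i] = 1 if result i is relevant."""
    if k == 0 or total_relevant == 0:
        return 0.0
    retrieved_relevant = sum(labels[:k])
    precision = retrieved_relevant / k
    recall = retrieved_relevant / total_relevant
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def oracle_cut(labels):
    """Best truncation point (how many results to return) under F1."""
    total_relevant = sum(labels)
    scores = [f1_at_cut(labels, k, total_relevant) for k in range(len(labels) + 1)]
    best_k = max(range(len(scores)), key=scores.__getitem__)
    return best_k, scores[best_k]

# Example ranking: relevant results concentrated near the top.
labels = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
k, f1 = oracle_cut(labels)  # cutting at k=4 trades one miss for high precision
```

Because F1 balances precision against recall, the oracle cut here stops before the last relevant result: returning all ten results would maximize recall but dilute precision, which is exactly the relevance-versus-user-cost trade-off the abstract describes.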
Related papers
- Optimizing Novelty of Top-k Recommendations using Large Language Models and Reinforcement Learning [16.287067991245962]
In real-world systems, an important consideration for a new model is novelty of its top-k recommendations.
We propose a reinforcement learning (RL) formulation where large language models provide feedback for the novel items.
We evaluate the proposed algorithm on improving novelty for a query-ad recommendation task on a large-scale search engine.
arXiv Detail & Related papers (2024-06-20T10:20:02Z)
- List-aware Reranking-Truncation Joint Model for Search and Retrieval-augmented Generation [80.12531449946655]
We propose a Reranking-Truncation joint model (GenRT) that can perform the two tasks concurrently.
GenRT integrates reranking and truncation via a generative paradigm based on an encoder-decoder architecture.
Our method achieves SOTA performance on both reranking and truncation tasks for web search and retrieval-augmented LLMs.
arXiv Detail & Related papers (2024-02-05T06:52:53Z)
- ReFIT: Relevance Feedback from a Reranker during Inference [109.33278799999582]
Retrieve-and-rerank is a prevalent framework in neural information retrieval.
We propose to leverage the reranker to improve recall by making it provide relevance feedback to the retriever at inference time.
arXiv Detail & Related papers (2023-05-19T15:30:33Z)
- Measurement and applications of position bias in a marketplace search engine [0.0]
Search engines intentionally influence user behavior by picking and ranking the list of results.
This paper describes our efforts at Thumbtack to understand the impact of ranking.
We include a novel discussion of how ranking bias may not only affect labels, but also model features.
arXiv Detail & Related papers (2022-06-23T14:09:58Z)
- Compactness Score: A Fast Filter Method for Unsupervised Feature Selection [66.84571085643928]
We propose a fast unsupervised feature selection method, named Compactness Score (CSUFS), to select desired features.
The proposed algorithm is shown to be more accurate and efficient than existing algorithms.
arXiv Detail & Related papers (2022-01-31T13:01:37Z)
- Learning to Select Cuts for Efficient Mixed-Integer Programming [46.60355046375608]
We propose a data-driven and generalizable cut selection approach, named Cut Ranking, in the settings of multiple instance learning.
Cut Ranking has been deployed in an industrial solver for large-scale MIPs.
It achieved an average speedup of 12.42% over the production solver with no loss of solution accuracy.
arXiv Detail & Related papers (2021-05-28T07:48:34Z)
- PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired metrics in the limit of zero temperature.
arXiv Detail & Related papers (2020-12-12T05:07:36Z)
- Surprise: Result List Truncation via Extreme Value Theory [92.5817701697342]
We propose a statistical method that produces interpretable and calibrated relevance scores at query time using nothing more than the ranked scores.
We demonstrate its effectiveness on the result list truncation task across image, text, and IR datasets.
arXiv Detail & Related papers (2020-10-19T19:15:50Z)
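A loose sketch of the general idea behind extreme-value calibration (not the Surprise paper's exact method): fit a generalized Pareto distribution to the tail of the ranked scores and use its survival function as an interpretable measure of how extreme, and hence how likely relevant, each top score is. All function and parameter names below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import genpareto

def calibrate_tail_scores(scores, tail_frac=0.2):
    """Fit a GPD to score exceedances over a high threshold and return, for
    each score above the threshold, its survival probability under the fit
    (smaller = more extreme relative to the bulk of the score distribution)."""
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]  # descending
    threshold = np.quantile(scores, 1.0 - tail_frac)
    exceedances = scores[scores > threshold] - threshold
    # Fix the location at 0 so only shape and scale are estimated.
    shape, loc, scale = genpareto.fit(exceedances, floc=0.0)
    surv = genpareto.sf(exceedances, shape, loc=0.0, scale=scale)
    return threshold, surv

rng = np.random.default_rng(0)
raw_scores = rng.normal(size=1000)          # stand-in for retrieval scores
threshold, surv = calibrate_tail_scores(raw_scores)
```

A threshold on the calibrated survival probabilities then yields a query-time truncation rule that depends only on the ranked scores, matching the "nothing more than the ranked scores" setting the summary describes.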
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all information) and is not responsible for any consequences of its use.