Optimizing Test-Time Query Representations for Dense Retrieval
- URL: http://arxiv.org/abs/2205.12680v3
- Date: Sun, 28 May 2023 06:24:04 GMT
- Title: Optimizing Test-Time Query Representations for Dense Retrieval
- Authors: Mujeen Sung, Jungsoo Park, Jaewoo Kang, Danqi Chen, Jinhyuk Lee
- Abstract summary: TOUR improves query representations guided by test-time retrieval results.
We leverage a cross-encoder re-ranker to provide fine-grained pseudo labels over retrieval results.
TOUR consistently improves direct re-ranking by up to 2.0% while running 1.3-2.4x faster.
- Score: 34.61821330771046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments of dense retrieval rely on quality representations of
queries and contexts from pre-trained query and context encoders. In this
paper, we introduce TOUR (Test-Time Optimization of Query Representations),
which further optimizes instance-level query representations guided by signals
from test-time retrieval results. We leverage a cross-encoder re-ranker to
provide fine-grained pseudo labels over retrieval results and iteratively
optimize query representations with gradient descent. Our theoretical analysis
reveals that TOUR can be viewed as a generalization of the classical Rocchio
algorithm for pseudo relevance feedback, and we present two variants that
leverage pseudo-labels as hard binary or soft continuous labels. We first apply
TOUR on phrase retrieval with our proposed phrase re-ranker, and also evaluate
its effectiveness on passage retrieval with an off-the-shelf re-ranker. TOUR
greatly improves end-to-end open-domain question answering accuracy, as well as
passage retrieval performance. TOUR also consistently improves direct
re-ranking by up to 2.0% while running 1.3-2.4x faster with an efficient
implementation.
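To make the procedure concrete, here is a minimal sketch of the test-time loop described above, assuming a frozen dense retriever behind a hypothetical `index.search` interface and a cross-encoder behind a hypothetical `rerank` callable; the step count, learning rate, and threshold `tau` are illustrative, not the authors' settings:

```python
import torch
import torch.nn.functional as F

def tour(q_emb, index, rerank, steps=3, k=100, lr=0.1, soft=True, tau=0.5):
    """Iteratively optimize one query embedding at test time (TOUR-style sketch).

    q_emb:  [d] initial query vector from the frozen query encoder
    index:  hypothetical ANN index; .search(q, k) -> (scores, doc_embs [k, d])
    rerank: hypothetical cross-encoder; returns [k] pseudo-label scores in [0, 1]
    """
    q = q_emb.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([q], lr=lr)
    for _ in range(steps):
        _, doc_embs = index.search(q.detach(), k)   # test-time retrieval
        sims = doc_embs @ q                         # differentiable w.r.t. q
        pseudo = rerank(doc_embs)                   # re-ranker pseudo labels
        if soft:                                    # soft continuous labels
            loss = F.kl_div(F.log_softmax(sims, dim=0),
                            F.softmax(pseudo, dim=0), reduction="sum")
        else:                                       # hard binary labels
            loss = F.binary_cross_entropy_with_logits(sims, (pseudo > tau).float())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return q.detach()
```

With hard labels and a single plain-SGD step, the update adds a weighted combination of positive passage vectors to the query vector and subtracts negative ones, which is the sense in which the paper views TOUR as a generalization of the Rocchio update for pseudo relevance feedback.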
Related papers
- Lexically-Accelerated Dense Retrieval [29.327878974130055]
LADR (Lexically-Accelerated Dense Retrieval) is a simple yet effective approach that improves the efficiency of existing dense retrieval models.
LADR consistently achieves both precision and recall that are on par with an exhaustive search on standard benchmarks.
arXiv Detail & Related papers (2023-07-31T15:44:26Z)
- Graph Convolution Based Efficient Re-Ranking for Visual Retrieval [29.804582207550478]
We present an efficient re-ranking method which refines initial retrieval results by updating features.
Specifically, we reformulate re-ranking based on Graph Convolutional Networks (GCNs) and propose a novel Graph Convolution based Re-ranking (GCR) method for visual retrieval tasks via feature propagation.
In particular, the plain GCR is extended for cross-camera retrieval and an improved feature propagation formulation is presented to leverage affinity relationships across different cameras.
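As a rough illustration of the feature-propagation idea (not the paper's implementation), one layer of re-ranking-oriented graph convolution can be sketched as follows; the kNN sparsification, row normalization, and layer count here are assumptions:

```python
import torch
import torch.nn.functional as F

def gcr_refine(feats, knn=10, layers=2):
    """Refine retrieval features by propagating them over a kNN affinity graph.

    feats: [n, d] L2-normalized query + gallery features
    """
    for _ in range(layers):
        sim = feats @ feats.t()                       # [n, n] cosine affinities
        topk = sim.topk(knn, dim=1)                   # sparsify to k neighbors
        adj = torch.zeros_like(sim).scatter_(1, topk.indices, topk.values)
        adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-6)  # row-normalize
        feats = F.normalize(adj @ feats, dim=1)       # one propagation layer
    return feats
```

Re-ranking then amounts to re-scoring the refined query feature against the refined gallery features.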
arXiv Detail & Related papers (2023-06-15T00:28:08Z) - ReFIT: Relevance Feedback from a Reranker during Inference [109.33278799999582]
Retrieve-and-rerank is a prevalent framework in neural information retrieval.
We propose to leverage the reranker to improve recall by making it provide relevance feedback to the retriever at inference time.
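The feedback step can be sketched as a one-shot distillation of the reranker's score distribution over the top-k passages into the query vector, after which retrieval runs once more with the updated vector; the optimizer choice and step count below are assumptions:

```python
import torch
import torch.nn.functional as F

def refit_update(q_emb, doc_embs, reranker_scores, lr=0.05, steps=8):
    """Fit the query vector so retriever scores over the top-k passages
    match the reranker's distribution (ReFIT-style sketch).

    q_emb: [d], doc_embs: [k, d], reranker_scores: [k]
    """
    q = q_emb.clone().detach().requires_grad_(True)
    target = F.softmax(reranker_scores, dim=0)      # teacher distribution
    opt = torch.optim.Adam([q], lr=lr)
    for _ in range(steps):
        log_p = F.log_softmax(doc_embs @ q, dim=0)  # retriever distribution
        loss = F.kl_div(log_p, target, reduction="sum")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return q.detach()                               # re-retrieve with this q
```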
arXiv Detail & Related papers (2023-05-19T15:30:33Z)
- Noise-Robust Dense Retrieval via Contrastive Alignment Post Training [89.29256833403167]
Contrastive Alignment POst Training (CAPOT) is a highly efficient finetuning method that improves model robustness without requiring index regeneration.
CAPOT enables robust retrieval by freezing the document encoder while the query encoder learns to align noisy queries with their unaltered root.
We evaluate CAPOT on noisy variants of MSMARCO, Natural Questions, and TriviaQA passage retrieval, finding that CAPOT has a similar impact to data augmentation with none of its overhead.
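A minimal sketch of the alignment objective, assuming in-batch negatives and an InfoNCE-style loss (the exact objective and temperature are assumptions); because the document encoder stays frozen, the index never needs to be rebuilt:

```python
import torch
import torch.nn.functional as F

def capot_loss(noisy_emb, clean_emb, temperature=0.05):
    """Contrastive alignment: pull each noisy query toward its clean root,
    push it away from other queries in the batch (CAPOT-style sketch).

    noisy_emb: [b, d] trainable query encoder applied to corrupted queries
    clean_emb: [b, d] frozen encoder applied to the unaltered root queries
    """
    noisy = F.normalize(noisy_emb, dim=1)
    clean = F.normalize(clean_emb, dim=1)
    logits = noisy @ clean.t() / temperature        # [b, b] similarities
    labels = torch.arange(logits.size(0), device=logits.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)
```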
arXiv Detail & Related papers (2023-04-06T22:16:53Z)
- Learning Decoupled Retrieval Representation for Nearest Neighbour Neural Machine Translation [16.558519886325623]
kNN-MT successfully incorporates an external corpus by retrieving word-level representations at test time.
In this work, we highlight that coupling the representations used for translation and retrieval is sub-optimal for fine-grained retrieval.
We leverage supervised contrastive learning to learn a distinct retrieval representation derived from the original context representation.
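A hedged sketch of what decoupling might look like: a small projection head maps the frozen decoder state to a separate datastore key, trained with a supervised contrastive loss that treats states predicting the same target token as positives (the head architecture and the positive-pair definition are assumptions):

```python
import torch
import torch.nn.functional as F

class RetrievalHead(torch.nn.Module):
    """Projection decoupling the datastore key from the decoder state."""
    def __init__(self, d_model, d_key):
        super().__init__()
        self.proj = torch.nn.Linear(d_model, d_key)

    def forward(self, hidden):                      # hidden: [n, d_model]
        return F.normalize(self.proj(hidden), dim=1)

def sup_con_loss(keys, tokens, temperature=0.1):
    """Supervised contrastive loss: states sharing a target token are positives."""
    sim = keys @ keys.t() / temperature             # [n, n]
    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    pos = ((tokens[:, None] == tokens[None, :]) & ~self_mask).float()
    log_p = F.log_softmax(sim.masked_fill(self_mask, -1e9), dim=1)
    return -(pos * log_p).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```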
arXiv Detail & Related papers (2022-09-19T03:19:38Z)
- LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval [55.097573036580066]
Experimental results show that LaPraDoR achieves state-of-the-art performance compared with supervised dense retrieval models.
Compared to re-ranking, our lexicon-enhanced approach can be run in milliseconds (22.5x faster) while achieving superior performance.
arXiv Detail & Related papers (2022-03-11T18:53:12Z)
- Pairwise Supervised Hashing with Bernoulli Variational Auto-Encoder and Self-Control Gradient Estimator [62.26981903551382]
Variational auto-encoders (VAEs) with binary latent variables provide state-of-the-art performance in terms of precision for document retrieval.
We propose a pairwise loss function with discrete latent VAE to reward within-class similarity and between-class dissimilarity for supervised hashing.
This new semantic hashing framework achieves superior performance compared to the state of the art.
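As an illustration of the pairwise objective only (the Bernoulli-VAE reconstruction term and the gradient estimator are omitted, and the margin is an assumption), a sketch over relaxed binary codes:

```python
import torch

def pairwise_hashing_loss(codes, labels, margin=6.0):
    """Reward small expected Hamming distance within a class and distances
    above a margin across classes (sketch).

    codes:  [n, m] Bernoulli probabilities (relaxed binary codes)
    labels: [n]    class ids
    """
    # Expected Hamming distance between independent relaxed codes
    dist = (codes[:, None, :] * (1 - codes[None, :, :])
            + (1 - codes[:, None, :]) * codes[None, :, :]).sum(-1)  # [n, n]
    same = (labels[:, None] == labels[None, :]).float()
    # Diagonal terms just push probabilities toward 0/1, a harmless regularizer.
    loss = same * dist + (1 - same) * torch.relu(margin - dist)
    return loss.mean()
```

At retrieval time the relaxed codes would be thresholded at 0.5 into binary hashes and compared by Hamming distance.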
arXiv Detail & Related papers (2020-05-21T06:11:33Z)
- Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence Lip-Reading [96.48553941812366]
Lip-reading aims to infer the speech content from the lip movement sequence.
The traditional learning process of seq2seq models suffers from two problems.
We propose a novel pseudo-convolutional policy gradient (PCPG) based method to address these two problems.
arXiv Detail & Related papers (2020-03-09T09:12:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.