Boot and Switch: Alternating Distillation for Zero-Shot Dense Retrieval
- URL: http://arxiv.org/abs/2311.15564v1
- Date: Mon, 27 Nov 2023 06:22:57 GMT
- Title: Boot and Switch: Alternating Distillation for Zero-Shot Dense Retrieval
- Authors: Fan Jiang, Qiongkai Xu, Tom Drummond, Trevor Cohn
- Abstract summary: $\texttt{ABEL}$ is a simple but effective unsupervised method to enhance passage retrieval in zero-shot settings.
By either fine-tuning $\texttt{ABEL}$ on labelled data or integrating it with existing supervised dense retrievers, we achieve state-of-the-art results.
- Score: 50.47192086219752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural 'dense' retrieval models are state of the art for many datasets;
however, these models often exhibit limited domain transfer ability. Existing
approaches to adaptation are unwieldy, such as requiring explicit supervision,
complex model architectures, or massive external models. We present
$\texttt{ABEL}$, a simple but effective unsupervised method to enhance passage
retrieval in zero-shot settings. Our technique follows a straightforward loop:
a dense retriever learns from supervision signals provided by a reranker, and
subsequently, the reranker is updated based on feedback from the improved
retriever. By iterating this loop, the two components mutually enhance one
another's performance. Experimental results demonstrate that our unsupervised
$\texttt{ABEL}$ model outperforms both leading supervised and unsupervised
retrievers on the BEIR benchmark. Meanwhile, it exhibits strong adaptation
abilities to tasks and domains that were unseen during training. By either
fine-tuning $\texttt{ABEL}$ on labelled data or integrating it with existing
supervised dense retrievers, we achieve state-of-the-art
results.\footnote{Source code is available at
\url{https://github.com/Fantabulous-J/BootSwitch}.}
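The "boot and switch" loop described in the abstract alternates two distillation steps. A minimal sketch in Python follows, assuming duck-typed retriever/reranker objects with hypothetical retrieve/score/update methods; the authors' actual training code is in the linked repository.
```python
# Minimal sketch of the alternating ("boot and switch") distillation loop.
# The retrieve/score/update methods are hypothetical placeholder interfaces,
# illustrative of the idea rather than the authors' implementation.

def alternating_distillation(retriever, reranker, corpus, queries, rounds=3):
    for _ in range(rounds):
        # Boot: the reranker's scores over retrieved candidates serve as
        # soft labels that the retriever is distilled from.
        for query in queries:
            candidates = retriever.retrieve(query, corpus, k=100)
            soft_labels = reranker.score(query, candidates)
            retriever.update(query, candidates, soft_labels)
        # Switch: the improved retriever supplies fresh, harder candidates
        # that are used to refresh the reranker.
        for query in queries:
            candidates = retriever.retrieve(query, corpus, k=100)
            reranker.update(query, candidates)
    return retriever, reranker
```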
Related papers
- Towards Competitive Search Relevance For Inference-Free Learned Sparse Retrievers [6.773411876899064]
Inference-free sparse models lag far behind both sparse and dense siamese models in terms of search relevance.
We propose two different approaches for performance improvement. First, we introduce the IDF-aware FLOPS loss, which incorporates Inverted Document Frequency (IDF) into the sparsification of representations.
We find that it mitigates the negative impact of the FLOPS regularization on search relevance, allowing the model to achieve a better balance between accuracy and efficiency.
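A toy rendering of the idea: the standard FLOPS regularizer penalizes all vocabulary dimensions uniformly, and an IDF-aware variant can scale the penalty so that informative (high-IDF) terms are pruned less aggressively. The exact weighting in the cited paper may differ; this sketch assumes a simple 1/IDF scaling.
```python
import torch

def idf_aware_flops_loss(reps: torch.Tensor, idf: torch.Tensor) -> torch.Tensor:
    """reps: (batch, vocab) non-negative sparse term weights from the encoder;
    idf: (vocab,) inverse document frequencies. Illustrative variant only."""
    mean_activation = reps.abs().mean(dim=0)          # (vocab,)
    per_term_penalty = mean_activation.pow(2) / idf   # weaker penalty on high-IDF terms
    return per_term_penalty.sum()
```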
arXiv Detail & Related papers (2024-11-07T03:46:43Z) - Efficient Long-range Language Modeling with Self-supervised Causal Retrieval [39.24972628990943]
Grouped Cross-Attention is a novel module enabling joint pre-training of the retriever and causal LM.
By integrating top-$k$ retrieval, our model can be pre-trained efficiently from scratch with context lengths up to 64K tokens.
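The exact design of Grouped Cross-Attention is not described in this summary, but the general pattern of differentiable top-$k$ retrieval feeding a cross-attention layer can be sketched as below; this module is illustrative, not the paper's architecture.
```python
import torch
import torch.nn.functional as F

class TopKRetrievalCrossAttention(torch.nn.Module):
    """Illustrative module (not the paper's Grouped Cross-Attention): the
    current decoder state retrieves its top-k past chunk embeddings and
    cross-attends only to those, so retriever and LM can be trained jointly."""

    def __init__(self, dim: int, k: int = 4):
        super().__init__()
        self.k = k
        self.q_proj = torch.nn.Linear(dim, dim)
        self.kv_proj = torch.nn.Linear(dim, 2 * dim)

    def forward(self, hidden: torch.Tensor, chunks: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, dim); chunks: (batch, n_chunks, dim)
        q = self.q_proj(hidden)
        scores = torch.einsum("bd,bnd->bn", q, chunks)        # retrieval scores
        top_scores, top_idx = scores.topk(self.k, dim=-1)
        idx = top_idx.unsqueeze(-1).expand(-1, -1, chunks.size(-1))
        selected = chunks.gather(1, idx)                      # (batch, k, dim)
        keys, values = self.kv_proj(selected).chunk(2, dim=-1)
        attn = F.softmax(torch.einsum("bd,bkd->bk", q, keys), dim=-1)
        # Gating by the softmax of the retrieval scores keeps the top-k
        # selection differentiable with respect to the retriever.
        gate = F.softmax(top_scores, dim=-1)
        return torch.einsum("bk,bkd->bd", attn * gate, values)
```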
arXiv Detail & Related papers (2024-10-02T15:18:34Z) - Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method improves consistently over existing methods.
Our method is data efficient and outperforms competitive baselines.
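One way such a self-training round can look is sketched below; `query_generator` and `train_contrastive` are placeholder interfaces rather than the paper's API, and word dropout stands in for the noise.
```python
import random

def word_dropout(text: str, p: float = 0.1) -> str:
    # Randomly drop words: one simple way to inject noise on the student side.
    kept = [w for w in text.split() if random.random() > p]
    return " ".join(kept) if kept else text

def noisy_self_training_round(student, query_generator, passages):
    pairs = []
    for passage in passages:
        query = query_generator.generate(passage)     # synthetic query
        pairs.append((word_dropout(query), passage))  # noisy student input
    student.train_contrastive(pairs)                  # e.g., in-batch negatives
    return student
```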
arXiv Detail & Related papers (2023-11-27T06:19:50Z) - SPRINT: A Unified Toolkit for Evaluating and Demystifying Zero-shot
Neural Sparse Retrieval [92.27387459751309]
We provide SPRINT, a unified Python toolkit for evaluating neural sparse retrieval.
We establish strong and reproducible zero-shot sparse retrieval baselines on the widely used BEIR benchmark.
We show that SPLADEv2 produces sparse representations with a majority of tokens outside of the original query and document.
arXiv Detail & Related papers (2023-07-19T22:48:02Z) - AugTriever: Unsupervised Dense Retrieval and Domain Adaptation by Scalable Data Augmentation [44.93777271276723]
We propose two approaches that enable annotation-free and scalable training by creating pseudo query-document pairs.
The query extraction method involves selecting salient spans from the original document to generate pseudo queries.
The transferred query generation method utilizes generation models trained for other NLP tasks, such as summarization, to produce pseudo queries.
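A toy version of the query-extraction idea, using a simple term-frequency salience score (the paper's actual selection criteria may differ):
```python
from collections import Counter

def extract_pseudo_query(document: str, span_len: int = 8) -> str:
    # Score every contiguous span by the document-level term frequency of
    # its words and return the highest-scoring span as the pseudo query.
    words = document.split()
    tf = Counter(w.lower() for w in words)
    best_start, best_score = 0, float("-inf")
    for start in range(max(1, len(words) - span_len + 1)):
        score = sum(tf[w.lower()] for w in words[start:start + span_len])
        if score > best_score:
            best_start, best_score = start, score
    return " ".join(words[best_start:best_start + span_len])
```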
arXiv Detail & Related papers (2022-12-17T10:43:25Z) - LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text
Retrieval [55.097573036580066]
Experimental results show that LaPraDoR achieves state-of-the-art performance compared with supervised dense retrieval models.
Compared to re-ranking, our lexicon-enhanced approach can be run in milliseconds (22.5x faster) while achieving superior performance.
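One plausible reading of lexicon enhancement is to let a cheap lexical signal such as BM25 modulate the dense similarity, avoiding a cross-encoder re-ranking pass. Whether the combination is a product or a weighted sum is a design choice; a product is assumed in this sketch.
```python
import numpy as np

def lexicon_enhanced_scores(dense_q, dense_docs, bm25_scores):
    # dense_q: (dim,); dense_docs: (n_docs, dim); bm25_scores: (n_docs,)
    dense_sim = dense_docs @ dense_q   # dense similarity per document
    return bm25_scores * dense_sim     # lexical score scales the dense score

# Usage with random stand-in data:
ranking = np.argsort(-lexicon_enhanced_scores(
    np.random.rand(768), np.random.rand(1000, 768), np.random.rand(1000)))
```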
arXiv Detail & Related papers (2022-03-11T18:53:12Z) - You Only Need One Model for Open-domain Question Answering [26.582284346491686]
Recent work on Open-domain Question Answering consults an external knowledge base via a retriever model.
We propose casting the retriever and the reranker as hard-attention mechanisms applied sequentially within the transformer architecture.
We evaluate our model on the Natural Questions and TriviaQA open datasets, where it outperforms the previous state-of-the-art model by 1.0 and 0.7 exact match points, respectively.
arXiv Detail & Related papers (2021-12-14T13:21:11Z) - Adversarial Retriever-Ranker for dense text retrieval [51.87158529880056]
We present Adversarial Retriever-Ranker (AR2), which consists of a dual-encoder retriever plus a cross-encoder ranker.
AR2 consistently and significantly outperforms existing dense retriever methods.
This includes improvements on Natural Questions R@5 to 77.9% (+2.1%), TriviaQA R@5 to 78.2% (+1.4%), and MS-MARCO MRR@10 to 39.5% (+1.3%).
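The adversarial setup can be summarized as a two-player objective: the retriever is distilled toward the ranker's distribution, while the ranker learns to separate the gold passage from the retriever's proposals. A minimal sketch follows (not the authors' code, which alternates these updates with separate optimizers):
```python
import torch
import torch.nn.functional as F

def ar2_step(retriever_scores, ranker_scores, gold_idx):
    """retriever_scores, ranker_scores: (n_candidates,) for one query;
    gold_idx: position of the annotated positive among the candidates."""
    # Retriever update: match the ranker's soft distribution (distillation).
    retriever_loss = F.kl_div(
        F.log_softmax(retriever_scores, dim=-1),
        F.softmax(ranker_scores.detach(), dim=-1),
        reduction="sum",
    )
    # Ranker update: discriminate the gold passage from retrieved negatives.
    ranker_loss = F.cross_entropy(
        ranker_scores.unsqueeze(0), torch.tensor([gold_idx])
    )
    return retriever_loss, ranker_loss
```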
arXiv Detail & Related papers (2021-10-07T16:41:15Z) - Overcoming Classifier Imbalance for Long-tail Object Detection with
Balanced Group Softmax [88.11979569564427]
We provide the first systematic analysis of the underperformance of state-of-the-art models on long-tail distributions.
We propose a novel balanced group softmax (BAGS) module for balancing the classifiers within the detection frameworks through group-wise training.
Extensive experiments on the very recent long-tail large vocabulary object recognition benchmark LVIS show that our proposed BAGS significantly improves the performance of detectors.
arXiv Detail & Related papers (2020-06-18T10:24:26Z)
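The core of a balanced group softmax is to partition classes by training-instance count and normalize logits only within each group. A minimal sketch follows; the full BAGS module also adds a per-group "others" category, omitted here for brevity, and the group assignment is left to the caller.
```python
import torch
import torch.nn.functional as F

def group_softmax_loss(logits, target, groups):
    """logits: (batch, n_classes); target: (batch,) class ids; groups: list
    of LongTensors, each holding the class ids of one frequency group."""
    loss = torch.zeros(())
    for group in groups:
        # Select the samples whose target class belongs to this group.
        in_group = (target.unsqueeze(1) == group.unsqueeze(0)).any(dim=1)
        if not in_group.any():
            continue
        # Softmax/cross-entropy over this group's logits only, so head
        # classes cannot suppress tail-class logits during training.
        group_logits = logits[in_group][:, group]
        group_target = (target[in_group].unsqueeze(1)
                        == group.unsqueeze(0)).long().argmax(dim=1)
        loss = loss + F.cross_entropy(group_logits, group_target)
    return loss
```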