LADER: Log-Augmented DEnse Retrieval for Biomedical Literature Search
- URL: http://arxiv.org/abs/2304.04590v1
- Date: Mon, 10 Apr 2023 13:51:44 GMT
- Title: LADER: Log-Augmented DEnse Retrieval for Biomedical Literature Search
- Authors: Qiao Jin, Andrew Shin, Zhiyong Lu
- Abstract summary: Log-Augmented DEnse Retrieval (LADER) is a simple plug-in module that augments a dense retriever with the click logs retrieved from similar training queries.
LADER uses a dense retriever to find both documents and queries similar to the given query.
LADER achieves new state-of-the-art (SOTA) performance on TripClick, a recently released benchmark for biomedical literature retrieval.
- Score: 10.200377742590089
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Queries with similar information needs tend to have similar document clicks,
especially in biomedical literature search engines where queries are generally
short and top documents account for most of the total clicks. Motivated by
this, we present a novel architecture for biomedical literature search, namely
Log-Augmented DEnse Retrieval (LADER), which is a simple plug-in module that
augments a dense retriever with the click logs retrieved from similar training
queries. Specifically, LADER uses a dense retriever to find both documents and
queries that are similar to the given query. Then, LADER scores the relevant
(clicked) documents of similar queries, weighted by their similarity to the
input query.
The final document scores by LADER are the average of (1) the document
similarity scores from the dense retriever and (2) the aggregated document
scores from the click logs of similar queries. Despite its simplicity, LADER
achieves new state-of-the-art (SOTA) performance on TripClick, a recently
released benchmark for biomedical literature retrieval. On the frequent (HEAD)
queries, LADER outperforms the best retrieval model by a large margin of 39%
relative NDCG@10 (0.338 vs. 0.243). LADER also achieves better performance on
the less frequent (TORSO) queries, with an 11% relative NDCG@10 improvement
over the previous SOTA (0.303 vs. 0.272). On the rare (TAIL) queries, where
similar queries are scarce, LADER still compares favorably to the previous SOTA
method (NDCG@10: 0.310 vs. 0.295). On all queries, LADER improves the
performance of a dense retriever by 24%-37% relative NDCG@10 without requiring
additional training, and further improvement is expected as more logs become
available. Our regression analysis shows that queries that are more frequent,
have higher entropy of query similarity, and have lower entropy of document
similarity tend to benefit more from log augmentation.
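The scoring rule described in the abstract can be summarized in a short sketch. The Python snippet below is only a minimal illustration under assumed data structures (NumPy embedding matrices, a list of per-logged-query click-count dictionaries, a top_k cutoff for similar queries, and min-max normalization before averaging); the function and variable names are hypothetical and do not come from the authors' implementation.

    import numpy as np

    def lader_scores(query_vec, doc_vecs, log_query_vecs, click_logs, top_k=10):
        """Score candidate documents for one query with a LADER-style rule.

        query_vec      : (d,) dense embedding of the input query
        doc_vecs       : (n_docs, d) dense embeddings of candidate documents
        log_query_vecs : (n_log, d) dense embeddings of logged training queries
        click_logs     : list where click_logs[i] maps doc index -> click count
                         for logged query i (assumed log structure)
        """
        # (1) Document similarity scores from the dense retriever.
        doc_sims = doc_vecs @ query_vec

        # Retrieve the logged queries most similar to the input query.
        q_sims = log_query_vecs @ query_vec
        nearest = np.argsort(-q_sims)[:top_k]

        # (2) Aggregate the clicked documents of similar queries,
        #     weighted by each query's similarity to the input query.
        log_scores = np.zeros(doc_vecs.shape[0])
        for i in nearest:
            for doc_idx, clicks in click_logs[i].items():
                log_scores[doc_idx] += q_sims[i] * clicks

        # Min-max normalize both components before averaging; the abstract only
        # says the final score is their average, so this scaling is an assumption.
        def norm(x):
            span = x.max() - x.min()
            return (x - x.min()) / span if span > 0 else np.zeros_like(x)

        return 0.5 * norm(doc_sims) + 0.5 * norm(log_scores)

Since nothing in this sketch is trained, it mirrors the abstract's point that log augmentation can be added to a fixed dense retriever without additional training.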
Related papers
- Collapse of Dense Retrievers: Short, Early, and Literal Biases Outranking Factual Evidence [56.09494651178128]
Retrieval models are commonly used in Information Retrieval (IR) applications, such as Retrieval-Augmented Generation (RAG)
We show that retrievers often rely on superficial patterns like over-prioritizing document beginnings, shorter documents, repeated entities, and literal matches.
We show that these biases have direct consequences for downstream applications like RAG, where retrieval-preferred documents can mislead LLMs.
arXiv Detail & Related papers (2025-03-06T23:23:13Z)
- Learning More Effective Representations for Dense Retrieval through Deliberate Thinking Before Search [65.53881294642451]
The Deliberate Thinking based Dense Retriever (DEBATER) enhances recent dense retrievers by enabling them to learn more effective document representations through a step-by-step thinking process.
Experimental results show that DEBATER significantly outperforms existing methods across several retrieval benchmarks.
arXiv Detail & Related papers (2025-02-18T15:56:34Z)
- AdaComp: Extractive Context Compression with Adaptive Predictor for Retrieval-Augmented Large Language Models [15.887617654762629]
Retrieved documents containing noise will hinder RAG from detecting answer clues and make the inference process slow and expensive.
We introduce AdaComp, a low-cost extractive context compression method that adaptively determines the compression rate based on both query complexity and retrieval quality.
arXiv Detail & Related papers (2024-09-03T03:25:59Z)
- Optimizing Query Generation for Enhanced Document Retrieval in RAG [53.10369742545479]
Large Language Models (LLMs) excel at a variety of language tasks, but they often generate incorrect information.
Retrieval-Augmented Generation (RAG) aims to mitigate this by using document retrieval for accurate responses.
arXiv Detail & Related papers (2024-07-17T05:50:32Z)
- BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval [54.54576644403115]
Many complex real-world queries require in-depth reasoning to identify relevant documents.
We introduce BRIGHT, the first text retrieval benchmark that requires intensive reasoning to retrieve relevant documents.
Our dataset consists of 1,384 real-world queries spanning diverse domains, such as economics, psychology, mathematics, and coding.
arXiv Detail & Related papers (2024-07-16T17:58:27Z)
- Lexically-Accelerated Dense Retrieval [29.327878974130055]
LADR (Lexically-Accelerated Dense Retrieval) is a simple yet effective approach that improves the efficiency of existing dense retrieval models.
LADR consistently achieves both precision and recall that are on par with an exhaustive search on standard benchmarks.
arXiv Detail & Related papers (2023-07-31T15:44:26Z)
- DAPR: A Benchmark on Document-Aware Passage Retrieval [57.45793782107218]
We propose and name this task Document-Aware Passage Retrieval (DAPR).
While analyzing the errors of state-of-the-art (SoTA) passage retrievers, we find that the major errors (53.5%) are due to missing document context.
Our created benchmark enables future research on developing and comparing retrieval systems for the new task.
arXiv Detail & Related papers (2023-05-23T10:39:57Z)
- Query2doc: Query Expansion with Large Language Models [69.9707552694766]
The proposed method first generates pseudo-documents by few-shot prompting large language models (LLMs)
query2doc boosts the performance of BM25 by 3% to 15% on ad-hoc IR datasets.
Our method also benefits state-of-the-art dense retrievers in terms of both in-domain and out-of-domain results.
arXiv Detail & Related papers (2023-03-14T07:27:30Z)
- CAPSTONE: Curriculum Sampling for Dense Retrieval with Document Expansion [68.19934563919192]
We propose a curriculum sampling strategy that utilizes pseudo queries during training and progressively enhances the relevance between the generated query and the real query.
Experimental results on both in-domain and out-of-domain datasets demonstrate that our approach outperforms previous dense retrieval models.
arXiv Detail & Related papers (2022-12-18T15:57:46Z)
- CODER: An efficient framework for improving retrieval through COntextualized Document Embedding Reranking [11.635294568328625]
We present a framework for improving the performance of a wide class of retrieval models at minimal computational cost.
It utilizes precomputed document representations extracted by a base dense retrieval method.
It incurs a negligible computational overhead on top of any first-stage method at run time, allowing it to be easily combined with any state-of-the-art dense retrieval method.
arXiv Detail & Related papers (2021-12-16T10:25:26Z)
- Adversarial Retriever-Ranker for dense text retrieval [51.87158529880056]
We present Adversarial Retriever-Ranker (AR2), which consists of a dual-encoder retriever plus a cross-encoder ranker.
AR2 consistently and significantly outperforms existing dense retriever methods.
This includes improvements on Natural Questions R@5 to 77.9% (+2.1%), TriviaQA R@5 to 78.2% (+1.4%), and MS-MARCO MRR@10 to 39.5% (+1.3%).
arXiv Detail & Related papers (2021-10-07T16:41:15Z)
- Improving Query Representations for Dense Retrieval with Pseudo Relevance Feedback [29.719150565643965]
This paper proposes ANCE-PRF, a new query encoder that uses pseudo relevance feedback (PRF) to improve query representations for dense retrieval.
ANCE-PRF uses a BERT encoder that consumes the query and the top retrieved documents from a dense retrieval model, ANCE, and it learns to produce better query embeddings directly from relevance labels.
Analysis shows that the PRF encoder effectively captures the relevant and complementary information from PRF documents, while ignoring the noise with its learned attention mechanism.
arXiv Detail & Related papers (2021-08-30T18:10:26Z)