Building Interpretable and Reliable Open Information Retriever for New Domains Overnight
- URL: http://arxiv.org/abs/2308.04756v1
- Date: Wed, 9 Aug 2023 07:47:17 GMT
- Title: Building Interpretable and Reliable Open Information Retriever for New Domains Overnight
- Authors: Xiaodong Yu, Ben Zhou, Dan Roth
- Abstract summary: Information retrieval is a critical component of many downstream tasks such as open-domain question answering (QA).
We propose an information retrieval pipeline that uses an entity/event linking model and a query decomposition model to focus more accurately on different information units of the query.
We show that, while being more interpretable and reliable, our proposed pipeline significantly improves passage coverage and denotation accuracy across five IR and QA benchmarks.
- Score: 67.03842581848299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Information retrieval (IR), or knowledge retrieval, is a critical component of many downstream tasks such as open-domain question answering (QA). It is also very challenging, as it requires succinctness, completeness, and correctness. In recent works, dense retrieval models have achieved state-of-the-art (SOTA) performance on in-domain IR and QA benchmarks by representing queries and knowledge passages with dense vectors and learning lexical and semantic similarity. However, using single dense vectors and end-to-end supervision is not always optimal, because queries may require attention to multiple aspects and even implicit knowledge. In this work, we propose an information retrieval pipeline that uses an entity/event linking model and a query decomposition model to focus more accurately on different information units of the query. We show that, while being more interpretable and reliable, our proposed pipeline significantly improves passage coverage and denotation accuracy across five IR and QA benchmarks. Because of its superior interpretability and cross-domain performance, it is a go-to system for applications that need to perform IR on a new domain without much dedicated effort.
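As a rough illustration of the decomposition-then-retrieve idea described above, here is a minimal Python sketch; the `decompose_query` heuristic, the toy `overlap` scorer, and the max-score fusion rule are illustrative assumptions, not the paper's actual models.

```python
# Minimal sketch of a decomposition-based retrieval pipeline (illustrative only;
# the helpers and the fusion rule are assumptions, not the paper's method).
from typing import Callable

def decompose_query(query: str) -> list[str]:
    """Hypothetical decomposition: split a compound query into sub-queries.
    A real system would use a trained query decomposition model."""
    parts = [p.strip() for p in query.replace("?", "").split(" and ")]
    return parts if len(parts) > 1 else [query]

def retrieve(sub_query: str, corpus: list[str],
             score: Callable[[str, str], float], k: int = 3) -> list[tuple[float, str]]:
    """Score every passage against one sub-query and keep the top k."""
    ranked = sorted(((score(sub_query, p), p) for p in corpus), reverse=True)
    return ranked[:k]

def fused_retrieval(query: str, corpus: list[str],
                    score: Callable[[str, str], float], k: int = 3) -> list[str]:
    """Retrieve per information unit, then fuse by max score per passage."""
    best: dict[str, float] = {}
    for sub in decompose_query(query):
        for s, passage in retrieve(sub, corpus, score, k):
            best[passage] = max(s, best.get(passage, float("-inf")))
    return [p for p, _ in sorted(best.items(), key=lambda kv: -kv[1])][:k]

# Toy lexical-overlap scorer standing in for a dense or linking-based scorer.
def overlap(q: str, p: str) -> float:
    qt, pt = set(q.lower().split()), set(p.lower().split())
    return len(qt & pt) / (len(qt) or 1)

corpus = ["Paris is the capital of France.",
          "The Eiffel Tower was completed in 1889.",
          "France borders Spain and Italy."]
print(fused_retrieval("capital of France and Eiffel Tower", corpus, overlap))
```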
Related papers
- Improving Retrieval in Sponsored Search by Leveraging Query Context Signals [6.152499434499752]
We propose an approach to enhance query understanding by augmenting queries with rich contextual signals.
We use web search titles and snippets to ground queries in real-world information and utilize GPT-4 to generate query rewrites and explanations.
Our context-aware approach substantially outperforms context-free models.
arXiv Detail & Related papers (2024-07-19T14:28:53Z)
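A minimal sketch of how a query might be grounded in web titles and snippets before an LLM rewrite, per the approach summarized above; the prompt format and the `call_llm` placeholder are assumptions, not the paper's implementation.

```python
# Illustrative sketch of context-augmented query rewriting (the prompt format
# and `call_llm` are hypothetical, not the paper's implementation).
def build_rewrite_prompt(query: str, titles: list[str], snippets: list[str]) -> str:
    """Ground the query in web results before asking an LLM for rewrites."""
    context = "\n".join(f"- {t}: {s}" for t, s in zip(titles, snippets))
    return (f"Search query: {query}\n"
            f"Top web results:\n{context}\n"
            "Rewrite the query into three clearer variants and explain each.")

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM client (e.g., an API call to GPT-4)."""
    return "1. ...\n2. ...\n3. ..."

prompt = build_rewrite_prompt(
    "jaguar speed",
    titles=["Jaguar (animal) - top speed", "Jaguar F-Type specs"],
    snippets=["Jaguars can run up to 80 km/h.", "The F-Type reaches 300 km/h."])
print(call_llm(prompt))
```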
- CoIR: A Comprehensive Benchmark for Code Information Retrieval Models [56.691926887209895]
We present CoIR (Code Information Retrieval Benchmark), a robust and comprehensive benchmark specifically designed to assess code retrieval capabilities.
CoIR comprises ten meticulously curated code datasets, spanning eight distinctive retrieval tasks across seven diverse domains.
We evaluate nine widely used retrieval models using CoIR, uncovering significant difficulties in performing code retrieval tasks even with state-of-the-art systems.
arXiv Detail & Related papers (2024-07-03T07:58:20Z)
- DEXTER: A Benchmark for open-domain Complex Question Answering using LLMs [3.24692739098077]
Open-domain complex Question Answering (QA) is a difficult task with challenges in evidence retrieval and reasoning.
We evaluate state-of-the-art pre-trained dense and sparse retrieval models in an open-domain setting.
We observe that late interaction models and, surprisingly, lexical models like BM25 perform well compared to other pre-trained dense retrieval models.
arXiv Detail & Related papers (2024-06-24T22:09:50Z)
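The strong BM25 result above is easy to reproduce in spirit with a lexical baseline; below is a minimal sketch using the third-party rank_bm25 package (not part of DEXTER), with an illustrative corpus and whitespace tokenization.

```python
# Minimal BM25 retrieval baseline using the rank_bm25 package
# (pip install rank-bm25); corpus and tokenization are illustrative.
from rank_bm25 import BM25Okapi

corpus = [
    "Paris is the capital of France.",
    "The Eiffel Tower was completed in 1889.",
    "France borders Spain and Italy.",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

query = "when was the eiffel tower built".split()
scores = bm25.get_scores(query)  # one BM25 score per document

# Rank documents by descending score.
for score, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.3f}  {doc}")
```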
- Prompt-fused framework for Inductive Logical Query Answering [31.736934787328156]
We propose a query-aware prompt-fused framework named Pro-QE.
We show that our model successfully handles the issue of unseen entities in logical queries.
arXiv Detail & Related papers (2024-03-19T11:30:30Z)
- REAR: A Relevance-Aware Retrieval-Augmented Framework for Open-Domain Question Answering [115.72130322143275]
REAR is a RElevance-Aware Retrieval-augmented approach for open-domain question answering (QA).
We develop a novel architecture for LLM-based RAG systems by incorporating a specially designed assessment module.
Experiments on four open-domain QA tasks show that REAR significantly outperforms a number of previous competitive RAG approaches.
arXiv Detail & Related papers (2024-02-27T13:22:51Z)
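A minimal sketch of the relevance-gating idea behind REAR-style RAG; the overlap-based `assess_relevance` scorer, the threshold, and the `generate` placeholder are stand-ins for the paper's trained assessment module and LLM reader.

```python
# Illustrative sketch of relevance-aware RAG: score each retrieved passage,
# keep only passages above a threshold, then generate from the survivors.
# The scorer, threshold, and `generate` helper are assumptions for illustration.
def assess_relevance(query: str, passage: str) -> float:
    """Stand-in relevance scorer; a real system would use a trained module."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def generate(query: str, evidence: list[str]) -> str:
    """Placeholder for an LLM reader conditioned on the kept evidence."""
    return f"Answer to {query!r} based on {len(evidence)} passage(s)."

def relevance_aware_qa(query: str, retrieved: list[str],
                       threshold: float = 0.3) -> str:
    scored = [(assess_relevance(query, p), p) for p in retrieved]
    kept = [p for s, p in scored if s >= threshold]
    # Fall back to the single best passage if nothing clears the threshold.
    if not kept and scored:
        kept = [max(scored)[1]]
    return generate(query, kept)

print(relevance_aware_qa("capital of France",
                         ["Paris is the capital of France.",
                          "Bananas are yellow."]))
```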
- Spatial-Temporal Graph Enhanced DETR Towards Multi-Frame 3D Object Detection [54.041049052843604]
We present STEMD, a novel end-to-end framework that enhances the DETR-like paradigm for multi-frame 3D object detection.
First, to model the inter-object spatial interaction and complex temporal dependencies, we introduce the spatial-temporal graph attention network.
Finally, distinguishing the positive query from other highly similar queries that are not the best match poses a challenge for the network.
arXiv Detail & Related papers (2023-07-01T13:53:14Z)
- Neural Methods for Effective, Efficient, and Exposure-Aware Information Retrieval [7.3371176873092585]
We present novel neural architectures and methods motivated by the specific needs and challenges of information retrieval.
In many real-life IR tasks, retrieval involves extremely large collections, such as the document index of a commercial Web search engine, containing billions of documents.
arXiv Detail & Related papers (2020-12-21T21:20:16Z)
- Generation-Augmented Retrieval for Open-domain Question Answering [134.27768711201202]
We propose Generation-Augmented Retrieval (GAR) for answering open-domain questions.
We show that generating diverse contexts for a query is beneficial as fusing their results consistently yields better retrieval accuracy.
GAR achieves state-of-the-art performance on Natural Questions and TriviaQA datasets under the extractive QA setup when equipped with an extractive reader.
arXiv Detail & Related papers (2020-09-17T23:08:01Z)
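A minimal sketch of the generate-then-retrieve-and-fuse pattern GAR describes; the `generate_contexts` placeholder stands in for a trained seq2seq generator, and reciprocal rank fusion is a common fusion choice, not necessarily the paper's.

```python
# Illustrative sketch of generation-augmented retrieval: expand the query with
# generated contexts, retrieve for each expansion, and fuse the ranked lists.
def generate_contexts(query: str) -> list[str]:
    """Placeholder: a real system generates answers, titles, or sentences."""
    return [query, query + " history", query + " facts"]

def retrieve(text: str, corpus: list[str], k: int = 3) -> list[str]:
    """Toy lexical retriever returning the k best-overlapping passages."""
    t = set(text.lower().split())
    return sorted(corpus, key=lambda p: -len(t & set(p.lower().split())))[:k]

def gar_retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    fused: dict[str, float] = {}
    for context in generate_contexts(query):
        for rank, passage in enumerate(retrieve(context, corpus, k)):
            # Reciprocal rank fusion across the per-context result lists.
            fused[passage] = fused.get(passage, 0.0) + 1.0 / (60 + rank)
    return sorted(fused, key=fused.get, reverse=True)[:k]

corpus = ["The Eiffel Tower was completed in 1889.",
          "Paris is the capital of France.",
          "Gustave Eiffel designed the tower."]
print(gar_retrieve("eiffel tower", corpus))
```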
- KILT: a Benchmark for Knowledge Intensive Language Tasks [102.33046195554886]
We present a benchmark for knowledge-intensive language tasks (KILT).
All tasks in KILT are grounded in the same snapshot of Wikipedia.
We find that a shared dense vector index coupled with a seq2seq model is a strong baseline.
arXiv Detail & Related papers (2020-09-04T15:32:19Z)
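A minimal sketch of the "shared dense vector index" baseline idea using FAISS; random vectors stand in for learned embeddings, and a real KILT-style baseline would pair retrieval with a seq2seq reader.

```python
# Minimal sketch of a shared dense vector index (pip install faiss-cpu numpy).
# Random vectors stand in for passage/query embeddings from a trained encoder.
import numpy as np
import faiss

d = 128                                   # embedding dimension
rng = np.random.default_rng(0)

# "Encode" a small passage collection (placeholder for a learned encoder).
passages = rng.standard_normal((1000, d)).astype("float32")
faiss.normalize_L2(passages)              # cosine similarity via inner product

index = faiss.IndexFlatIP(d)              # exact inner-product search
index.add(passages)

# Encode a query the same way and fetch the top-5 passages.
query = rng.standard_normal((1, d)).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)
print(ids[0], scores[0])
```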