Strong and Efficient Baselines for Open Domain Conversational Question Answering
- URL: http://arxiv.org/abs/2310.14708v1
- Date: Mon, 23 Oct 2023 08:48:14 GMT
- Title: Strong and Efficient Baselines for Open Domain Conversational Question Answering
- Authors: Andrei C. Coman, Gianni Barlacchi, Adrià de Gispert
- Abstract summary: We study the State-of-the-Art (SotA) Dense Passage Retrieval (DPR) retriever and Fusion-in-Decoder (FiD) reader pipeline.
We propose and evaluate strong yet simple and efficient baselines, by introducing a fast reranking component between the retriever and the reader.
Experiments on two ODConvQA tasks, namely TopiOCQA and OR-QuAC, show that our method improves on the SotA results while reducing the reader's latency by 60%.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unlike the Open Domain Question Answering (ODQA) setting, the conversational
(ODConvQA) domain has received limited attention when it comes to reevaluating
baselines for both efficiency and effectiveness. In this paper, we study the
State-of-the-Art (SotA) Dense Passage Retrieval (DPR) retriever and
Fusion-in-Decoder (FiD) reader pipeline, and show that it significantly
underperforms when applied to ODConvQA tasks due to various limitations. We
then propose and evaluate strong yet simple and efficient baselines, by
introducing a fast reranking component between the retriever and the reader,
and by performing targeted finetuning steps. Experiments on two ODConvQA tasks,
namely TopiOCQA and OR-QuAC, show that our method improves on the SotA results
while reducing the reader's latency by 60%. Finally, we provide new and valuable
insights into the development of challenging baselines that serve as a
reference for future, more intricate approaches, including those that leverage
Large Language Models (LLMs).
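The pipeline the abstract describes has three stages: a dense retriever fetches candidate passages, a fast reranker prunes them, and a reader generates the answer from the survivors. The following is a minimal sketch of that flow; all function names and the term-overlap scoring are illustrative placeholders, not the paper's actual DPR, reranker, or FiD models.

```python
# Toy retrieve -> rerank -> read pipeline. The scoring here is simple term
# overlap; in the paper, retrieval is dense (DPR), reranking is a fast model,
# and reading is Fusion-in-Decoder (FiD).
from dataclasses import dataclass


@dataclass
class Passage:
    text: str
    score: float  # retriever relevance score


def retrieve(question: str, corpus: list[str], k: int = 50) -> list[Passage]:
    """Stand-in for a dense retriever: rank passages by term overlap."""
    q_terms = set(question.lower().split())
    scored = [Passage(p, len(q_terms & set(p.lower().split()))) for p in corpus]
    return sorted(scored, key=lambda p: p.score, reverse=True)[:k]


def rerank(question: str, passages: list[Passage], top_n: int = 5) -> list[Passage]:
    """Stand-in for the fast reranker: keep only top_n passages, so the
    reader processes far fewer inputs (the source of the latency savings)."""
    return passages[:top_n]


def read(question: str, passages: list[Passage]) -> str:
    """Stand-in for a FiD-style reader: return the best passage's text."""
    return passages[0].text if passages else ""


corpus = [
    "TopiOCQA is a conversational question answering dataset.",
    "OR-QuAC extends QuAC to the open-retrieval setting.",
    "Dense passage retrieval encodes questions and passages separately.",
]
question = "what is TopiOCQA"
candidates = retrieve(question, corpus)
answer = read(question, rerank(question, candidates, top_n=1))
print(answer)  # the TopiOCQA passage wins on term overlap
```

The key design point the paper exploits is that the reranker is much cheaper per passage than the reader, so inserting it between retrieval and reading shrinks the reader's input set without hurting answer quality.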
Related papers
- Toward Optimal Search and Retrieval for RAG
Retrieval-augmented generation (RAG) is a promising method for addressing some of the memory-related challenges associated with Large Language Models (LLMs).
Here, we work towards the goal of understanding how retrievers can be optimized for RAG pipelines for common tasks such as Question Answering (QA).
arXiv Detail & Related papers (2024-11-11T22:06:51Z)
- RAG-QA Arena: Evaluating Domain Robustness for Long-form Retrieval Augmented Question Answering
Long-form RobustQA (LFRQA) is a new dataset covering 26K queries and large corpora across seven different domains.
We show via experiments that RAG-QA Arena and human judgments on answer quality are highly correlated.
Only 41.3% of the most competitive LLM's answers are preferred to LFRQA's answers, demonstrating RAG-QA Arena as a challenging evaluation platform for future research.
arXiv Detail & Related papers (2024-07-19T03:02:51Z)
- Conv-CoA: Improving Open-domain Question Answering in Large Language Models via Conversational Chain-of-Action
We present a Conversational Chain-of-Action (Conv-CoA) framework for Open-domain Conversational Question Answering (OCQA).
Compared with prior work, Conv-CoA addresses three major challenges: (i) unfaithful hallucination that is inconsistent with real-time or domain facts, (ii) weak reasoning performance in conversational scenarios, and (iii) unsatisfactory performance in conversational information retrieval.
arXiv Detail & Related papers (2024-05-28T04:46:52Z)
- RADE: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue
We propose the Reference-Assisted Dialogue Evaluation (RADE) approach under the multi-task learning framework.
RADE explicitly compares the reference and the candidate response to predict their overall scores.
Experiments on our three datasets and two existing benchmarks demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2023-09-15T04:47:19Z)
- Building Interpretable and Reliable Open Information Retriever for New Domains Overnight
Information retrieval is a critical component for many downstream tasks such as open-domain question answering (QA).
We propose an information retrieval pipeline that uses entity/event linking and query decomposition models to focus more accurately on different information units of the query.
We show that, while being more interpretable and reliable, our proposed pipeline significantly improves passage coverages and denotation accuracies across five IR and QA benchmarks.
arXiv Detail & Related papers (2023-08-09T07:47:17Z)
- Phrase Retrieval for Open-Domain Conversational Question Answering with Conversational Dependency Modeling via Contrastive Learning
Open-Domain Conversational Question Answering (ODConvQA) aims at answering questions through a multi-turn conversation.
We propose a method to directly predict answers with a phrase retrieval scheme for a sequence of words.
arXiv Detail & Related papers (2023-06-07T09:46:38Z)
- A Survey for Efficient Open Domain Question Answering
Open domain question answering (ODQA) is a longstanding task in natural language processing (NLP) aimed at answering factual questions from a large knowledge corpus without any explicit evidence.
arXiv Detail & Related papers (2022-11-15T04:18:53Z)
- ReAct: Temporal Action Detection with Relational Queries
This work aims at advancing temporal action detection (TAD) using an encoder-decoder framework with action queries.
We first propose a relational attention mechanism in the decoder, which guides the attention among queries based on their relations.
Lastly, we propose to predict the localization quality of each action query at inference in order to distinguish high-quality queries.
arXiv Detail & Related papers (2022-07-14T17:46:37Z)
- Multifaceted Improvements for Conversational Open-Domain Question Answering
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence based regularization leads to a better question understanding for retrieval and answer reading.
Second, the added post-ranker module can push more relevant passages to the top placements to be selected for the reader under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden passage settings of training and inference, and encourages the reader to find the true answer without golden passage assistance.
arXiv Detail & Related papers (2022-04-01T07:54:27Z)
- Joint Answering and Explanation for Visual Commonsense Reasoning
Visual Commonsense Reasoning pursues higher-level visual comprehension.
It is composed of two indispensable processes: question answering over a given image and rationale inference for answer explanation.
We present a plug-and-play knowledge distillation enhanced framework to couple the question answering and rationale inference processes.
arXiv Detail & Related papers (2022-02-25T11:26:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.