A Graph-guided Multi-round Retrieval Method for Conversational
Open-domain Question Answering
- URL: http://arxiv.org/abs/2104.08443v1
- Date: Sat, 17 Apr 2021 04:39:41 GMT
- Authors: Yongqi Li, Wenjie Li, Liqiang Nie
- Abstract summary: We propose a novel graph-guided retrieval method to model the relations among answers across conversation turns.
We also propose to incorporate the multi-round relevance feedback technique to explore the impact of the retrieval context on current question understanding.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, conversational agents have provided a natural and convenient
access to useful information in people's daily life, along with a broad and new
research topic, conversational question answering (QA). Among the popular
conversational QA tasks, conversational open-domain QA, which requires retrieving
relevant passages from the Web to extract exact answers, is more
practical but less studied. The main challenge is how to well capture and fully
explore the historical context in conversation to facilitate effective
large-scale retrieval. The current work mainly utilizes history questions to
refine the current question or to enhance its representation, yet the relations
between history answers and the current answer in a conversation, which are also
critical to the task, are totally neglected. To address this problem, we
propose a novel graph-guided retrieval method to model the relations among
answers across conversation turns. In particular, it utilizes a passage graph
derived from hyperlink-connected passages that contain history answers and
potential current answers, to retrieve more relevant passages for subsequent
answer extraction. Moreover, in order to collect more complementary information
in the historical context, we also propose to incorporate the multi-round
relevance feedback technique to explore the impact of the retrieval context on
current question understanding. Experimental results on the public dataset
verify the effectiveness of our proposed method. Notably, the F1 score is
improved by 5% and 11% with predicted history answers and true history answers,
respectively.
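The graph-guided retrieval idea described above can be illustrated with a minimal sketch: starting from the passages that contain history answers, follow hyperlinks for a bounded number of hops to collect candidate passages for the current turn. All names here (`graph_guided_candidates`, the toy `links` graph) are hypothetical and not from the paper; this shows only the candidate-collection step, not the full retriever.

```python
from collections import deque

def graph_guided_candidates(history_answer_passages, hyperlinks, max_hops=2):
    """Collect candidate passages reachable from history-answer passages
    via hyperlinks (a hypothetical sketch of the passage-graph idea).

    hyperlinks: dict mapping a passage id to the list of passage ids it links to.
    """
    frontier = deque((pid, 0) for pid in history_answer_passages)
    seen = set(history_answer_passages)
    candidates = []
    while frontier:
        pid, hops = frontier.popleft()
        candidates.append(pid)  # every visited passage becomes a candidate
        if hops < max_hops:
            for nxt in hyperlinks.get(pid, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, hops + 1))
    return candidates

# Toy hyperlink graph (illustrative only)
links = {"p1": ["p2", "p3"], "p2": ["p4"], "p3": [], "p4": ["p5"]}
print(graph_guided_candidates(["p1"], links, max_hops=2))
# → ['p1', 'p2', 'p3', 'p4']  (p5 lies beyond the 2-hop limit)
```

In the paper's pipeline, such hyperlink-derived candidates would then be scored by a retriever before answer extraction; the breadth-first traversal here only bounds the search space.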
Related papers
- Consistency Training by Synthetic Question Generation for Conversational Question Answering [14.211024633768986]
We augment historical information with synthetic questions to make the reasoning robust to irrelevant history.
This is the first instance of research using question generation as a form of data augmentation to model conversational QA settings.
arXiv Detail & Related papers (2024-04-17T06:49:14Z) - History-Aware Conversational Dense Retrieval [31.203399110612388]
We propose a History-Aware Conversational Dense Retrieval (HAConvDR) system, which incorporates two ideas: context-denoised query reformulation and automatic mining of supervision signals.
Experiments on two public conversational search datasets demonstrate the improved history modeling capability of HAConvDR.
arXiv Detail & Related papers (2024-01-30T01:24:18Z) - Social Commonsense-Guided Search Query Generation for Open-Domain
Knowledge-Powered Conversations [66.16863141262506]
We present a novel approach that focuses on generating internet search queries guided by social commonsense.
Our proposed framework addresses passive user interactions by integrating topic tracking, commonsense response generation and instruction-driven query generation.
arXiv Detail & Related papers (2023-10-22T16:14:56Z) - Learning to Select the Relevant History Turns in Conversational Question
Answering [27.049444003555234]
The dependency between relevant history selection and correct answer prediction is an intriguing but under-explored area.
We propose a framework, DHS-ConvQA, that first generates the context and question entities for all the history turns.
We demonstrate that selecting relevant turns works better than rewriting the original question.
arXiv Detail & Related papers (2023-08-04T12:59:39Z) - Open-Domain Conversational Question Answering with Historical Answers [29.756094955426597]
This paper proposes ConvADR-QA that leverages historical answers to boost retrieval performance.
In the proposed framework, the retrievers use a teacher-student approach to reduce noise from previous turns.
Our experiments on the benchmark dataset, OR-QuAC, demonstrate that our model outperforms existing baselines in both extractive and generative reader settings.
arXiv Detail & Related papers (2022-11-17T08:20:57Z) - Conversational QA Dataset Generation with Answer Revision [2.5838973036257458]
We introduce a novel framework that extracts question-worthy phrases from a passage and then generates corresponding questions considering previous conversations.
Our framework revises the extracted answers after generating questions so that answers exactly match paired questions.
arXiv Detail & Related papers (2022-09-23T04:05:38Z) - Multifaceted Improvements for Conversational Open-Domain Question
Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence-based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module pushes more relevant passages to the top placements for the reader under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
arXiv Detail & Related papers (2022-04-01T07:54:27Z) - A Dataset of Information-Seeking Questions and Answers Anchored in
Research Papers [66.11048565324468]
We present a dataset of 5,049 questions over 1,585 Natural Language Processing papers.
Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text.
We find that existing models that do well on other QA tasks do not perform well on answering these questions, underperforming humans by at least 27 F1 points when answering them from entire papers.
arXiv Detail & Related papers (2021-05-07T00:12:34Z) - Open-Retrieval Conversational Question Answering [62.11228261293487]
We introduce an open-retrieval conversational question answering (ORConvQA) setting, where we learn to retrieve evidence from a large collection before extracting answers.
We build an end-to-end system for ORConvQA, featuring a retriever, a reranker, and a reader that are all based on Transformers.
arXiv Detail & Related papers (2020-05-22T19:39:50Z) - Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term
Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals.
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
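The frequency-based term importance estimation mentioned in the last entry can be sketched as follows: count non-stopword terms across the conversation history and append the most frequent ones to the current query. The function name, stopword list, and example conversation are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter
import re

# Minimal illustrative stopword list (an assumption, not the paper's)
STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "was", "and", "what", "who", "when"}

def expand_query(current_query, history_turns, top_k=3):
    """Expand a conversational query with the top-k most frequent
    non-stopword terms from the conversation history (hypothetical sketch
    of frequency-based term importance estimation)."""
    tokens = re.findall(r"[a-z]+", " ".join(history_turns).lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    important = [term for term, _ in counts.most_common(top_k)]
    return current_query + " " + " ".join(important)

history = ["who founded the python language", "when was python first released"]
print(expand_query("what license does it use", history))
```

A real system would weight terms with corpus statistics (e.g. IDF) rather than raw frequency, but the expansion step itself has this shape.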
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.