Open-Retrieval Conversational Machine Reading
- URL: http://arxiv.org/abs/2102.08633v1
- Date: Wed, 17 Feb 2021 08:55:01 GMT
- Title: Open-Retrieval Conversational Machine Reading
- Authors: Yifan Gao, Jingjing Li, Michael R. Lyu, Irwin King
- Abstract summary: In conversational machine reading, systems need to interpret natural language rules, answer high-level questions, and ask follow-up clarification questions.
Existing works assume the rule text is provided for each user question, which neglects the essential retrieval step in real scenarios.
In this work, we propose and investigate an open-retrieval setting of conversational machine reading.
- Score: 80.13988353794586
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In conversational machine reading, systems need to interpret natural language
rules, answer high-level questions such as "May I qualify for VA health care
benefits?", and ask follow-up clarification questions whose answer is necessary
to answer the original question. However, existing works assume the rule text
is provided for each user question, which neglects the essential retrieval step
in real scenarios. In this work, we propose and investigate an open-retrieval
setting of conversational machine reading. In the open-retrieval setting, the
relevant rule texts are unknown, so a system needs to retrieve
question-relevant evidence from a collection of rule texts, and answer users'
high-level questions according to multiple retrieved rule texts in a
conversational manner. We propose MUDERN, a Multi-passage Discourse-aware
Entailment Reasoning Network which extracts conditions in the rule texts
through discourse segmentation, conducts multi-passage entailment reasoning to
answer user questions directly, or asks follow-up clarification questions to
elicit more information. On our newly created OR-ShARC dataset, MUDERN achieves
state-of-the-art performance, outperforming existing single-passage
conversational machine reading models as well as a new multi-passage
conversational machine reading baseline by a large margin. In addition, we
conduct in-depth analyses to provide new insights into this new setting and our
model.
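To make the open-retrieval setting concrete, the sketch below walks through the loop the abstract describes: retrieve question-relevant rule texts, split them into condition-like units, and either answer directly or ask a follow-up clarification question. It is a minimal toy illustration with a hypothetical two-rule corpus; the TF-IDF scorer, clause splitter, and keyword-overlap check stand in for MUDERN's neural retrieval, discourse segmentation, and entailment-reasoning modules, and none of the names below come from the paper.

```python
# Toy sketch of the open-retrieval conversational machine reading loop.
# All rules, questions, and heuristics are illustrative stand-ins.
from collections import Counter
import math
import re

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_retrieve(query, corpus, top_k=2):
    """Score each rule text against the query with a simple TF-IDF overlap."""
    docs = [Counter(tokenize(d)) for d in corpus]
    df = Counter(t for d in docs for t in d)        # document frequency per term
    n = len(corpus)
    scores = []
    for i, d in enumerate(docs):
        s = sum(d[t] * math.log((n + 1) / (df[t] + 1))
                for t in tokenize(query) if t in d)
        scores.append((s, i))
    return [corpus[i] for _, i in sorted(scores, reverse=True)[:top_k]]

def segment_conditions(rule_text):
    """Stand-in for discourse segmentation: split a rule into clause-level units."""
    parts = re.split(r"[;.]|\bif\b|\band\b|\bunless\b", rule_text)
    return [p.strip() for p in parts if p.strip()]

def decide(conditions, dialogue_history):
    """Stand-in for multi-passage entailment reasoning: treat a condition as
    satisfied when the dialogue history mentions any of its content words."""
    history_tokens = set(tokenize(" ".join(dialogue_history)))
    unresolved = [c for c in conditions if not set(tokenize(c)) & history_tokens]
    if not unresolved:
        return "answer", "Yes"                      # every condition is entailed
    return "ask", f"Follow-up: {unresolved[0]}?"    # clarify the first open condition

if __name__ == "__main__":
    rules = [
        "You may qualify for VA health care benefits if you served on active duty "
        "and you did not receive a dishonorable discharge.",
        "Student loan relief applies if you are enrolled at least half-time.",
    ]
    question = "May I qualify for VA health care benefits?"
    history = ["I served on active duty for three years."]

    retrieved = tfidf_retrieve(question, rules, top_k=1)
    conditions = [c for rule in retrieved for c in segment_conditions(rule)]
    action, text = decide(conditions, history)
    print(action, text)  # -> ask Follow-up: you did not receive a dishonorable discharge?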
Related papers
- Conversational QA Dataset Generation with Answer Revision [2.5838973036257458]
We introduce a novel framework that extracts question-worthy phrases from a passage and then generates corresponding questions considering previous conversations.
Our framework revises the extracted answers after generating questions so that answers exactly match paired questions.
arXiv Detail & Related papers (2022-09-23T04:05:38Z)
- Evaluating Mixed-initiative Conversational Search Systems via User Simulation [9.066817876491053]
We propose a conversational User Simulator, called USi, for automatic evaluation of such search systems.
We show that responses generated by USi are both in line with the underlying information need and comparable to human-generated answers.
arXiv Detail & Related papers (2022-04-17T16:27:33Z)
- BERT-CoQAC: BERT-based Conversational Question Answering in Context [10.811729691130349]
We introduce a framework based on a publicly available pre-trained language model, BERT, for incorporating history turns into the system (a toy sketch of one way to do this follows this list).
Experimental results show that our framework is comparable in performance with state-of-the-art models on the QuAC leaderboard.
arXiv Detail & Related papers (2021-04-23T03:05:17Z)
- Towards Data Distillation for End-to-end Spoken Conversational Question Answering [65.124088336738]
We propose a new Spoken Conversational Question Answering (SCQA) task.
SCQA aims to enable QA systems to model complex dialogue flows given speech utterances and text corpora.
Our main objective is to build a QA system to deal with conversational questions both in spoken and text forms.
arXiv Detail & Related papers (2020-10-18T05:53:39Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- Guided Transformer: Leveraging Multiple External Sources for Representation Learning in Conversational Search [36.64582291809485]
Asking clarifying questions in response to ambiguous or faceted queries has been recognized as a useful technique for various information retrieval systems.
In this paper, we enrich the representations learned by Transformer networks using a novel attention mechanism from external information sources.
Our experiments use a public dataset for search clarification and demonstrate significant improvements compared to competitive baselines.
arXiv Detail & Related papers (2020-06-13T03:24:53Z)
- Explicit Memory Tracker with Coarse-to-Fine Reasoning for Conversational Machine Reading [177.50355465392047]
We present a new framework for conversational machine reading that comprises a novel Explicit Memory Tracker (EMT).
Our framework generates clarification questions by adopting a coarse-to-fine reasoning strategy.
EMT achieves new state-of-the-art results of 74.6% micro-averaged decision accuracy and 49.5 BLEU4.
arXiv Detail & Related papers (2020-05-26T02:21:31Z)
- Open-Retrieval Conversational Question Answering [62.11228261293487]
We introduce an open-retrieval conversational question answering (ORConvQA) setting, where we learn to retrieve evidence from a large collection before extracting answers.
We build an end-to-end system for ORConvQA, featuring a retriever, a reranker, and a reader that are all based on Transformers.
arXiv Detail & Related papers (2020-05-22T19:39:50Z)
- Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals (a rough sketch of this expansion idea follows this list).
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
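For the BERT-CoQAC entry above, one common way to incorporate history turns is simply to concatenate the most recent turns with the current question and the passage into a single encoder input. The toy sketch below only illustrates that idea; the special-token layout, two-turn window, and function name are assumptions for illustration, not the paper's actual configuration.

```python
# Toy illustration of building a history-aware, BERT-style input string.
# The token layout and turn window are illustrative assumptions.
def build_history_aware_input(history, question, passage, max_turns=2):
    """Concatenate the last few (question, answer) turns with the current
    question and the passage into a single BERT-style input string."""
    kept = history[-max_turns:]                              # keep the most recent turns
    history_text = " [SEP] ".join(f"{q} {a}" for q, a in kept)
    segments = [s for s in (history_text, question, passage) if s]
    return "[CLS] " + " [SEP] ".join(segments) + " [SEP]"

if __name__ == "__main__":
    history = [("Who wrote the rule?", "The Department of Veterans Affairs."),
               ("When does it apply?", "After active-duty service.")]
    print(build_history_aware_input(history, "Am I eligible?",
                                    "You may qualify if you served on active duty."))
```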
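For the multi-stage conversational passage retrieval entry above, the frequency-based term importance idea can be illustrated by expanding an underspecified query with frequent content terms from earlier turns. The sketch below is a rough approximation under that assumption; the stopword list, scoring heuristic, and function name are illustrative and not taken from the paper.

```python
# Rough sketch of frequency-based conversational query expansion.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "is", "are", "was", "it", "of", "to", "and",
             "what", "how", "do", "does", "in", "for", "about", "i", "you"}

def tokenize(text):
    return [t for t in re.findall(r"[a-z0-9]+", text.lower()) if t not in STOPWORDS]

def expand_query(current_query, context_turns, top_k=3):
    """Append the most frequent content terms from earlier turns that the
    current (possibly underspecified) query does not already contain."""
    context_counts = Counter(t for turn in context_turns for t in tokenize(turn))
    present = set(tokenize(current_query))
    expansion = [t for t, _ in context_counts.most_common()
                 if t not in present][:top_k]
    return current_query + " " + " ".join(expansion)

if __name__ == "__main__":
    turns = ["Tell me about VA health care benefits.",
             "Who is eligible for VA health care?"]
    print(expand_query("How do I apply?", turns))  # -> "How do I apply? va health care"
```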
This list is automatically generated from the titles and abstracts of the papers on this site.