Bridging The Gap: Entailment Fused-T5 for Open-retrieval Conversational Machine Reading Comprehension
- URL: http://arxiv.org/abs/2212.09353v1
- Date: Mon, 19 Dec 2022 10:38:30 GMT
- Title: Bridging The Gap: Entailment Fused-T5 for Open-retrieval Conversational Machine Reading Comprehension
- Authors: Xiao Zhang, Heyan Huang, Zewen Chi, Xian-Ling Mao
- Abstract summary: Open-retrieval conversational machine reading comprehension simulates real-life conversational interaction scenarios.
Recent studies have explored methods to reduce the information gap between decision-making and question generation.
We propose a novel one-stage end-to-end framework, called Entailment Fused-T5 (EFT), to bridge the information gap between decision-making and generation.
- Score: 48.529698533726496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open-retrieval conversational machine reading comprehension (OCMRC) simulates
real-life conversational interaction scenes. Machines are required to make a
decision of "Yes/No/Inquire" or generate a follow-up question when the decision
is "Inquire" based on retrieved rule texts, user scenario, user question, and
dialogue history. Recent studies have explored methods to reduce the
information gap between decision-making and question generation and thus
improve generation performance. However, the information gap persists because
these pipeline structures remain limited to three stages: decision-making,
span extraction, and question rephrasing. Decision-making and generation
reason separately, and the entailment reasoning used in decision-making is
difficult to share across all stages. To tackle this problem, we propose a
novel one-stage end-to-end framework, called Entailment Fused-T5 (EFT), to
bridge the information gap between decision-making and generation through a
global understanding of the input. Extensive experimental results
demonstrate that our proposed framework achieves new state-of-the-art
performance on the OR-ShARC benchmark.
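The one-stage setup the abstract describes, a single model that either emits a "Yes/No" decision or an "Inquire" decision plus a follow-up question from retrieved rule texts, scenario, question, and dialogue history, can be sketched at the input/output level. The linearization format, marker tokens, and function names below are illustrative assumptions for a text-to-text model of this kind, not the authors' actual EFT implementation:

```python
# Hypothetical sketch of OCMRC input linearization and output interpretation
# for a one-stage text-to-text model (format and markers are assumptions,
# not the published EFT design).

DECISIONS = {"Yes", "No", "Inquire"}

def build_input(rule_texts, scenario, question, history):
    """Flatten the OCMRC context into one source string for the encoder.

    history is a list of (follow-up question, answer) pairs from prior turns.
    """
    rules = " ".join(f"<rule> {r}" for r in rule_texts)
    turns = " ".join(f"<q> {q} <a> {a}" for q, a in history)
    return (f"question: {question} scenario: {scenario} "
            f"history: {turns} rules: {rules}")

def interpret_output(decoded):
    """Map a decoded target string to (decision, follow_up_question).

    A bare "Yes" or "No" is a final decision; anything prefixed with
    "Inquire:" carries the generated follow-up question.
    """
    tokens = decoded.split()
    head = tokens[0].strip(":") if tokens else ""
    if head in DECISIONS and head != "Inquire":
        return head, None
    # Everything after "Inquire:" is treated as the follow-up question.
    follow_up = decoded.partition("Inquire:")[2].strip() or None
    return "Inquire", follow_up
```

In this sketch, fusing decision and generation into one decoded sequence is what lets a single decoder share entailment evidence across both sub-tasks, instead of splitting them over pipeline stages.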
Related papers
- Bridging Context Gaps: Leveraging Coreference Resolution for Long Contextual Understanding [28.191029786204624]
We introduce the Long Question Coreference Adaptation (LQCA) method to enhance the performance of large language models (LLMs).
This framework focuses on coreference resolution tailored to long contexts, allowing the model to identify and manage references effectively.
The framework provides easier-to-handle partitions for LLMs, promoting better understanding.
arXiv Detail & Related papers (2024-10-02T15:39:55Z)
- Thread: A Logic-Based Data Organization Paradigm for How-To Question Answering with Retrieval Augmented Generation [49.36436704082436]
How-to questions are integral to decision-making processes and require dynamic, step-by-step answers.
We propose Thread, a novel data organization paradigm aimed at enabling current systems to handle how-to questions more effectively.
arXiv Detail & Related papers (2024-06-19T09:14:41Z)
- Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
arXiv Detail & Related papers (2023-11-16T00:18:50Z)
- ET5: A Novel End-to-end Framework for Conversational Machine Reading Comprehension [48.529698533726496]
We propose an end-to-end framework for conversational machine reading comprehension based on entailment reasoning T5 (ET5).
Despite the lightweight design of our proposed framework, experimental results show that ET5 achieves new state-of-the-art results on the ShARC leaderboard with a BLEU-4 score of 55.2.
arXiv Detail & Related papers (2022-09-23T08:58:03Z)
- Smoothing Dialogue States for Open Conversational Machine Reading [70.83783364292438]
We propose an effective gating strategy that smooths the two dialogue states in a single decoder, bridging decision making and question generation.
Experiments on the OR-ShARC dataset show the effectiveness of our method, which achieves new state-of-the-art results.
arXiv Detail & Related papers (2021-08-28T08:04:28Z)
- Local Explanation of Dialogue Response Generation [77.68077106724522]
Local explanation of response generation (LERG) is proposed to gain insights into the reasoning process of a generation model.
LERG views the sequence prediction as uncertainty estimation of a human response and then creates explanations by perturbing the input and calculating the certainty change over the human response.
Our results show that our method consistently outperforms other widely used methods by 4.4-12.8% on the proposed automatic and human evaluation metrics for this new task.
arXiv Detail & Related papers (2021-06-11T17:58:36Z)
- Explicit Memory Tracker with Coarse-to-Fine Reasoning for Conversational Machine Reading [177.50355465392047]
We present a new framework for conversational machine reading built around a novel Explicit Memory Tracker (EMT).
Our framework generates clarification questions by adopting a coarse-to-fine reasoning strategy.
EMT achieves new state-of-the-art results of 74.6% micro-averaged decision accuracy and 49.5 BLEU4.
arXiv Detail & Related papers (2020-05-26T02:21:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.