Summarize-then-Answer: Generating Concise Explanations for Multi-hop
Reading Comprehension
- URL: http://arxiv.org/abs/2109.06853v1
- Date: Tue, 14 Sep 2021 17:44:34 GMT
- Title: Summarize-then-Answer: Generating Concise Explanations for Multi-hop
Reading Comprehension
- Authors: Naoya Inoue, Harsh Trivedi, Steven Sinha, Niranjan Balasubramanian and
Kentaro Inui
- Abstract summary: We propose to generate a question-focused, abstractive summary of input paragraphs and then feed it to an RC system.
Given a limited amount of human-annotated abstractive explanations, we train the abstractive explainer in a semi-supervised manner.
Experiments demonstrate that the proposed abstractive explainer can generate more compact explanations than an extractive explainer with limited supervision.
- Score: 35.65149154213124
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How can we generate concise explanations for multi-hop Reading Comprehension
(RC)? The current strategies of identifying supporting sentences can be seen as
an extractive question-focused summarization of the input text. However, these
extractive explanations are not necessarily concise, i.e., not minimally
sufficient for answering a question. Instead, we advocate for an abstractive
approach, where we propose to generate a question-focused, abstractive summary
of input paragraphs and then feed it to an RC system. Given a limited amount of
human-annotated abstractive explanations, we train the abstractive explainer in
a semi-supervised manner, where we start from the supervised model and then
train it further through trial and error maximizing a conciseness-promoted
reward function. Our experiments demonstrate that the proposed abstractive
explainer can generate more compact explanations than an extractive explainer
with limited supervision (only 2k instances) while maintaining sufficiency.
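The training recipe in the abstract (warm-start from a supervised model, then refine by trial and error against a reward that promotes conciseness) can be sketched as a reward function that trades off sufficiency against summary length. This is a minimal illustrative sketch: the function name, the exact reward shape, and the `LAMBDA` trade-off constant are assumptions, not the paper's actual formulation.

```python
# Hypothetical conciseness-promoting reward, assumed form:
# reward = sufficiency - LAMBDA * length. A summary is "sufficient"
# when the downstream RC system answers correctly from it alone.

LAMBDA = 0.05  # length penalty per token (illustrative value)

def conciseness_reward(answer_correct: bool, summary_tokens: int) -> float:
    """Score a generated summary for trial-and-error fine-tuning.

    answer_correct: whether the RC system answered correctly
                    given only this summary (sufficiency signal).
    summary_tokens: length of the summary in tokens.
    """
    sufficiency = 1.0 if answer_correct else 0.0
    return sufficiency - LAMBDA * summary_tokens

# A short sufficient summary beats a long sufficient one,
# and an insufficient summary scores lowest of all.
short_good = conciseness_reward(True, 10)
long_good = conciseness_reward(True, 40)
insufficient = conciseness_reward(False, 10)
```

Under this shape, maximizing expected reward (e.g., with a policy-gradient method) pushes the explainer toward summaries that stay sufficient for answering while shrinking in length, matching the compactness-with-sufficiency result reported in the abstract.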
Related papers
- HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale
Supervision [118.0818807474809]
This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision.
Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document.
arXiv Detail & Related papers (2023-05-23T16:53:49Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- EASE: Extractive-Abstractive Summarization with Explanations [18.046254486733186]
We present an explainable summarization system based on the Information Bottleneck principle.
Inspired by previous research that humans use a two-stage framework to summarize long documents, our framework first extracts a pre-defined amount of evidence spans as explanations.
We show that explanations from our framework are more relevant than simple baselines, without substantially sacrificing the quality of the generated summary.
arXiv Detail & Related papers (2021-05-14T17:45:06Z)
- Discrete Reasoning Templates for Natural Language Understanding [79.07883990966077]
We present an approach that reasons about complex questions by decomposing them to simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state of the art while being interpretable and requiring little supervision.
arXiv Detail & Related papers (2021-04-05T18:56:56Z)
- ExplanationLP: Abductive Reasoning for Explainable Science Question Answering [4.726777092009554]
This paper frames question answering as an abductive reasoning problem.
We construct plausible explanations for each choice and then select the candidate with the best explanation as the final answer.
Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer.
arXiv Detail & Related papers (2020-10-25T14:49:24Z)
- Multi-hop Inference for Question-driven Summarization [39.08269647808958]
We propose a novel question-driven abstractive summarization method, Multi-hop Selective Generator (MSG).
MSG incorporates multi-hop reasoning into question-driven summarization while also providing justifications for the generated summaries.
Experimental results show that the proposed method consistently outperforms state-of-the-art methods on two non-factoid QA datasets.
arXiv Detail & Related papers (2020-10-08T02:36:39Z)
- Exploring Explainable Selection to Control Abstractive Summarization [51.74889133688111]
We develop a novel framework that focuses on explainability.
A novel pair-wise matrix captures the sentence interactions, centrality, and attribute scores.
A sentence-deployed attention mechanism in the abstractor ensures the final summary emphasizes the desired content.
arXiv Detail & Related papers (2020-04-24T14:39:34Z)
- At Which Level Should We Extract? An Empirical Analysis on Extractive Document Summarization [110.54963847339775]
We show that unnecessary content and redundancy issues arise when extracting full sentences.
We propose extracting sub-sentential units based on the constituency parsing tree.
arXiv Detail & Related papers (2020-04-06T13:35:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.