HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision
- URL: http://arxiv.org/abs/2305.14237v1
- Date: Tue, 23 May 2023 16:53:49 GMT
- Title: HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision
- Authors: Wenting Zhao and Justin T. Chiu and Claire Cardie and Alexander M. Rush
- Abstract summary: This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision.
Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document.
- Score: 118.0818807474809
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable multi-hop question answering (QA) not only predicts answers but
also identifies rationales, i.e., subsets of input sentences used to derive the
answers. This problem has been extensively studied under the supervised
setting, where both answer and rationale annotations are given. Because
rationale annotations are expensive to collect and not always available, recent
efforts have been devoted to developing methods that do not rely on supervision
for rationales. However, such methods have limited capacity to model
interactions between sentences, let alone reasoning across multiple documents.
This work proposes a principled, probabilistic approach for training
explainable multi-hop QA systems without rationale supervision. Our approach
performs multi-hop reasoning by explicitly modeling rationales as sets,
enabling the model to capture interactions between documents and sentences
within a document. Experimental results show that our approach is more accurate
at selecting rationales than the previous methods, while maintaining similar
accuracy in predicting answers.
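The abstract describes the approach only at a high level. As a rough illustration of the general idea of treating the rationale as a latent set and marginalizing the answer probability over candidate sets, consider the minimal sketch below; the function names (rationale_logprob, answer_logprob), the brute-force enumeration of small sentence subsets, and all other details are assumptions for illustration, not the paper's actual algorithm or implementation.

```python
# Minimal sketch: marginalize the answer probability over latent rationale sets.
# rationale_logprob and answer_logprob stand in for model components that are
# hypothetical here; only the general marginalization idea is illustrated.

import itertools
import math
from typing import Callable, Sequence


def marginal_answer_logprob(
    sentences: Sequence[str],
    question: str,
    answer: str,
    rationale_logprob: Callable[[Sequence[str], str], float],   # log p(R | question, sentences)
    answer_logprob: Callable[[str, str, Sequence[str]], float],  # log p(answer | question, R)
    max_set_size: int = 2,
) -> float:
    """Approximate log p(answer | question) = log sum_R p(R | question) p(answer | question, R),
    where R ranges over small candidate sets of input sentences."""
    log_terms = []
    for k in range(1, max_set_size + 1):
        for idx in itertools.combinations(range(len(sentences)), k):
            rationale = [sentences[i] for i in idx]
            log_terms.append(
                rationale_logprob(rationale, question)
                + answer_logprob(answer, question, rationale)
            )
    # log-sum-exp over all candidate rationale sets, for numerical stability
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms))
```

Enumerating every subset is intractable for realistic inputs; a practical system would restrict attention to a small pool of candidate rationales (for example, high-scoring sentences per document), but how the paper constructs and scores these sets is not specified in the abstract above.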
Related papers
- Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- Reasoning Circuits: Few-shot Multihop Question Generation with Structured Rationales [11.068901022944015]
Chain-of-thought rationale generation has been shown to improve performance on multi-step reasoning tasks.
We introduce a new framework for applying chain-of-thought inspired structured rationale generation to multi-hop question generation under a very low supervision regime.
arXiv Detail & Related papers (2022-11-15T19:36:06Z)
- Locate Then Ask: Interpretable Stepwise Reasoning for Multi-hop Question Answering [71.49131159045811]
Multi-hop reasoning requires aggregating multiple documents to answer a complex question.
Existing methods usually decompose the multi-hop question into simpler single-hop questions.
We propose an interpretable stepwise reasoning framework to incorporate both single-hop supporting sentence identification and single-hop question generation.
arXiv Detail & Related papers (2022-08-22T13:24:25Z)
- Interlock-Free Multi-Aspect Rationalization for Text Classification [33.33452117387646]
We show how to address the interlocking problem in the multi-aspect setting.
We propose a multi-stage training method incorporating an additional self-supervised contrastive loss.
Empirical results on the beer review dataset show that our method significantly improves rationalization performance.
arXiv Detail & Related papers (2022-05-13T16:38:38Z)
- ReasonBERT: Pre-trained to Reason with Distant Supervision [17.962648165675684]
We present ReasonBERT, a pre-training method that augments language models with the ability to reason over long-range relations and multiple, possibly hybrid contexts.
Different types of reasoning are simulated, including intersecting multiple pieces of evidence, bridging from one piece of evidence to another, and detecting unanswerable cases.
arXiv Detail & Related papers (2021-09-10T14:49:44Z)
- Robustifying Multi-hop QA through Pseudo-Evidentiality Training [28.584236042324896]
We study the bias problem in multi-hop question answering models: answering correctly without correct reasoning.
We propose a new approach to learn evidentiality, deciding whether an answer prediction is supported by correct evidence.
arXiv Detail & Related papers (2021-07-07T14:15:14Z)
- Prompting Contrastive Explanations for Commonsense Reasoning Tasks [74.7346558082693]
Large pretrained language models (PLMs) can achieve near-human performance on commonsense reasoning tasks.
We show how to use these same models to generate human-interpretable evidence.
arXiv Detail & Related papers (2021-06-12T17:06:13Z)
- Generative Context Pair Selection for Multi-hop Question Answering [60.74354009152721]
We propose a generative context selection model for multi-hop question answering.
Our proposed generative passage selection model performs better (4.9% higher than the baseline) on an adversarial held-out set.
arXiv Detail & Related papers (2021-04-18T07:00:48Z)