ExplanationLP: Abductive Reasoning for Explainable Science Question Answering
- URL: http://arxiv.org/abs/2010.13128v1
- Date: Sun, 25 Oct 2020 14:49:24 GMT
- Title: ExplanationLP: Abductive Reasoning for Explainable Science Question Answering
- Authors: Mokanarangan Thayaparan, Marco Valentino, André Freitas
- Abstract summary: This paper frames question answering as an abductive reasoning problem.
We construct plausible explanations for each choice and then select the candidate with the best explanation as the final answer.
Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer.
- Score: 4.726777092009554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel approach for answering and explaining multiple-choice
science questions by reasoning on grounding and abstract inference chains. This
paper frames question answering as an abductive reasoning problem, constructing
plausible explanations for each choice and then selecting the candidate with
the best explanation as the final answer. Our system, ExplanationLP, elicits
explanations by constructing a weighted graph of relevant facts for each
candidate answer and extracting the facts that satisfy certain structural and
semantic constraints. To extract the explanations, we employ a linear
programming formalism designed to select the optimal subgraph. The graphs'
weighting function is composed of a set of parameters, which we fine-tune to
optimize answer selection performance. We carry out our experiments on the
WorldTree and ARC-Challenge corpora to empirically demonstrate the following
conclusions: (1) Grounding-Abstract inference chains provide the semantic
control to perform explainable abductive reasoning; (2) efficiency and
robustness in learning with fewer parameters, outperforming contemporary
explainable and transformer-based approaches in a similar setting; (3)
generalisability, outperforming SOTA explainable approaches on general
science question sets.
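To make the optimisation concrete, here is a minimal sketch of the subgraph-selection idea using the PuLP ILP library. It is an illustration under stated assumptions, not the authors' formulation: binary variables mark which facts and edges enter the explanation, toy edge weights stand in for the paper's learned weighting function, and a simple fact budget stands in for its structural and semantic constraints. All fact names, weights, and the `budget` parameter are hypothetical.

```python
# Hedged sketch: abductive answer selection via ILP subgraph extraction.
# The graphs, weights, and budget are illustrative assumptions, not the
# paper's actual constraints or learned parameters.
from pulp import (LpProblem, LpVariable, LpMaximize, LpBinary,
                  lpSum, value, PULP_CBC_CMD)

def explanation_score(facts, weighted_edges, budget=2):
    """Pick the best explanation subgraph for one candidate answer."""
    prob = LpProblem("explanation_subgraph", LpMaximize)
    x = {f: LpVariable(f"x_{f}", cat=LpBinary) for f in facts}   # fact chosen?
    y = {e: LpVariable(f"y_{e[0]}_{e[1]}", cat=LpBinary)
         for e in weighted_edges}                                # edge kept?

    # Objective: total relevance weight of edges inside the chosen subgraph.
    prob += lpSum(w * y[e] for e, w in weighted_edges.items())

    # An edge only counts if both of its endpoint facts are selected.
    for (u, v), ye in y.items():
        prob += ye <= x[u]
        prob += ye <= x[v]

    # Toy structural constraint: explanations contain at most `budget` facts.
    prob += lpSum(x.values()) <= budget

    prob.solve(PULP_CBC_CMD(msg=False))
    chosen = [f for f, var in x.items() if var.value() >= 0.5]
    return value(prob.objective), chosen

# Abductive answer selection: solve one ILP per candidate, keep the best.
candidates = {
    "answer_A": (["f1", "f2", "f3"], {("f1", "f2"): 0.9, ("f2", "f3"): 0.4}),
    "answer_B": (["f4", "f5"], {("f4", "f5"): 0.3}),
}
print(max(candidates, key=lambda a: explanation_score(*candidates[a])[0]))
# -> answer_A
```

In the paper the edge weights come from a parameterised weighting function fine-tuned for answer selection; here they are fixed constants purely for illustration.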
Related papers
- Leveraging Structured Information for Explainable Multi-hop Question Answering and Reasoning [14.219239732584368]
In this work, we investigate constructing and leveraging extracted semantic structures (graphs) for multi-hop question answering.
Empirical results and human evaluations show that our framework generates more faithful reasoning chains and substantially improves QA performance on two benchmark datasets.
Date: 2023-11-07T05:32:39Z
- Axiomatic Aggregations of Abductive Explanations [13.277544022717404]
Recent criticisms of the robustness of post-hoc model-approximation explanation methods have led to the rise of model-precise abductive explanations.
In such cases, providing a single abductive explanation can be insufficient; on the other hand, providing all valid abductive explanations can be incomprehensible due to their size.
We propose three aggregation methods: two based on power indices from cooperative game theory and a third based on a well-known measure of causal strength (a minimal power-index sketch appears after this list).
Date: 2023-09-29T04:06:10Z
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
Date: 2023-02-09T18:02:34Z
- Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
Date: 2022-11-25T04:40:47Z
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
Date: 2022-10-22T16:01:13Z
- The Unreliability of Explanations in Few-Shot In-Context Learning [50.77996380021221]
We focus on two NLP tasks that involve reasoning over text, namely question answering and natural language inference.
We show that explanations judged as good by humans (those that are logically consistent with the input) usually indicate more accurate predictions.
We present a framework for calibrating model predictions based on the reliability of the explanations.
Date: 2022-05-06T17:57:58Z
- Explanatory Paradigms in Neural Networks [18.32369721322249]
We present a substantial expansion of the study of explainability in neural networks, treating explanations as answers to reasoning-based questions.
The answers to these questions are observed correlations, observed counterfactuals, and observed contrastive explanations respectively.
The term observed refers to the specific case of post-hoc explainability, when an explanatory technique explains the decision $P$ after a trained neural network has made the decision $P$.
Date: 2022-02-24T00:22:11Z
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
Date: 2022-01-27T15:20:32Z
- Ranking Facts for Explaining Answers to Elementary Science Questions [1.4091801425319965]
In elementary science exams, students select one answer, typically from among four choices, and can explain why they made that particular choice.
We consider the novel task of generating explanations for answers from human-authored facts.
Explanations are created from a human-annotated set of nearly 5,000 candidate facts in the WorldTree corpus.
Date: 2021-10-18T06:15:11Z
- Discrete Reasoning Templates for Natural Language Understanding [79.07883990966077]
We present an approach that reasons about complex questions by decomposing them into simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state-of-the-art while being interpretable and requiring little supervision.
Date: 2021-04-05T18:56:56Z
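As referenced in the Axiomatic Aggregations entry above, a power index from cooperative game theory can turn a set of abductive explanations into per-feature importances. The sketch below is an assumption-laden illustration rather than that paper's method: it declares a coalition of features "winning" when it contains at least one complete abductive explanation, then computes a Banzhaf-style index by exhaustive enumeration (feasible only for small feature sets). All function and feature names are hypothetical.

```python
# Hedged sketch: Banzhaf-style aggregation of several abductive
# explanations. The "coalition wins iff it contains a complete
# explanation" game is an illustrative assumption, not the paper's
# exact axiomatisation.
from itertools import combinations

def banzhaf_over_explanations(features, explanations):
    """Return {feature: index} for the 'contains an explanation' game."""
    wins = lambda s: any(exp <= s for exp in explanations)
    n = len(features)
    scores = {}
    for f in features:
        rest = [g for g in features if g != f]
        swings = 0
        # Count coalitions S (excluding f) that f flips from losing to winning.
        for r in range(len(rest) + 1):
            for combo in combinations(rest, r):
                s = set(combo)
                if not wins(s) and wins(s | {f}):
                    swings += 1
        scores[f] = swings / 2 ** (n - 1)  # normalise over all coalitions
    return scores

# Toy run: two abductive explanations for the same prediction.
feats = ["age", "income", "tenure"]
exps = [{"age", "income"}, {"tenure"}]
print(banzhaf_over_explanations(feats, exps))
# -> {'age': 0.25, 'income': 0.25, 'tenure': 0.75}
```

Here `tenure` scores highest because it is sufficient on its own, while `age` and `income` matter only jointly; the actual aggregations additionally satisfy formal axioms not modelled in this toy game.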
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.