Improving Neural Model Performance through Natural Language Feedback on
Their Explanations
- URL: http://arxiv.org/abs/2104.08765v1
- Date: Sun, 18 Apr 2021 08:10:01 GMT
- Title: Improving Neural Model Performance through Natural Language Feedback on
Their Explanations
- Authors: Aman Madaan, Niket Tandon, Dheeraj Rajagopal, Yiming Yang, Peter
Clark, Keisuke Sakaguchi, Ed Hovy
- Abstract summary: We introduce MERCURIE - an interactive system that refines its explanations for a given reasoning task by getting human feedback in natural language.
Our approach generates graphs that have 40% fewer inconsistencies than the off-the-shelf system.
- Score: 38.96890526935312
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A class of explainable NLP models for reasoning tasks support their decisions
by generating free-form or structured explanations, but what happens when these
supporting structures contain errors? Our goal is to allow users to
interactively correct explanation structures through natural language feedback.
We introduce MERCURIE - an interactive system that refines its explanations for
a given reasoning task by getting human feedback in natural language. Our
approach generates graphs that have 40% fewer inconsistencies as compared with
the off-the-shelf system. Further, simply appending the corrected explanation
structures to the output leads to a gain of 1.2 points in accuracy on
defeasible reasoning across all three domains. We release a dataset of over
450k graphs for defeasible reasoning generated by our system at
https://tinyurl.com/mercurie .
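The interact-and-refine loop the abstract describes can be summarized in a short sketch. The callables below (graph generation, feedback collection, refinement) are hypothetical placeholders for MERCURIE's components, not the released system:
```python
# Hypothetical skeleton of the refine-with-feedback loop; generate_graph,
# get_feedback, and refine_graph are placeholder callables, not MERCURIE's API.
from typing import Callable, Optional

def refine_explanation(task_input: str,
                       generate_graph: Callable[[str], dict],
                       get_feedback: Callable[[str, dict], Optional[str]],
                       refine_graph: Callable[[str, dict, str], dict],
                       max_rounds: int = 3) -> dict:
    graph = generate_graph(task_input)               # off-the-shelf explanation graph
    for _ in range(max_rounds):
        feedback = get_feedback(task_input, graph)   # natural-language critique, or None
        if feedback is None:                         # user accepts the explanation
            break
        graph = refine_graph(task_input, graph, feedback)  # regenerate conditioned on feedback
    return graph

def augment_with_explanation(task_input: str, graph_text: str) -> str:
    # The abstract's downstream gain comes from simply appending the corrected
    # explanation structure to the input of the defeasible-reasoning model.
    return f"{task_input} [EXPLANATION] {graph_text}"
```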
Related papers
- Towards More Faithful Natural Language Explanation Using Multi-Level
Contrastive Learning in VQA [7.141288053123662]
Natural language explanation in visual question answering (VQA-NLE) aims to explain the decision-making process of models by generating natural language sentences, increasing users' trust in black-box systems.
Existing post-hoc explanations are not always aligned with human logical inference, suffering from: 1) deductive unsatisfiability, where the generated explanations do not logically lead to the answer; 2) factual inconsistency, where the model falsifies its counterfactual explanation for answers without considering the facts in images; and 3) semantic perturbation insensitivity, where the model cannot recognize the semantic changes caused by small perturbations.
arXiv Detail & Related papers (2023-12-21T05:51:55Z)
- S3C: Semi-Supervised VQA Natural Language Explanation via Self-Critical Learning [46.787034512390434]
The VQA Natural Language Explanation (VQA-NLE) task aims to explain the decision-making process of VQA models in natural language.
We propose a new method, Semi-Supervised VQA-NLE via Self-Critical Learning (S3C).
S3C evaluates candidate explanations with answering rewards to improve the logical consistency between answers and rationales.
arXiv Detail & Related papers (2023-09-05T11:47:51Z)
- Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering [58.64831511644917]
We introduce an interpretable-by-design model that factors model decisions into intermediate human-legible explanations.
We show that our inherently interpretable system improves by 4.64% over a comparable black-box system on reasoning-focused questions.
arXiv Detail & Related papers (2023-05-24T08:33:15Z)
- The Unreliability of Explanations in Few-Shot In-Context Learning [50.77996380021221]
We focus on two NLP tasks that involve reasoning over text, namely question answering and natural language inference.
We show that explanations judged as good by humans--those that are logically consistent with the input--usually indicate more accurate predictions.
We present a framework for calibrating model predictions based on the reliability of the explanations.
arXiv Detail & Related papers (2022-05-06T17:57:58Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to existing work, PGExplainer has better generalization ability and can easily be used in an inductive setting.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
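To make the parameterized-explainer idea above concrete, here is a minimal PyTorch sketch in the spirit of PGExplainer: an MLP scores every edge from its endpoint embeddings, a concrete relaxation keeps the mask differentiable, and the loss preserves the frozen GNN's prediction while encouraging sparsity. The architecture and hyperparameters are illustrative assumptions, not the authors' implementation.
```python
# Illustrative PGExplainer-style edge scorer: an MLP maps each edge's endpoint
# embeddings to an importance logit, so explanations generalize across graphs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeMaskExplainer(nn.Module):
    def __init__(self, emb_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * emb_dim, hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, node_emb: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index                      # edge_index: (2, num_edges)
        pairs = torch.cat([node_emb[src], node_emb[dst]], dim=-1)
        return self.mlp(pairs).squeeze(-1)         # one logit per edge

def sample_soft_mask(logits: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    # Concrete (Gumbel-sigmoid) relaxation keeps the near-binary mask differentiable.
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log(1 - u)
    return torch.sigmoid((logits + noise) / temperature)

def explainer_loss(pred_masked: torch.Tensor, pred_full: torch.Tensor,
                   mask: torch.Tensor, sparsity: float = 0.05) -> torch.Tensor:
    # Keep the frozen GNN's prediction under the masked graph while encouraging sparsity.
    kl = F.kl_div(F.log_softmax(pred_masked, dim=-1),
                  F.softmax(pred_full, dim=-1), reduction="batchmean")
    return kl + sparsity * mask.mean()
```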
- ExplanationLP: Abductive Reasoning for Explainable Science Question Answering [4.726777092009554]
This paper frames question answering as an abductive reasoning problem.
We construct plausible explanations for each choice and then select the candidate with the best explanation as the final answer.
Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer.
arXiv Detail & Related papers (2020-10-25T14:49:24Z)
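A small, self-contained sketch of the abductive selection step described above for ExplanationLP: each candidate answer gets a weighted graph linking it to supporting facts, and the candidate whose explanation scores highest wins. The lexical-overlap weighting here is an assumption standing in for the paper's learned edge weights:
```python
# Toy abductive QA scorer in the spirit of ExplanationLP: weight candidate-fact
# edges, keep the top-k facts as the explanation, pick the best-scoring candidate.
# The Jaccard overlap weight is illustrative, not the paper's formulation.
from typing import List, Tuple

def overlap_weight(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))

def best_candidate(question: str, candidates: List[str], facts: List[str],
                   top_k: int = 3) -> Tuple[str, List[str]]:
    best_score, best_answer, best_expl = float("-inf"), None, []
    for cand in candidates:
        hypothesis = f"{question} {cand}"
        ranked = sorted(facts, key=lambda f: overlap_weight(hypothesis, f), reverse=True)
        explanation = ranked[:top_k]                              # kept facts
        score = sum(overlap_weight(hypothesis, f) for f in explanation)
        if score > best_score:
            best_score, best_answer, best_expl = score, cand, explanation
    return best_answer, best_expl
```
Replacing the overlap weight with learned relevance scores and adding constraints on which facts may be combined would bring this closer to the optimization-based selection the paper describes.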
- Towards Interpretable Natural Language Understanding with Explanations as Latent Variables [146.83882632854485]
We develop a framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.
Our framework treats natural language explanations as latent variables that model the underlying reasoning process of a neural model.
arXiv Detail & Related papers (2020-10-24T02:05:56Z)
- Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language? [86.60613602337246]
We introduce a leakage-adjusted simulatability (LAS) metric for evaluating NL explanations.
LAS measures how well explanations help an observer predict a model's output, while controlling for how explanations can directly leak the output.
We frame explanation generation as a multi-agent game and optimize explanations for simulatability while penalizing label leakage.
arXiv Detail & Related papers (2020-10-08T16:59:07Z)
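As a rough illustration of the leakage-adjusted idea described above, the sketch below computes an LAS-style score from pre-computed simulator outcomes; the field names are hypothetical, and the exact aggregation in the paper may differ:
```python
# Simplified LAS-style score: average the simulator's accuracy gain from seeing the
# explanation, computed separately for label-leaking and non-leaking explanations.
from statistics import mean
from typing import Dict, List

def las_score(records: List[Dict[str, bool]]) -> float:
    # Each record holds three booleans (hypothetical field names):
    #   sim_correct_xe - simulator matched the model's output given input + explanation
    #   sim_correct_x  - simulator matched the model's output given the input alone
    #   leaks          - simulator matched the model's output from the explanation alone
    def gain(group: List[Dict[str, bool]]) -> float:
        if not group:
            return 0.0
        return (mean(r["sim_correct_xe"] for r in group)
                - mean(r["sim_correct_x"] for r in group))
    leaking = [r for r in records if r["leaks"]]
    nonleaking = [r for r in records if not r["leaks"]]
    # Macro-averaging over the two subsets controls for explanations that score well
    # only by restating the predicted label.
    return 0.5 * (gain(leaking) + gain(nonleaking))
```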
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.