Exploring Automatically Perturbed Natural Language Explanations in
Relation Extraction
- URL: http://arxiv.org/abs/2305.15520v1
- Date: Wed, 24 May 2023 19:17:13 GMT
- Title: Exploring Automatically Perturbed Natural Language Explanations in
Relation Extraction
- Authors: Wanyun Cui, Xingran Chen
- Abstract summary: We find that corrupted explanations with diminished inductive biases can achieve competitive or superior performance compared to the original explanations.
Our findings furnish novel insights into the characteristics of natural language explanations.
- Score: 20.02647320786556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous research has demonstrated that natural language explanations provide
valuable inductive biases that guide models, thereby improving generalization
and data efficiency. In this paper, we undertake a
systematic examination of the effectiveness of these explanations. Remarkably,
we find that corrupted explanations with diminished inductive biases can
achieve competitive or superior performance compared to the original
explanations. Our findings furnish novel insights into the characteristics of
natural language explanations in the following ways: (1) The impact of
explanations varies across training styles and datasets; the previously
reported improvements are primarily observed in frozen language models.
(2) While previous research has attributed the effect of explanations solely to
their inductive biases, our study shows that the effect persists even when the
explanations are completely corrupted. We propose that the main effect is due
to the provision of additional context space. (3) Using the proposed
automatically perturbed context, we attain results comparable to those of
annotated explanations while being 20-30 times more computationally
efficient.
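The abstract outlines the core recipe only at a high level: corrupt an annotated explanation and keep it purely as additional context for the relation-extraction input. Below is a minimal sketch of that idea, assuming a token-shuffling perturbation and a simple [SEP]-joined prompt format; both the perturbation choice and the function names are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch: corrupt an explanation by destroying its word order, then
# append it to the relation-extraction instance as extra context.
import random


def perturb_explanation(explanation: str, seed: int = 0) -> str:
    """Return a corrupted explanation with its token order shuffled."""
    tokens = explanation.split()
    rng = random.Random(seed)
    rng.shuffle(tokens)
    return " ".join(tokens)


def build_input(sentence: str, head: str, tail: str, context: str) -> str:
    """Join the instance and the (perturbed) context into one model input."""
    return f"{sentence} [SEP] {head} ? {tail} [SEP] {context}"


if __name__ == "__main__":
    sentence = "Marie Curie was born in Warsaw."
    explanation = "The phrase 'was born in' links the person to their birthplace."
    corrupted = perturb_explanation(explanation)
    print(build_input(sentence, "Marie Curie", "Warsaw", corrupted))
```

Other corruption schemes (random token replacement, truncation) would fit the same interface; the point of the sketch is only that the corrupted string occupies extra context space rather than carrying the original inductive bias.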
Related papers
- FLamE: Few-shot Learning from Natural Language Explanations [12.496665033682202]
We present FLamE, a framework for learning from natural language explanations.
Experiments on natural language inference demonstrate effectiveness over strong baselines.
Human evaluation surprisingly reveals that the majority of generated explanations do not adequately justify classification decisions.
arXiv Detail & Related papers (2023-06-13T18:01:46Z) - Abductive Commonsense Reasoning Exploiting Mutually Exclusive
Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z) - Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z) - The Unreliability of Explanations in Few-Shot In-Context Learning [50.77996380021221]
We focus on two NLP tasks that involve reasoning over text, namely question answering and natural language inference.
We show that explanations judged as good by humans--those that are logically consistent with the input--usually indicate more accurate predictions.
We present a framework for calibrating model predictions based on the reliability of the explanations.
arXiv Detail & Related papers (2022-05-06T17:57:58Z) - Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z) - Are Training Resources Insufficient? Predict First Then Explain! [54.184609286094044]
We argue that the predict-then-explain (PtE) architecture is a more efficient approach from a modelling perspective.
We show that the PtE structure is the most data-efficient approach when explanation data are lacking.
arXiv Detail & Related papers (2021-08-29T07:04:50Z) - Reflective-Net: Learning from Explanations [3.6245632117657816]
This work provides first steps toward learning from explanations by capitalizing on explanations generated with existing explanation methods, i.e. Grad-CAM.
Learning from explanations combined with conventional labeled data yields significant improvements for classification in terms of accuracy and training time.
arXiv Detail & Related papers (2020-11-27T20:40:45Z) - Towards Interpretable Natural Language Understanding with Explanations
as Latent Variables [146.83882632854485]
We develop a framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.
Our framework treats natural language explanations as latent variables that model the underlying reasoning process of a neural model.
arXiv Detail & Related papers (2020-10-24T02:05:56Z)