Measuring Association Between Labels and Free-Text Rationales
- URL: http://arxiv.org/abs/2010.12762v4
- Date: Mon, 29 Aug 2022 20:13:18 GMT
- Title: Measuring Association Between Labels and Free-Text Rationales
- Authors: Sarah Wiegreffe, Ana Marasović, Noah A. Smith
- Abstract summary: In interpretable NLP, we require faithful rationales that reflect the model's decision-making process for an explained instance.
We demonstrate that pipelines, existing models for faithful extractive rationalization on information-extraction style tasks, do not extend as reliably to "reasoning" tasks requiring free-text rationales.
We turn to models that jointly predict and rationalize, a class of widely used high-performance models for free-text rationalization whose faithfulness is not yet established.
- Score: 60.58672852655487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In interpretable NLP, we require faithful rationales that reflect the model's decision-making process for an explained instance. While prior work focuses on extractive rationales (a subset of the input words), we investigate their less-studied counterpart: free-text natural language rationales. We demonstrate that pipelines, existing models for faithful extractive rationalization on information-extraction style tasks, do not extend as reliably to "reasoning" tasks requiring free-text rationales. We turn to models that jointly predict and rationalize, a class of widely used high-performance models for free-text rationalization whose faithfulness is not yet established. We define label-rationale association as a necessary property for faithfulness: the internal mechanisms of the model producing the label and the rationale must be meaningfully correlated. We propose two measurements to test this property: robustness equivalence and feature importance agreement. We find that state-of-the-art T5-based joint models exhibit both properties for rationalizing commonsense question-answering and natural language inference, indicating their potential for producing faithful free-text rationales.
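To make the first measurement concrete, here is a minimal Python sketch of robustness equivalence: perturb the input with increasing noise and check whether the label and the rationale degrade together. Everything below (the toy `predict` model, token-replacement noise, token-F1 rationale similarity) is an illustrative stand-in, not the paper's exact protocol, which uses trained T5-based joint models.

```python
import random
from statistics import mean

def predict(tokens):
    """Hypothetical stand-in for a joint predict-and-rationalize model
    (e.g., a T5-based model emitting a label and a free-text rationale)."""
    label = "entailed" if "because" in tokens else "neutral"
    rationale = [t for t in tokens if len(t) > 3]
    return label, rationale

def add_noise(tokens, p, vocab):
    """Corrupt the input: replace each token with a random one w.p. p."""
    return [random.choice(vocab) if random.random() < p else t for t in tokens]

def token_f1(a, b):
    """Crude rationale-similarity proxy: F1 over token sets."""
    overlap = len(set(a) & set(b))
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(set(b)), overlap / len(set(a))
    return 2 * prec * rec / (prec + rec)

def robustness_equivalence(inputs, noise_levels, vocab, trials=20):
    """At each noise level, measure label stability and rationale stability;
    for a faithful joint model the two curves should degrade together."""
    for p in noise_levels:
        stable, sims = [], []
        for tokens in inputs:
            label0, rat0 = predict(tokens)
            for _ in range(trials):
                label1, rat1 = predict(add_noise(tokens, p, vocab))
                stable.append(label1 == label0)
                sims.append(token_f1(rat0, rat1))
        print(f"noise={p:.1f}  label stability={mean(stable):.2f}  "
              f"rationale similarity={mean(sims):.2f}")

vocab = ["the", "cat", "sat", "because", "therefore", "happy", "outside"]
inputs = [["the", "cat", "sat", "because", "happy"],
          ["the", "cat", "sat", "therefore"]]
robustness_equivalence(inputs, [0.0, 0.2, 0.5], vocab)
```

The second measurement, feature importance agreement, would roughly compare importance scores (e.g., gradient-based attributions) of the input tokens computed for the label prediction against those computed for the rationale generation.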
Related papers
- RORA: Robust Free-Text Rationale Evaluation [52.98000150242775] (arXiv 2024-02-28)
We propose RORA, a Robust free-text Rationale evaluation against label leakage.
RORA consistently outperforms existing approaches in evaluating human-written, synthetic, or model-generated rationales.
We also show that RORA aligns well with human judgment, providing a more reliable and accurate measurement across diverse free-text rationales.
- AURA: Natural Language Reasoning for Aleatoric Uncertainty in Rationales [0.0] (arXiv 2024-02-22)
Rationales behind answers not only explain model decisions but also help language models reason well on complex reasoning tasks.
It is non-trivial to estimate the degree to which rationales are faithful enough to actually improve model performance.
We propose a way of dealing with imperfect rationales, which cause aleatoric uncertainty.
- Think Rationally about What You See: Continuous Rationale Extraction for Relation Extraction [86.90265683679469] (arXiv 2023-05-02)
Relation extraction aims to extract the potential relation between two entities from their context.
We propose a novel rationale extraction framework named RE2, which leverages two factors, continuity and sparsity.
Experiments on four datasets show that RE2 surpasses baselines.
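RE2's exact formulation is not reproduced here, but the two factors it leverages are standard in rationale extraction. A minimal, dependency-free sketch of sparsity and continuity penalties on a binary token mask (the penalty forms and demo masks are illustrative only):

```python
def sparsity_penalty(mask):
    """Sparsity factor: penalize the fraction of tokens selected as rationale."""
    return sum(mask) / len(mask)

def continuity_penalty(mask):
    """Continuity factor: penalize on/off transitions between adjacent tokens,
    so contiguous spans are preferred over scattered selections."""
    return sum(abs(a - b) for a, b in zip(mask, mask[1:])) / max(len(mask) - 1, 1)

# Two masks with equal sparsity; only the scattered one pays a high continuity cost.
contiguous = [0, 1, 1, 1, 0, 0]
scattered = [1, 0, 1, 0, 1, 0]
for name, mask in (("contiguous", contiguous), ("scattered", scattered)):
    print(name, sparsity_penalty(mask), continuity_penalty(mask))
```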
- Semantic Role Labeling Meets Definition Modeling: Using Natural Language to Describe Predicate-Argument Structures [104.32063681736349] (arXiv 2022-12-02)
We present an approach to describe predicate-argument structures using natural language definitions instead of discrete labels.
Our experiments and analyses on PropBank-style and FrameNet-style, dependency-based and span-based SRL also demonstrate that a flexible model with an interpretable output does not necessarily come at the expense of performance.
- Does Self-Rationalization Improve Robustness to Spurious Correlations? [19.553357015260687] (arXiv 2022-10-24)
We ask whether training models to self-rationalize can aid in their learning to solve tasks for the right reasons.
We evaluate robustness to spurious correlations in fine-tuned encoder-decoder and decoder-only models of six different sizes.
We find that while self-rationalization can improve robustness to spurious correlations in low-resource settings, it tends to hurt robustness in higher-resource settings.
- FRAME: Evaluating Simulatability Metrics for Free-Text Rationales [26.58948555913936] (arXiv 2022-07-02)
Free-text rationales aim to explain neural language model (LM) behavior more flexibly and intuitively via natural language.
To ensure rationale quality, it is important to have metrics for measuring rationales' faithfulness and plausibility.
We propose FRAME, a framework for evaluating free-text rationale simulatability metrics.
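The quantity that simulatability metrics build on is, roughly, the accuracy gain a rationale gives a simulator at predicting the task label. A minimal sketch, with a hypothetical `simulate` function standing in for a trained simulator model:

```python
from statistics import mean

def simulate(text, rationale):
    """Hypothetical simulator stand-in: a real setup trains a model to
    predict the label from the input, with or without the rationale."""
    cue = rationale if rationale is not None else text
    return "yes" if "supports" in cue else "no"

def simulatability_gap(examples):
    """Simulator accuracy with rationales minus accuracy without them."""
    acc_with = mean(float(simulate(ex["input"], ex["rationale"]) == ex["label"])
                    for ex in examples)
    acc_without = mean(float(simulate(ex["input"], None) == ex["label"])
                       for ex in examples)
    return acc_with - acc_without

examples = [
    {"input": "q1", "rationale": "the evidence supports the claim", "label": "yes"},
    {"input": "q2 supports nothing relevant", "rationale": "an irrelevant detail", "label": "no"},
]
print(simulatability_gap(examples))  # positive gap: rationales help the simulator
```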
- Rationale-Augmented Ensembles in Language Models [53.45015291520658] (arXiv 2022-07-02)
We reconsider rationale-augmented prompting for few-shot in-context learning.
We identify rationale sampling in the output space as the key component to robustly improve performance.
We demonstrate that rationale-augmented ensembles achieve more accurate and interpretable results than existing prompting approaches.
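The recipe above amounts to sampling (rationale, answer) pairs in the output space and aggregating the answers; a minimal sketch with a hypothetical sampler standing in for a temperature-sampled LM call:

```python
import random
from collections import Counter

def sample_rationale_and_answer(question):
    """Hypothetical stand-in for one temperature-sampled LM generation
    under a rationale-augmented (chain-of-thought style) prompt."""
    answer = random.choices(["A", "B"], weights=[0.7, 0.3])[0]
    return f"a sampled rationale for: {question}", answer

def rationale_augmented_ensemble(question, n_samples=25):
    """Sample diverse rationale-answer pairs and majority-vote the answers."""
    votes = Counter(sample_rationale_and_answer(question)[1]
                    for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(rationale_augmented_ensemble("Which option is supported by the passage?"))
```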
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714] (arXiv 2022-05-25)
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
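The paper derives counterfactual hypotheses from the logical predicates expressed in an explanation; the sketch below hard-codes a counterfactual and uses a toy NLI predictor just to show the consistency check itself:

```python
def nli_predict(premise, hypothesis):
    """Toy NLI stand-in; a real check would query the explained model."""
    return "entailment" if hypothesis in premise else "neutral"

def counterfactual_consistent(premise, hypothesis, counterfactual):
    """If the explanation's logic holds, the prediction should change on the
    counterfactual in the way the logic dictates (here: entailment is lost)."""
    return (nli_predict(premise, hypothesis) == "entailment"
            and nli_predict(premise, counterfactual) != "entailment")

premise = "a dog is running in the park"
print(counterfactual_consistent(premise, "dog is running", "dog is sleeping"))
```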
- Can Rationalization Improve Robustness? [39.741059642044874] (arXiv 2022-04-25)
We investigate whether neural NLP models can provide robustness to adversarial attacks in addition to being interpretable.
We generate various types of 'AddText' attacks for both token and sentence-level rationalization tasks.
Our experiments reveal that the rationale models show the promise to improve robustness, while they struggle in certain scenarios.
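A minimal sketch of the attack-and-check loop behind an 'AddText' evaluation; the toy sentiment predictor and distractor here are illustrative only, whereas the paper attacks trained rationale models with task-specific insertions:

```python
def addtext_attack_succeeds(example, distractor, predict):
    """Insert an unrelated distractor sentence and test whether the
    model's prediction flips (a successful attack)."""
    return predict(example) != predict(example + " " + distractor)

def toy_predict(text):
    """Hypothetical bag-of-words sentiment stand-in."""
    return "positive" if text.count("good") > text.count("bad") else "negative"

print(addtext_attack_succeeds("the movie was good",
                              "bad weather, bad seats", toy_predict))
```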
- SPECTRA: Sparse Structured Text Rationalization [0.0] (arXiv 2021-09-09)
We present a unified framework for deterministic extraction of structured explanations via constrained inference on a factor graph.
Our approach greatly eases training and rationale regularization, generally outperforming previous work on the plausibility of extracted explanations.
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.