Interlock-Free Multi-Aspect Rationalization for Text Classification
- URL: http://arxiv.org/abs/2205.06756v1
- Date: Fri, 13 May 2022 16:38:38 GMT
- Title: Interlock-Free Multi-Aspect Rationalization for Text Classification
- Authors: Shuangqi Li, Diego Antognini, Boi Faltings
- Abstract summary: We address the interlocking problem in the multi-aspect setting.
We propose a multi-stage training method incorporating an additional self-supervised contrastive loss.
Empirical results on the beer review dataset show that our method significantly improves rationalization performance.
- Score: 33.33452117387646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explanation is important for text classification tasks. One prevalent type of
explanation is rationales, which are text snippets of input text that suffice
to yield the prediction and are meaningful to humans. A lot of research on
rationalization has been based on the selective rationalization framework,
which has recently been shown to be problematic due to the interlocking
dynamics. In this paper, we address the interlocking problem in the
multi-aspect setting, where we aim to generate multiple rationales for
multiple outputs. More specifically, we propose a multi-stage training method
incorporating an additional self-supervised contrastive loss that helps to
generate more semantically diverse rationales. Empirical results on the beer
review dataset show that our method significantly improves rationalization
performance.
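The abstract describes the training objective only at a high level. As a rough illustration of how a self-supervised contrastive loss can encourage semantically diverse rationales across aspects, the sketch below penalizes similarity between pooled rationale embeddings of different aspects; the function name, shapes, and the log-mean-exp form are our assumptions, not the paper's actual implementation.

```python
import numpy as np

def diversity_contrastive_loss(rationale_embs, temperature=0.1):
    """Toy contrastive diversity penalty over per-aspect rationale embeddings.

    rationale_embs: array of shape (num_aspects, dim), one pooled embedding
    per aspect's extracted rationale. Returns a scalar that is large when
    rationales for different aspects are similar, and small when they are
    nearly orthogonal (i.e. semantically diverse).
    """
    # Unit-normalize so dot products become cosine similarities.
    z = rationale_embs / np.linalg.norm(rationale_embs, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    mask = ~np.eye(len(z), dtype=bool)   # drop self-similarity terms
    off = sim[mask]
    # Smooth maximum (log-mean-exp) of cross-aspect similarities.
    return np.log(np.mean(np.exp(off)))
```

Minimizing such a term alongside the usual prediction loss pushes the generators for different aspects toward selecting distinct, non-overlapping evidence.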
Related papers
- HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision [118.0818807474809]
This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision.
Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document.
arXiv Detail & Related papers (2023-05-23T16:53:49Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- Reasoning Circuits: Few-shot Multihop Question Generation with Structured Rationales [11.068901022944015]
Chain-of-thought rationale generation has been shown to improve performance on multi-step reasoning tasks.
We introduce a new framework for applying chain-of-thought inspired structured rationale generation to multi-hop question generation under a very low supervision regime.
arXiv Detail & Related papers (2022-11-15T19:36:06Z)
- Rationale-Augmented Ensembles in Language Models [53.45015291520658]
We reconsider rationale-augmented prompting for few-shot in-context learning.
We identify rationale sampling in the output space as the key component to robustly improve performance.
We demonstrate that rationale-augmented ensembles achieve more accurate and interpretable results than existing prompting approaches.
arXiv Detail & Related papers (2022-07-02T06:20:57Z)
- Fine-Grained Visual Entailment [51.66881737644983]
We propose an extension of the visual entailment task, where the goal is to predict the logical relationship of fine-grained knowledge elements within a piece of text to an image.
Unlike prior work, our method is inherently explainable and makes logical predictions at different levels of granularity.
We evaluate our method on a new dataset of manually annotated knowledge elements and show that our method achieves 68.18% accuracy at this challenging task.
arXiv Detail & Related papers (2022-03-29T16:09:38Z)
- SPECTRA: Sparse Structured Text Rationalization [0.0]
We present a unified framework for deterministic extraction of structured explanations via constrained inference on a factor graph.
Our approach greatly eases training and rationale regularization, generally outperforming previous work on the plausibility of extracted explanations.
arXiv Detail & Related papers (2021-09-09T20:39:56Z)
- Distribution Matching for Rationalization [30.54889533406428]
Rationalization aims to extract pieces of input text as rationales to justify neural network predictions on text classification tasks.
We propose a novel rationalization method that matches the distributions of rationales and input text in both the feature space and output space.
arXiv Detail & Related papers (2021-06-01T08:49:32Z)
- Variable Instance-Level Explainability for Text Classification [9.147707153504117]
We propose a method for extracting variable-length explanations using a set of different feature scoring methods at instance-level.
Our method consistently provides more faithful explanations compared to previous fixed-length and fixed-feature scoring methods for rationale extraction.
arXiv Detail & Related papers (2021-04-16T16:53:48Z)
- Narrative Incoherence Detection [76.43894977558811]
We propose the task of narrative incoherence detection as a new arena for inter-sentential semantic understanding.
Given a multi-sentence narrative, the task is to decide whether there exist any semantic discrepancies in the narrative flow.
arXiv Detail & Related papers (2020-12-21T07:18:08Z)
- Invariant Rationalization [84.1861516092232]
A typical rationalization criterion, i.e. maximum mutual information (MMI), finds the rationale that maximizes the prediction performance based only on the rationale.
We introduce a game-theoretic invariant rationalization criterion where the rationales are constrained to enable the same predictor to be optimal across different environments.
We show both theoretically and empirically that the proposed rationales can rule out spurious correlations, generalize better to different test scenarios, and align better with human judgments.
arXiv Detail & Related papers (2020-03-22T00:50:27Z)
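For context on the entry above, the maximum mutual information (MMI) criterion and the invariance constraint it is extended with can be written roughly as follows; the notation is ours, not quoted from the paper:

```latex
% MMI: select a binary mask m over input X whose selected tokens Z
% carry maximal information about the label Y,
\max_{m \in \mathcal{S}} \; I\big(Y;\, Z\big), \qquad Z = m \odot X,
% where \mathcal{S} encodes sparsity and contiguity constraints on the mask.
% The invariant criterion additionally requires the rationale to screen off
% the environment variable E from the label:
\text{s.t.} \quad Y \perp\!\!\!\perp E \mid Z .
```

In practice the mutual information term is approximated by the performance of a predictor that reads only the selected tokens, which is what makes rationales that exploit environment-specific spurious correlations violate the invariance constraint.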
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this list (including all information) and is not responsible for any consequences of its use.