Invariant Rationalization
- URL: http://arxiv.org/abs/2003.09772v1
- Date: Sun, 22 Mar 2020 00:50:27 GMT
- Title: Invariant Rationalization
- Authors: Shiyu Chang, Yang Zhang, Mo Yu, Tommi S. Jaakkola
- Abstract summary: A typical rationalization criterion, i.e. maximum mutual information (MMI), finds the rationale that maximizes the prediction performance based only on the rationale.
We introduce a game-theoretic invariant rationalization criterion where the rationales are constrained to enable the same predictor to be optimal across different environments.
We show both theoretically and empirically that the proposed rationales can rule out spurious correlations, generalize better to different test scenarios, and align better with human judgments.
- Score: 84.1861516092232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Selective rationalization improves neural network interpretability by
identifying a small subset of input features -- the rationale -- that best
explains or supports the prediction. A typical rationalization criterion, i.e.
maximum mutual information (MMI), finds the rationale that maximizes the
prediction performance based only on the rationale. However, MMI can be
problematic because it picks up spurious correlations between the input
features and the output. Instead, we introduce a game-theoretic invariant
rationalization criterion where the rationales are constrained to enable the
same predictor to be optimal across different environments. We show both
theoretically and empirically that the proposed rationales can rule out
spurious correlations, generalize better to different test scenarios, and align
better with human judgments. Our data and code are available.
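For concreteness, the two criteria contrasted in the abstract can be written compactly. The notation below (input X, rationale Z, label Y, environment indicator E) is standard in this literature, but the formulation is our paraphrase of the abstract, not a quotation from the paper:

```latex
% MMI: choose the rationale Z (a small subset of the input X)
% that is maximally informative about the label Y.
\max_{Z \subseteq X} \; I(Y; Z)

% Invariant rationalization keeps the MMI objective but adds an
% invariance constraint: given Z, the label must be independent
% of the environment E, ruling out environment-specific
% (spurious) correlations.
\max_{Z \subseteq X} \; I(Y; Z) \quad \text{s.t.} \quad Y \perp E \mid Z
```

Because the conditional-independence constraint cannot be enforced directly, the abstract's "game-theoretic" framing pits an environment-agnostic predictor against an environment-aware one. A minimal sketch of a generator objective in that spirit, with the names and the exact penalty form being our assumptions rather than the paper's code:

```python
def generator_loss(env_agnostic_loss: float,
                   env_aware_loss: float,
                   lam: float = 1.0) -> float:
    """Sketch of the invariance game: the rationale generator is
    penalized whenever the environment-aware predictor beats the
    environment-agnostic one, i.e. whenever knowing the environment
    E still helps once the rationale Z is given (assumed form)."""
    invariance_gap = max(env_agnostic_loss - env_aware_loss, 0.0)
    return env_agnostic_loss + lam * invariance_gap
```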
Related papers
- Enhancing the Rationale-Input Alignment for Self-explaining Rationalization [22.74436500022893]
We introduce a novel approach called DAR (Discriminatively Aligned Rationalization) to align the selected rationale and the original input.
Experiments on two widely used real-world benchmarks show that the proposed method significantly improves the explanation quality.
arXiv Detail & Related papers (2023-12-07T07:37:15Z)
- Towards Trustworthy Explanation: On Causal Rationalization [9.48539398357156]
We propose a new model of rationalization based on two causal desiderata, non-spuriousness and efficiency.
The superior performance of the proposed causal rationalization is demonstrated on real-world review and medical datasets.
arXiv Detail & Related papers (2023-06-25T03:34:06Z)
- Unsupervised Selective Rationalization with Noise Injection [7.17737088382948]
Unsupervised selective rationalization produces rationales alongside predictions by chaining two jointly trained components: a rationale generator and a predictor.
We introduce a novel training technique that effectively limits generation of implausible rationales by injecting noise between the generator and the predictor.
We achieve sizeable improvements in rationale plausibility and task accuracy over the state-of-the-art across a variety of tasks, including our new benchmark.
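A minimal sketch of the noise-injection idea, assuming the generator emits per-token selection logits that are perturbed before the predictor sees the mask; all names here are illustrative, not the paper's API:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_rationale(selection_logits: np.ndarray,
                     noise_std: float = 1.0) -> np.ndarray:
    """Inject Gaussian noise between the generator and the predictor:
    perturb the selection logits before thresholding, so the predictor
    cannot exploit brittle, implausible selections (illustrative
    stand-in for the paper's training technique)."""
    noisy = selection_logits + rng.normal(0.0, noise_std,
                                          size=selection_logits.shape)
    return (noisy > 0.0).astype(np.float32)  # binary token mask

# Example: the weakly positive logit at index 2 may flip under noise.
mask = select_rationale(np.array([2.5, -1.0, 0.3, -3.0]))
```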
arXiv Detail & Related papers (2023-05-27T17:34:36Z)
- Bounding Counterfactuals under Selection Bias [60.55840896782637]
We propose a first algorithm to address both identifiable and unidentifiable queries.
We prove that, in spite of the missingness induced by the selection bias, the likelihood of the available data is unimodal.
arXiv Detail & Related papers (2022-07-26T10:33:10Z)
- Rationale-Augmented Ensembles in Language Models [53.45015291520658]
We reconsider rationale-augmented prompting for few-shot in-context learning.
We identify rationale sampling in the output space as the key component to robustly improve performance.
We demonstrate that rationale-augmented ensembles achieve more accurate and interpretable results than existing prompting approaches.
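A schematic of rationale sampling with answer-level aggregation; the LM call is stubbed so the sketch runs end to end, and every name here is hypothetical rather than taken from the paper:

```python
import random
from collections import Counter
from typing import Tuple

def sample_rationale_and_answer(prompt: str) -> Tuple[str, str]:
    """Placeholder for a sampled rationale-augmented LM completion."""
    answer = random.choice(["yes", "yes", "no"])
    return f"sampled reasoning for {prompt!r}", answer

def ensemble_answer(prompt: str, n_samples: int = 8) -> str:
    """Rationale sampling in the output space: draw several
    rationale-augmented completions and majority-vote on the final
    answers; the sampled rationales vary, only answers are counted."""
    answers = [sample_rationale_and_answer(prompt)[1]
               for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(ensemble_answer("Is the review positive?"))
```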
arXiv Detail & Related papers (2022-07-02T06:20:57Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features that guides the model prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Understanding Interlocking Dynamics of Cooperative Rationalization [90.6863969334526]
Selective rationalization explains the prediction of complex neural networks by finding a small subset of the input that is sufficient to predict the neural model output.
We reveal a major problem with this cooperative rationalization paradigm: model interlocking.
We propose a new rationalization framework, called A2R, which introduces a third component into the architecture, a predictor driven by soft attention as opposed to selection.
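A sketch of the architectural contrast, assuming token features `x` of shape (tokens, dim) and per-token scores; the function names are ours, not A2R's:

```python
import numpy as np

def selection_predictor_input(x: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Hard selection: the predictor sees only the chosen tokens, which
    is what enables interlocking; a poor early selection starves the
    predictor of evidence and the pair gets stuck reinforcing it."""
    return x * (scores > 0.0).astype(x.dtype)[:, None]

def attention_predictor_input(x: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """A2R's third component (sketched): a predictor driven by soft
    attention over all tokens, so gradient signal still reaches
    unselected but useful features."""
    weights = np.exp(scores - scores.max())
    return x * (weights / weights.sum())[:, None]
```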
arXiv Detail & Related papers (2021-10-26T17:39:18Z)
- Rationales for Sequential Predictions [117.93025782838123]
Sequence models are a critical component of modern NLP systems, but their predictions are difficult to explain.
We consider model explanations through rationales: subsets of context that can explain individual model predictions.
We propose an efficient greedy algorithm to approximate this objective.
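An illustrative rendering of the greedy approximation, with `target_prob` standing in for a model call that scores how well a context subset preserves the original prediction; the stopping rule and tie-breaking are assumptions, not the paper's exact procedure:

```python
from typing import Callable, Set

def greedy_rationale(n_positions: int,
                     target_prob: Callable[[Set[int]], float],
                     threshold: float = 0.5) -> Set[int]:
    """Greedily add the context position that most increases the
    model's probability of its original prediction, stopping once
    that probability clears a threshold."""
    selected: Set[int] = set()
    remaining = set(range(n_positions))
    while remaining and target_prob(selected) < threshold:
        best = max(remaining, key=lambda i: target_prob(selected | {i}))
        selected.add(best)
        remaining.remove(best)
    return selected

# Toy check: only positions 0 and 2 matter for the prediction.
toy = lambda s: 0.3 * len(s & {0, 2}) + 0.1
print(greedy_rationale(4, toy))  # -> {0, 2}
```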
arXiv Detail & Related papers (2021-09-14T01:25:15Z)
- SPECTRA: Sparse Structured Text Rationalization [0.0]
We present a unified framework for deterministic extraction of structured explanations via constrained inference on a factor graph.
Our approach greatly eases training and rationale regularization, generally outperforming previous work on the plausibility of extracted explanations.
arXiv Detail & Related papers (2021-09-09T20:39:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.