Why do you think that? Exploring Faithful Sentence-Level Rationales
Without Supervision
- URL: http://arxiv.org/abs/2010.03384v1
- Date: Wed, 7 Oct 2020 12:54:28 GMT
- Title: Why do you think that? Exploring Faithful Sentence-Level Rationales
Without Supervision
- Authors: Max Glockner, Ivan Habernal, Iryna Gurevych
- Abstract summary: We propose a differentiable training framework to create models that output faithful rationales on a sentence level.
Our model solves the task based on each rationale individually and learns to assign high scores to those which solved the task best.
- Score: 60.62434362997016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evaluating the trustworthiness of a model's prediction is essential for
differentiating between 'right for the right reasons' and 'right for the wrong
reasons'. Identifying textual spans that determine the target label, known as
faithful rationales, usually relies on pipeline approaches or reinforcement
learning. However, such methods either require supervision and thus costly
annotation of the rationales or employ non-differentiable models. We propose a
differentiable training framework to create models which output faithful
rationales on a sentence level, by solely applying supervision on the target
task. To achieve this, our model solves the task based on each rationale
individually and learns to assign high scores to those which solved the task
best. Our evaluation on three different datasets shows competitive results
compared to a standard BERT black box while exceeding a pipeline counterpart's
performance in two cases. We further exploit the transparent decision-making
process of these models to prefer selecting the correct rationales by applying
direct supervision, thereby boosting the performance on the rationale-level.
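A minimal sketch of the training idea from the abstract, assuming pre-computed sentence embeddings from any encoder (e.g., BERT); the module name WeightedRationaleClassifier and the soft mixture over per-sentence predictions are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch: every candidate sentence is classified on its own, a
# scorer ranks the sentences, and the final prediction is a differentiable
# mixture of the per-sentence predictions, trained with the task label only
# (no rationale supervision).
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedRationaleClassifier(nn.Module):
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.per_sentence_classifier = nn.Linear(hidden_dim, num_labels)
        self.rationale_scorer = nn.Linear(hidden_dim, 1)

    def forward(self, sentence_embeddings: torch.Tensor):
        # sentence_embeddings: [num_sentences, hidden_dim], one row per sentence.
        per_sentence_logits = self.per_sentence_classifier(sentence_embeddings)
        scores = self.rationale_scorer(sentence_embeddings).squeeze(-1)
        weights = F.softmax(scores, dim=0)  # soft selection over candidate rationales
        # Mixture of per-sentence label distributions, weighted by the scores.
        mixed_probs = (weights.unsqueeze(-1) * F.softmax(per_sentence_logits, dim=-1)).sum(0)
        return mixed_probs, weights

# Training step: the task label supervises both heads, so the scorer is pushed
# towards sentences whose individual prediction solves the task best.
model = WeightedRationaleClassifier(hidden_dim=768, num_labels=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
sentences = torch.randn(5, 768)   # 5 candidate rationale sentences (dummy embeddings)
label = torch.tensor([1])
mixed_probs, weights = model(sentences)
loss = F.nll_loss(torch.log(mixed_probs + 1e-12).unsqueeze(0), label)
loss.backward()
optimizer.step()
# At inference, the highest-weighted sentence is returned as the rationale.
```

Because each sentence must solve the task on its own before it is weighted, the sentence selected under this setup is faithful by construction.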
Related papers
- Evaluating Human Alignment and Model Faithfulness of LLM Rationale [66.75309523854476]
We study how well large language models (LLMs) explain their generations through rationales.
We show that prompting-based methods are less "faithful" than attribution-based explanations.
arXiv Detail & Related papers (2024-06-28T20:06:30Z)
- Improving Language Model Reasoning with Self-motivated Learning [60.779625789039486]
The Self-motivated Learning framework motivates the model itself to automatically generate rationales on existing datasets.
We train a reward model on the ranked rationales to evaluate their quality, and improve reasoning performance through reinforcement learning.
arXiv Detail & Related papers (2024-04-10T14:05:44Z)
- Plausible Extractive Rationalization through Semi-Supervised Entailment Signal [33.35604728012685]
We take a semi-supervised approach to optimize for the plausibility of extracted rationales.
We adopt a pre-trained natural language inference (NLI) model and further fine-tune it on a small set of supervised rationales.
We show that, by enforcing the alignment agreement between the explanation and answer in a question-answering task, the performance can be improved without access to ground truth labels.
arXiv Detail & Related papers (2024-02-13T14:12:32Z)
- Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training [49.3242278912771]
Multimodal reasoning is a challenging task that requires models to reason across multiple modalities to answer questions.
Existing approaches have made progress by incorporating language and visual modalities into a two-stage reasoning framework.
We propose MC-CoT, a self-consistency training strategy that generates multiple rationales and answers, subsequently selecting the most accurate through a voting process.
arXiv Detail & Related papers (2023-11-23T17:09:48Z)
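A minimal illustration of the self-consistency voting step summarized in the entry above: several rationale/answer pairs are sampled and the majority answer is kept together with one supporting rationale. The sample_fn placeholder stands in for an actual model call; MC-CoT's training procedure is not shown.

```python
# Illustrative majority voting over sampled rationale/answer pairs
# (the sampling function is a placeholder, not MC-CoT's real pipeline).
from collections import Counter
from typing import Callable, List, Tuple

def self_consistent_answer(sample_fn: Callable[[], Tuple[str, str]],
                           num_samples: int = 8) -> Tuple[str, str]:
    # Each sample is a (rationale, answer) pair produced by the model.
    samples: List[Tuple[str, str]] = [sample_fn() for _ in range(num_samples)]
    votes = Counter(answer for _, answer in samples)
    best_answer, _ = votes.most_common(1)[0]          # majority-voted answer
    # Keep the first rationale that supports the winning answer.
    best_rationale = next(r for r, a in samples if a == best_answer)
    return best_rationale, best_answer
```

In the paper this voting is used as a training strategy; the sketch only shows the selection step.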
- Leveraging Uncertainty Estimates To Improve Classifier Performance [4.4951754159063295]
Binary classification involves predicting the label of an instance based on whether the model score for the positive class exceeds a threshold chosen to meet the application requirements.
However, model scores are often not aligned with the true positivity rate.
This is especially true when training involves differential sampling across classes or when there is distributional drift between the train and test settings.
arXiv Detail & Related papers (2023-11-20T12:40:25Z)
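A small, generic example of the problem flagged in the entry above: when raw scores are miscalibrated, one common remedy (not necessarily that paper's method) is to recalibrate on held-out data and pick the decision threshold from the calibrated scores instead of a fixed 0.5 cut-off.

```python
# Recalibrate scores on held-out data, then choose a task-driven threshold.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

# Toy, class-imbalanced data as a stand-in for a real task.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Isotonic calibration maps raw scores closer to true positive rates.
clf = CalibratedClassifierCV(LogisticRegression(max_iter=1000), method="isotonic", cv=5)
clf.fit(X_train, y_train)
val_scores = clf.predict_proba(X_val)[:, 1]

# Choose the threshold that maximises F1 on the validation split.
precision, recall, thresholds = precision_recall_curve(y_val, val_scores)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
best_threshold = thresholds[np.argmax(f1[:-1])]
print(f"decision threshold: {best_threshold:.3f}")
```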
- Make Your Decision Convincing! A Unified Two-Stage Framework: Self-Attribution and Decision-Making [24.906886146275127]
We propose a unified two-stage framework known as Self-Attribution and Decision-Making (SADM).
We demonstrate that our framework not only establishes a more reliable link between the generated rationale and the model decision but also achieves competitive results in task performance and rationale quality.
arXiv Detail & Related papers (2023-10-20T15:59:57Z)
- Evaluating the Evaluators: Are Current Few-Shot Learning Benchmarks Fit for Purpose? [11.451691772914055]
This paper presents the first investigation into task-level evaluation.
We measure the accuracy of performance estimators in the few-shot setting.
We examine the reasons for the failure of evaluators usually thought of as being robust.
arXiv Detail & Related papers (2023-07-06T02:31:38Z)
- Robust Outlier Rejection for 3D Registration with Variational Bayes [70.98659381852787]
We develop a novel variational non-local network-based outlier rejection framework for robust alignment.
We propose a voting-based inlier searching strategy to cluster the high-quality hypothetical inliers for transformation estimation.
arXiv Detail & Related papers (2023-04-04T03:48:56Z)
- An Interpretable Loan Credit Evaluation Method Based on Rule Representation Learner [8.08640000394814]
We design an intrinsically interpretable model based on RRL (Rule Representation Learner) for the Lending Club dataset.
During training, we apply techniques from prior research to effectively train the binary weights.
Our model is used to test the correctness of the explanations generated by the post-hoc method.
arXiv Detail & Related papers (2023-04-03T05:55:04Z)
- An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction [84.49035467829819]
We show that it is possible to better manage the trade-off between rationale conciseness and end-task performance by optimizing a bound on the Information Bottleneck (IB) objective.
Our fully unsupervised approach jointly learns an explainer that predicts sparse binary masks over sentences, and an end-task predictor that considers only the extracted rationale.
arXiv Detail & Related papers (2020-05-01T23:26:41Z)
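A hypothetical sketch of the explainer/predictor setup described in the entry above, with a plain sparsity penalty standing in for the Information Bottleneck bound; the module names and the relaxed (non-binary) mask are illustrative assumptions, not the paper's exact objective.

```python
# Jointly train an explainer that gates sentences and a predictor that only
# sees the gated (masked) sentences; an L1-style sparsity term stands in for
# the IB bound, so this is an illustration of the setup, not the method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedRationaleModel(nn.Module):
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.explainer = nn.Linear(hidden_dim, 1)       # per-sentence keep probability
        self.predictor = nn.Linear(hidden_dim, num_labels)

    def forward(self, sentence_embeddings: torch.Tensor):
        # sentence_embeddings: [num_sentences, hidden_dim]
        keep_probs = torch.sigmoid(self.explainer(sentence_embeddings)).squeeze(-1)
        # Relaxed, differentiable mask; at test time this would be binarised.
        masked = sentence_embeddings * keep_probs.unsqueeze(-1)
        logits = self.predictor(masked.mean(dim=0, keepdim=True))
        return logits, keep_probs

model = MaskedRationaleModel(hidden_dim=768, num_labels=3)
sentences, label = torch.randn(6, 768), torch.tensor([2])
logits, keep_probs = model(sentences)
sparsity_weight = 0.1                                    # plays the role of the IB trade-off
loss = F.cross_entropy(logits, label) + sparsity_weight * keep_probs.mean()
loss.backward()
```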
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.