Rationalization for Explainable NLP: A Survey
- URL: http://arxiv.org/abs/2301.08912v1
- Date: Sat, 21 Jan 2023 07:58:03 GMT
- Title: Rationalization for Explainable NLP: A Survey
- Authors: Sai Gurrapu, Ajay Kulkarni, Lifu Huang, Ismini Lourentzou, Laura
Freeman, Feras A. Batarseh
- Abstract summary: Black-box models make it difficult to understand the internals of a system and the process it takes to arrive at an output.
Numerical (LIME, Shapley) and visualization (saliency heatmap) explainability techniques are helpful; however, they are insufficient because they require specialized knowledge.
Rationalization justifies a model's output by providing a natural language explanation (rationale).
- Score: 6.843420921654749
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent advances in deep learning have improved the performance of many
Natural Language Processing (NLP) tasks such as translation,
question-answering, and text classification. However, this improvement comes at
the expense of model explainability. Black-box models make it difficult to
understand the internals of a system and the process it takes to arrive at an
output. Numerical (LIME, Shapley) and visualization (saliency heatmap)
explainability techniques are helpful; however, they are insufficient because
they require specialized knowledge. These factors led rationalization to emerge
as a more accessible explainability technique in NLP. Rationalization justifies a
model's output by providing a natural language explanation (rationale). Recent
improvements in natural language generation have made rationalization an
attractive technique because it is intuitive, human-comprehensible, and
accessible to non-technical users. Because rationalization is a relatively new
field, its literature remains disorganized. As the first survey of this field, this
work analyzes the rationalization literature in NLP from 2007 to 2022. This survey
presents available methods, explainability evaluations, code, and datasets used
across various NLP tasks that
use rationalization. Further, a new subfield in Explainable AI (XAI), namely,
Rational AI (RAI), is introduced to advance the current state of
rationalization. A discussion on observed insights, challenges, and future
directions is provided to point to promising research opportunities.
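To make the contrast between attribution-style explanations and rationalization concrete, the sketch below pairs a LIME token-attribution explanation with a prompted free-text rationale. This is a minimal illustration assuming the lime, scikit-learn, and transformers packages are installed; the toy sentiment classifier, example review, model choice (flan-t5-base), and prompt wording are assumptions for illustration, not methods from the surveyed work.

```python
# Sketch: numerical explanation (LIME) vs. natural-language rationale.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer
from transformers import pipeline

# Toy binary sentiment classifier (assumption: any text classifier would do).
texts = ["the movie was wonderful", "a delightful, moving film",
         "terrible pacing and a dull plot", "I hated every minute of it"]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

review = "a dull but occasionally moving film"

# 1) Numerical explanation: per-token weights, which take some expertise to read.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(review, clf.predict_proba, num_features=4)
print(exp.as_list())  # e.g. [("dull", -0.3), ("moving", 0.2), ...]

# 2) Rationalization: ask a generative model for a plain-language rationale.
#    Model and prompt are illustrative assumptions.
generator = pipeline("text2text-generation", model="google/flan-t5-base")
prompt = (f"Review: {review}\n"
          "Is the review positive or negative? Answer and explain why.")
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```

The first output requires interpreting signed feature weights, while the second reads as an ordinary sentence, which is the accessibility argument the abstract makes for rationalization.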
Related papers
- Make LLMs better zero-shot reasoners: Structure-orientated autonomous reasoning [52.83539473110143]
We introduce a novel structure-oriented analysis method to help Large Language Models (LLMs) better understand a question.
To further improve the reliability in complex question-answering tasks, we propose a multi-agent reasoning system, Structure-oriented Autonomous Reasoning Agents (SARA).
Extensive experiments verify the effectiveness of the proposed reasoning system. Surprisingly, in some cases, the system even surpasses few-shot methods.
arXiv Detail & Related papers (2024-10-18T05:30:33Z) - Characterizing Large Language Models as Rationalizers of
Knowledge-intensive Tasks [6.51301154858045]
Large language models (LLMs) are proficient at generating fluent text with minimal task-specific supervision.
We consider the task of generating knowledge-guided rationalization in natural language by using expert-written examples in a few-shot manner.
Surprisingly, crowd-workers preferred knowledge-grounded rationales over crowdsourced rationalizations, citing their factuality, sufficiency, and comprehensive refutations.
arXiv Detail & Related papers (2023-11-09T01:04:44Z) - Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and a "chain-of-thought" knowledge distillation fine-tuning technique to assess the performance of the models.
arXiv Detail & Related papers (2023-10-02T01:00:50Z) - Situated Natural Language Explanations [54.083715161895036]
Natural language explanations (NLEs) are among the most accessible tools for explaining decisions to humans.
Existing NLE research perspectives do not take the audience into account.
Situated NLE provides a perspective and facilitates further research on the generation and evaluation of explanations.
arXiv Detail & Related papers (2023-08-27T14:14:28Z) - Testing the effectiveness of saliency-based explainability in NLP using
randomized survey-based experiments [0.6091702876917281]
Much work in Explainable AI has aimed to devise explanation methods that give humans insights into the workings and predictions of NLP models.
Innate human tendencies and biases can hinder people's understanding of these explanations.
We designed a randomized survey-based experiment to understand the effectiveness of saliency-based Post-hoc explainability methods in Natural Language Processing.
arXiv Detail & Related papers (2022-11-25T08:49:01Z) - Towards Formal Approximated Minimal Explanations of Neural Networks [0.0]
Deep neural networks (DNNs) are now being used in numerous domains.
DNNs are "black boxes" and cannot be interpreted by humans.
We propose an efficient, verification-based method for finding minimal explanations.
arXiv Detail & Related papers (2022-10-25T11:06:37Z) - Explanations from Large Language Models Make Small Reasoners Better [61.991772773700006]
We show that our method can consistently and significantly outperform finetuning baselines across different settings.
As a side benefit, human evaluation shows that our method can generate high-quality explanations to justify its predictions.
arXiv Detail & Related papers (2022-10-13T04:50:02Z) - NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning [59.16962123636579]
This paper proposes a new take on Prolog-based inference engines.
We replace handcrafted rules with a combination of neural language modeling, guided generation, and semi dense retrieval.
Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA.
arXiv Detail & Related papers (2022-09-16T00:54:44Z) - A Survey of the State of Explainable AI for Natural Language Processing [16.660110121500125]
This survey presents an overview of the current state of Explainable AI (XAI).
We discuss the main categorization of explanations, as well as the various ways explanations can be arrived at and visualized.
We detail the operations and explainability techniques currently available for generating explanations for NLP model predictions, to serve as a resource for model developers in the community.
arXiv Detail & Related papers (2020-10-01T22:33:21Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works directed toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)