INTERACTION: A Generative XAI Framework for Natural Language Inference
Explanations
- URL: http://arxiv.org/abs/2209.01061v1
- Date: Fri, 2 Sep 2022 13:52:39 GMT
- Title: INTERACTION: A Generative XAI Framework for Natural Language Inference
Explanations
- Authors: Jialin Yu, Alexandra I. Cristea, Anoushka Harit, Zhongtian Sun,
Olanrewaju Tahir Aduragba, Lei Shi, Noura Al Moubayed
- Abstract summary: Current XAI approaches only focus on delivering a single explanation.
This paper proposes a generative XAI framework, INTERACTION (explaIn aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder)
Our novel framework presents explanations in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation.
- Score: 58.062003028768636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: XAI with natural language processing aims to produce human-readable
explanations as evidence for AI decision-making, which addresses explainability
and transparency. However, from an HCI perspective, the current approaches only
focus on delivering a single explanation, which fails to account for the
diversity of human thoughts and experiences in language. This paper thus
addresses this gap, by proposing a generative XAI framework, INTERACTION
(explaIn aNd predicT thEn queRy with contextuAl CondiTional varIational
autO-eNcoder). Our novel framework presents explanations in two steps: (step
one) Explanation and Label Prediction; and (step two) Diverse Evidence
Generation. We conduct intensive experiments with the Transformer architecture
on a benchmark dataset, e-SNLI. Our method achieves competitive or better
performance against state-of-the-art baseline models on explanation generation
(up to 4.7% gain in BLEU) and prediction (up to 4.4% gain in accuracy) in step
one; it can also generate multiple diverse explanations in step two.
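The two-step design can be illustrated with a toy sketch: step one predicts a label and an explanation from the premise-hypothesis pair, and step two repeatedly samples a latent variable (as a conditional VAE would) to decode diverse explanations for the same input. Everything below, the vocabulary, the encoder, and the decoder weights, is a hypothetical stand-in for illustration, not the paper's actual Transformer/CVAE architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary and decoder weights; illustrative stand-ins
# for the paper's Transformer encoder and CVAE decoder, not its real model.
VOCAB = ["entailment", "because", "a", "dog", "is", "an", "animal", "runs", "outside"]
D_LATENT, D_CTX = 4, 8
W_dec = rng.normal(size=(D_LATENT + D_CTX, len(VOCAB)))

def encode_context(text: str) -> np.ndarray:
    """Stand-in encoder: bucket words into a fixed-size context vector."""
    v = np.zeros(D_CTX)
    for w in text.lower().split():
        v[sum(map(ord, w)) % D_CTX] += 1.0
    return v / max(1.0, float(np.linalg.norm(v)))

def sample_explanation(context: np.ndarray, length: int = 5) -> str:
    """Step two: each call draws a fresh latent z, so repeated calls
    decode different (diverse) explanations for the same input."""
    z = rng.normal(size=D_LATENT)                  # latent draw -> diversity
    logits = np.concatenate([z, context]) @ W_dec  # toy "decoder"
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return " ".join(rng.choice(VOCAB, size=length, p=probs))

ctx = encode_context("A dog runs outside. A dog is an animal.")
samples = [sample_explanation(ctx) for _ in range(3)]
for s in samples:
    print(s)
```

Because a new z is drawn per call, the same context yields a distribution over explanations rather than a single one, which is the property the framework's step two targets.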
Related papers
- Is Contrasting All You Need? Contrastive Learning for the Detection and Attribution of AI-generated Text [4.902089836908786]
WhosAI is a triplet-network contrastive learning framework designed to predict whether a given input text has been generated by humans or AI.
We show that our proposed framework achieves outstanding results in both the Turing Test and Authorship tasks.
arXiv Detail & Related papers (2024-07-12T15:44:56Z)
- Solving the enigma: Deriving optimal explanations of deep networks [3.9584068556746246]
We propose a novel framework designed to enhance the explainability of deep networks.
Our framework integrates various explanations from established XAI methods and employs a non-explanation to construct an optimal explanation.
Our results suggest that optimal explanations based on specific criteria are derivable.
arXiv Detail & Related papers (2024-05-16T11:49:08Z)
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- Towards Feasible Counterfactual Explanations: A Taxonomy Guided Template-based NLG Method [0.5003525838309206]
Counterfactual Explanations (cf-XAI) describe the smallest changes in feature values necessary to change an outcome from one class to another.
Many cf-XAI methods neglect the feasibility of those changes.
We introduce a novel approach for presenting cf-XAI in natural language (Natural-XAI).
arXiv Detail & Related papers (2023-10-03T12:48:57Z)
- DiaASQ: A Benchmark of Conversational Aspect-based Sentiment Quadruple Analysis [84.80347062834517]
We introduce DiaASQ, aiming to detect the quadruple of target-aspect-opinion-sentiment in a dialogue.
We manually construct a large-scale high-quality DiaASQ dataset in both Chinese and English languages.
We develop a neural model to benchmark the task, which advances in effectively performing end-to-end quadruple prediction.
arXiv Detail & Related papers (2022-11-10T17:18:20Z)
- On the Evaluation of the Plausibility and Faithfulness of Sentiment Analysis Explanations [2.071923272918415]
We propose different metrics and techniques to evaluate the explainability of SA models from two angles.
First, we evaluate the strength of the extracted "rationales" in faithfully explaining the predicted outcome.
Second, we measure the agreement between ExAI methods and human judgment on a homegrown dataset.
arXiv Detail & Related papers (2022-10-13T11:29:17Z)
- A Latent-Variable Model for Intrinsic Probing [93.62808331764072]
We propose a novel latent-variable formulation for constructing intrinsic probes.
We find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
arXiv Detail & Related papers (2022-01-20T15:01:12Z)
- Aspect Sentiment Quad Prediction as Paraphrase Generation [53.33072918744124]
We introduce the Aspect Sentiment Quad Prediction (ASQP) task, aiming to jointly detect all sentiment elements in quads for a given opinionated sentence.
We propose a novel Paraphrase modeling paradigm to cast the ASQP task to a paraphrase generation process.
In this way, the semantics of the sentiment elements can be fully exploited by learning to generate them in natural language form.
arXiv Detail & Related papers (2021-10-02T12:57:27Z)
- Generative Language-Grounded Policy in Vision-and-Language Navigation with Bayes' Rule [80.0853069632445]
Vision-and-language navigation (VLN) is a task in which an agent is embodied in a realistic 3D environment and follows an instruction to reach the goal node.
In this paper, we design and investigate a generative language-grounded policy which uses a language model to compute the distribution over all possible instructions.
In experiments, we show that the proposed generative approach outperforms the discriminative approach in the Room-2-Room (R2R) and Room-4-Room (R4R) datasets, especially in the unseen environments.
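The generative policy can be sketched with Bayes' rule: rather than scoring actions directly given an instruction, score how likely each candidate action makes the instruction, then invert. The word-overlap scorer below is a hypothetical stand-in for the paper's language model, and the uniform action prior is an assumption made for simplicity.

```python
import math

# Toy candidate actions; in VLN these would be navigable nodes, not words.
ACTIONS = ["forward", "left", "right", "stop"]

def instruction_loglik(instruction: str, action: str) -> float:
    """Stand-in for log p(instruction | action, state): a hypothetical
    scorer that rewards overlap between instruction words and the action."""
    words = instruction.lower().split()
    return math.log(1e-3 + sum(1.0 for w in words if w == action))

def posterior(instruction: str) -> dict[str, float]:
    """Bayes' rule with a uniform prior: p(a | instr) ∝ p(instr | a)."""
    logps = {a: instruction_loglik(instruction, a) for a in ACTIONS}
    m = max(logps.values())
    unnorm = {a: math.exp(lp - m) for a, lp in logps.items()}
    z = sum(unnorm.values())
    return {a: p / z for a, p in unnorm.items()}

post = posterior("turn left at the chair")
best = max(post, key=post.get)
print(best)  # the action that best "explains" the instruction
```

The design choice this illustrates: a generative policy reuses the language model's ability to score whole instructions, which is what helps generalization to unseen environments in the paper's experiments.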
arXiv Detail & Related papers (2020-09-16T16:23:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.