Towards Feasible Counterfactual Explanations: A Taxonomy Guided Template-based NLG Method
- URL: http://arxiv.org/abs/2310.02019v1
- Date: Tue, 3 Oct 2023 12:48:57 GMT
- Title: Towards Feasible Counterfactual Explanations: A Taxonomy Guided Template-based NLG Method
- Authors: Pedram Salimi, Nirmalie Wiratunga, David Corsar, Anjana Wijekoon
- Abstract summary: Counterfactual Explanations (cf-XAI) describe the smallest changes in feature values necessary to change an outcome from one class to another.
Many cf-XAI methods neglect the feasibility of those changes.
We introduce a novel approach for presenting cf-XAI in natural language (Natural-XAI).
- Score: 0.5003525838309206
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Counterfactual Explanations (cf-XAI) describe the smallest changes in feature
values necessary to change an outcome from one class to another. However, many
cf-XAI methods neglect the feasibility of those changes. In this paper, we
introduce a novel approach for presenting cf-XAI in natural language
(Natural-XAI), giving careful consideration to actionable and comprehensible
aspects while remaining cognizant of immutability and ethical concerns. We
present three contributions to this endeavor. Firstly, through a user study, we
identify two types of themes present in cf-XAI composed by humans:
content-related, focusing on how features and their values are included from
both the counterfactual and the query perspectives; and structure-related,
focusing on the structure and terminology used for describing necessary value
changes. Secondly, we introduce a feature actionability taxonomy with four
clearly defined categories, to streamline the explanation presentation process.
Using insights from the user study and our taxonomy, we created a generalisable
template-based natural language generation (NLG) method compatible with
existing explainers like DICE, NICE, and DisCERN, to produce counterfactuals
that address the aforementioned limitations of existing approaches. Finally, we
conducted a second user study to assess the performance of our taxonomy-guided
NLG templates on three domains. Our findings show that the taxonomy-guided
Natural-XAI approach (n-XAI^T) received higher user ratings across all
dimensions, with significantly improved results in the majority of the domains
assessed for articulation, acceptability, feasibility, and sensitivity
dimensions.
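The pipeline the abstract describes, filtering a counterfactual's feature changes through an actionability taxonomy and rendering the survivors with sentence templates, can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the abstract does not name the four taxonomy categories, so the labels used here (actionable, mutable, immutable, sensitive) and the template wording are hypothetical.

```python
from enum import Enum

class Actionability(Enum):
    """Hypothetical four-category feature actionability taxonomy;
    the abstract does not name the categories, so these are assumed."""
    ACTIONABLE = "actionable"  # the user can change this feature directly
    MUTABLE = "mutable"        # changes over time, but is not directly controllable
    IMMUTABLE = "immutable"    # cannot change (e.g. place of birth)
    SENSITIVE = "sensitive"    # ethically inappropriate to suggest changing

# Assumed per-category sentence templates, in the spirit of the paper's
# structure-related themes (how necessary value changes are phrased).
TEMPLATES = {
    Actionability.ACTIONABLE: "changing {feature} from {old} to {new}",
    Actionability.MUTABLE: "waiting until {feature} reaches {new}",
    Actionability.IMMUTABLE: None,  # never suggest changing immutable features
    Actionability.SENSITIVE: None,  # excluded on ethical grounds
}

def explain(query, counterfactual, taxonomy, outcome):
    """Render a counterfactual (e.g. produced by DiCE, NICE, or DisCERN)
    as one sentence, keeping only feasible changes per the taxonomy."""
    clauses = []
    for feature, new_value in counterfactual.items():
        old_value = query[feature]
        if old_value == new_value:
            continue  # feature unchanged; nothing to say
        template = TEMPLATES[taxonomy[feature]]
        if template is None:
            continue  # skip immutable / sensitive features
        clauses.append(template.format(feature=feature, old=old_value, new=new_value))
    if not clauses:
        return "No feasible changes would alter the outcome."
    return "The outcome would change to '{}' by {}.".format(outcome, " and ".join(clauses))
```

For a loan-approval query such as `{"income": 30000, "age": 45}` with counterfactual `{"income": 42000, "age": 38}`, the age change is suppressed as immutable and only the income change is verbalised, which is the feasibility filtering the abstract argues many cf-XAI methods neglect.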
Related papers
- Beyond Coarse-Grained Matching in Video-Text Retrieval [50.799697216533914]
We introduce a new approach for fine-grained evaluation.
Our approach can be applied to existing datasets by automatically generating hard negative test captions.
Experiments on our fine-grained evaluations demonstrate that this approach enhances a model's ability to understand fine-grained differences.
arXiv Detail & Related papers (2024-10-16T09:42:29Z)
- SCENE: Evaluating Explainable AI Techniques Using Soft Counterfactuals [0.0]
This paper introduces SCENE (Soft Counterfactual Evaluation for Natural language Explainability), a novel evaluation method.
By focusing on token-based substitutions, SCENE creates contextually appropriate and semantically meaningful Soft Counterfactuals.
SCENE provides valuable insights into the strengths and limitations of various XAI techniques.
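The core check behind token-based substitution of this kind can be sketched roughly as follows. This is an assumption-laden illustration of the idea, not SCENE's implementation: SCENE draws contextually appropriate substitutes from a language model, whereas here the candidate list and the toy lexicon classifier are stand-ins.

```python
POSITIVE = {"great", "excellent", "wonderful"}

def toy_sentiment(text):
    """Stand-in classifier: positive iff any positive word appears."""
    return "pos" if any(w in POSITIVE for w in text.split()) else "neg"

def soft_counterfactuals(tokens, position, candidates, classify):
    """Return (substitute, new_label) pairs where a single-token
    substitution flips the classifier's prediction. `candidates` stands
    in for the contextually appropriate substitutes a masked language
    model would propose."""
    original_label = classify(" ".join(tokens))
    flips = []
    for cand in candidates:
        edited = tokens[:position] + [cand] + tokens[position + 1:]
        if classify(" ".join(edited)) != original_label:
            flips.append((cand, classify(" ".join(edited))))
    return flips
```

On `"the food was great"`, substituting position 3 with `"terrible"` or `"bland"` flips the toy prediction, while `"excellent"` does not, which is the kind of label-flip evidence such an evaluation method can feed back about an XAI technique's rationale.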
arXiv Detail & Related papers (2024-08-08T16:36:24Z)
- Optimal and efficient text counterfactuals using Graph Neural Networks [1.9939549451457024]
We propose a framework that achieves the aforementioned by generating semantically edited inputs, known as counterfactual interventions.
We test our framework on two NLP tasks - binary sentiment classification and topic classification - and show that the generated edits are contrastive, fluent and minimal.
arXiv Detail & Related papers (2024-08-04T09:09:13Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
arXiv Detail & Related papers (2023-11-14T08:51:00Z)
- Prompting or Fine-tuning? A Comparative Study of Large Language Models for Taxonomy Construction [0.8670827427401335]
We present a general framework for taxonomy construction that takes into account structural constraints.
We compare the prompting and fine-tuning approaches performed on a hypernym taxonomy and a novel computer science taxonomy dataset.
arXiv Detail & Related papers (2023-09-04T16:53:17Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- BERT-ERC: Fine-tuning BERT is Enough for Emotion Recognition in Conversation [19.663265448700002]
Previous works on emotion recognition in conversation (ERC) follow a two-step paradigm.
We propose a novel paradigm, i.e., exploring contextual information and dialogue structure information in the fine-tuning step.
We develop our model BERT-ERC according to the proposed paradigm, which improves ERC performance in three aspects.
arXiv Detail & Related papers (2023-01-17T08:03:32Z)
- INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations [58.062003028768636]
Current XAI approaches only focus on delivering a single explanation.
This paper proposes a generative XAI framework, INTERACTION (explaIn aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder)
Our novel framework presents explanation in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation.
arXiv Detail & Related papers (2022-09-02T13:52:39Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.