Introducing User Feedback-based Counterfactual Explanations (UFCE)
- URL: http://arxiv.org/abs/2403.00011v1
- Date: Mon, 26 Feb 2024 20:09:44 GMT
- Title: Introducing User Feedback-based Counterfactual Explanations (UFCE)
- Authors: Muhammad Suffian, Jose M. Alonso-Moral, Alessandro Bogliolo
- Abstract summary: Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in XAI.
UFCE allows for the inclusion of user constraints to determine the smallest modifications in the subset of actionable features.
UFCE outperforms two well-known CE methods in terms of proximity, sparsity, and feasibility.
- Score: 49.1574468325115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models are widely used in real-world applications. However,
their complexity makes it often challenging to interpret the rationale behind
their decisions. Counterfactual explanations (CEs) have emerged as a viable
solution for generating comprehensible explanations in eXplainable Artificial
Intelligence (XAI). CE provides actionable information to users on how to
achieve the desired outcome with minimal modifications to the input. However,
current CE algorithms usually operate within the entire feature space when
optimizing changes to overturn an undesired outcome, overlooking the
identification of key contributors to the outcome and disregarding the
practicality of the suggested changes. In this study, we introduce a novel
methodology, named user feedback-based counterfactual explanation
(UFCE), which addresses these limitations and aims to bolster confidence in the
provided explanations. UFCE allows for the inclusion of user constraints to
determine the smallest modifications in the subset of actionable features while
considering feature dependence, and evaluates the practicality of suggested
changes using benchmark evaluation metrics. We conducted three experiments with
five datasets, demonstrating that UFCE outperforms two well-known CE methods in
terms of proximity, sparsity, and feasibility.
Reported results indicate that user constraints influence the generation of
feasible CEs.
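To make the core idea concrete, the sketch below is a minimal, illustrative Python example of a counterfactual search restricted to a user-chosen subset of actionable features within user-supplied bounds, together with the proximity and sparsity measures mentioned in the abstract. It is not the UFCE algorithm from the paper; the model, feature indices, and bounds are hypothetical placeholders.

```python
# Illustrative sketch only: a naive user-constrained counterfactual search.
# NOT the UFCE algorithm; it merely shows restricting changes to a user-chosen
# subset of actionable features within user-supplied bounds, and reporting
# proximity and sparsity for the resulting counterfactual.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                                  # instance with an undesired outcome
target_class = 1 - model.predict([x])[0]  # the desired (opposite) class

# Hypothetical user feedback: which features may change, and by how much.
actionable = [0, 2, 3]                    # feature indices the user allows to change
bounds = {0: (-1.0, 1.0), 2: (-0.5, 0.5), 3: (-2.0, 2.0)}

def proximity(a, b):
    """L2 distance between the counterfactual and the original instance."""
    return float(np.linalg.norm(a - b, ord=2))

def sparsity(a, b, tol=1e-6):
    """Number of features that were actually changed."""
    return int(np.sum(np.abs(a - b) > tol))

best, best_dist = None, np.inf
for _ in range(5000):                     # naive random search over allowed changes
    cand = x.copy()
    for j in actionable:
        lo, hi = bounds[j]
        cand[j] = x[j] + rng.uniform(lo, hi)
    if model.predict([cand])[0] == target_class:
        d = proximity(cand, x)
        if d < best_dist:
            best, best_dist = cand, d

if best is not None:
    print("proximity:", best_dist, "sparsity:", sparsity(best, x))
else:
    print("no counterfactual found under the given user constraints")
```

In this toy setup, tightening the bounds or shrinking the actionable subset can make a valid counterfactual unreachable, which mirrors the abstract's observation that user constraints influence whether feasible CEs can be generated.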
Related papers
- Refining Counterfactual Explanations With Joint-Distribution-Informed Shapley Towards Actionable Minimality [6.770853093478073]
Counterfactual explanations (CE) identify data points that closely resemble the observed data but produce different machine learning (ML) model outputs.
Existing CE methods often lack actionable efficiency because of unnecessary feature changes included within the explanations.
We propose a method that minimizes the required feature changes while maintaining the validity of CE.
arXiv Detail & Related papers (2024-10-07T18:31:19Z)
- Flexible and Robust Counterfactual Explanations with Minimal Satisfiable Perturbations [56.941276017696076]
We propose a conceptually simple yet effective solution named Counterfactual Explanations with Minimal Satisfiable Perturbations (CEMSP).
CEMSP constrains changing values of abnormal features with the help of their semantically meaningful normal ranges.
We conduct comprehensive experiments on both synthetic and real-world datasets to demonstrate that, compared to existing methods, our method provides more robust explanations while preserving flexibility.
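As a rough illustration of the constraint idea described above (this is not the CEMSP algorithm; the feature names and normal ranges are hypothetical), clipping candidate changes to abnormal features back into semantically meaningful normal ranges could look like this:

```python
# Illustrative sketch of the "normal range" constraint idea, not CEMSP itself:
# proposed values for abnormal features are clipped back into hypothetical,
# semantically meaningful normal ranges; other features are left untouched.
import numpy as np

normal_ranges = {                      # hypothetical per-feature normal ranges
    "heart_rate": (60.0, 100.0),
    "temperature": (36.1, 37.2),
}

def constrain(candidate: dict, abnormal: list) -> dict:
    out = dict(candidate)
    for f in abnormal:
        lo, hi = normal_ranges[f]
        out[f] = float(np.clip(out[f], lo, hi))
    return out

print(constrain({"heart_rate": 130.0, "temperature": 36.5}, ["heart_rate"]))
# heart_rate is pulled back to 100.0; temperature stays as proposed
```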
arXiv Detail & Related papers (2023-09-09T04:05:56Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- Counterfactual Explanations Using Optimization With Constraint Learning [0.0]
We propose a generic and flexible approach to counterfactual explanations using optimization with constraint learning (CE-OCL).
Specifically, we discuss how we can leverage an optimization with constraint learning framework for the generation of counterfactual explanations.
We also propose two novel modeling approaches to address data manifold closeness and diversity, which are two key criteria for practical counterfactual explanations.
arXiv Detail & Related papers (2022-09-22T13:27:21Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior [26.248879735549277]
We cast model explanation as the causal inference problem of estimating causal effects of real-world concepts on the output behavior of ML models.
We introduce CEBaB, a new benchmark dataset for assessing concept-based explanation methods in Natural Language Processing (NLP).
We use CEBaB to compare the quality of a range of concept-based explanation methods covering different assumptions and conceptions of the problem.
arXiv Detail & Related papers (2022-05-27T17:59:14Z)
- Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning [6.883906273999368]
Counterfactual explanations (CFEs) have gained traction as a psychologically grounded approach to generate post-hoc explanations.
We introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework.
As a proof of concept, we demonstrate the practical efficacy and feasibility of this approach in a user study.
arXiv Detail & Related papers (2022-05-06T17:57:05Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- DisCERN: Discovering Counterfactual Explanations using Relevance Features from Neighbourhoods [1.9706200133168679]
We show how widely adopted feature relevance-based explainers can inform DisCERN to identify the minimum subset of "actionable features".
Our results demonstrate that DisCERN is an effective strategy to minimise actionable changes necessary to create good counterfactual explanations.
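The following is a hedged sketch of the general idea of using feature relevance to pick a small actionable subset; it is not DisCERN's actual algorithm, and the relevance scores, actionability mask, and coverage threshold are hypothetical.

```python
# Illustrative sketch (not DisCERN itself): use a feature-relevance vector from
# any explainer to select the smallest "actionable" subset of features to modify.
import numpy as np

relevance = np.array([0.05, 0.40, 0.10, 0.30, 0.15])    # e.g. from an explainer
actionable_mask = np.array([True, True, False, True, True])

def minimal_actionable_subset(relevance, actionable_mask, coverage=0.6):
    """Return the fewest actionable features whose relevance mass reaches `coverage`."""
    order = np.argsort(-relevance)                        # most relevant first
    chosen, total = [], 0.0
    target = coverage * relevance[actionable_mask].sum()
    for j in order:
        if not actionable_mask[j]:
            continue
        chosen.append(int(j))
        total += relevance[j]
        if total >= target:
            break
    return chosen

print(minimal_actionable_subset(relevance, actionable_mask))  # -> [1, 3]
```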
arXiv Detail & Related papers (2021-09-13T09:25:25Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP)
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)