Causality-based Counterfactual Explanation for Classification Models
- URL: http://arxiv.org/abs/2105.00703v3
- Date: Sun, 26 Mar 2023 09:42:54 GMT
- Title: Causality-based Counterfactual Explanation for Classification Models
- Authors: Tri Dung Duong, Qian Li, Guandong Xu
- Abstract summary: We propose a prototype-based counterfactual explanation framework (ProCE).
ProCE is capable of preserving the causal relationship underlying the features of the counterfactual data.
In addition, we design a novel gradient-free optimization based on a multi-objective genetic algorithm that generates the counterfactual explanations.
- Score: 11.108866104714627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterfactual explanation is one branch of interpretable machine learning
that produces a perturbation sample to change the model's original decision.
The generated samples can act as a recommendation for end-users to achieve
their desired outputs. Most current counterfactual explanation approaches are
gradient-based methods, which can only optimize differentiable loss functions
over continuous variables. Accordingly, gradient-free methods have been
proposed to handle categorical variables; however, they have several major
limitations: 1) causal relationships among features are typically ignored when
generating the counterfactuals, possibly resulting in impractical guidelines
for decision-makers; 2) the counterfactual explanation algorithm requires a
great deal of parameter tuning to determine the optimal weight for each loss
function, which must be repeated for different datasets and settings. In this
work, to address the
above limitations, we propose a prototype-based counterfactual explanation
framework (ProCE). ProCE is capable of preserving the causal relationship
underlying the features of the counterfactual data. In addition, we design a
novel gradient-free optimization based on a multi-objective genetic algorithm
that generates counterfactual explanations for mixed continuous and categorical
features. Numerical experiments demonstrate that our method compares favorably
with state-of-the-art methods and is therefore applicable to existing
prediction models. All source code and data are available at
\url{https://github.com/tridungduong16/multiobj-scm-cf}.
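As a reading aid, here is a minimal sketch of the kind of gradient-free, multi-objective genetic search the abstract describes. It assumes a black-box classifier exposing a predict_proba-style callable, feature vectors with continuous dimensions first and integer-coded categorical dimensions after, and only two illustrative objectives (flipping the prediction and staying close to the original instance); the prototype guidance and causal-constraint terms of ProCE are not reproduced here.

```python
# Minimal sketch of a gradient-free, multi-objective genetic search for
# counterfactuals over mixed continuous/categorical features.
# The classifier, feature layout, and objectives are illustrative assumptions,
# not the ProCE implementation.
import numpy as np

rng = np.random.default_rng(0)

def random_candidate(cont_bounds, cat_choices):
    """One candidate: continuous parts uniform in bounds, categorical parts (integer codes) from choices."""
    cont = np.array([rng.uniform(lo, hi) for lo, hi in cont_bounds])
    cat = np.array([rng.choice(c) for c in cat_choices])
    return np.concatenate([cont, cat])

def objectives(x, x_orig, clf_proba, target_class):
    """Two objectives to minimize: (1) probability of NOT being the target class, (2) L1 distance to the original instance."""
    f1 = 1.0 - clf_proba(x.reshape(1, -1))[0, target_class]
    f2 = np.abs(x - x_orig).sum()
    return np.array([f1, f2])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def pareto_front(scores):
    """Indices of non-dominated candidates."""
    return [i for i, s in enumerate(scores)
            if not any(dominates(t, s) for j, t in enumerate(scores) if j != i)]

def genetic_counterfactuals(x_orig, clf_proba, target_class,
                            cont_bounds, cat_choices,
                            pop_size=50, generations=100, mut_rate=0.2):
    n_cont = len(cont_bounds)
    pop = [random_candidate(cont_bounds, cat_choices) for _ in range(pop_size)]
    for _ in range(generations):
        scores = [objectives(x, x_orig, clf_proba, target_class) for x in pop]
        parents = [pop[i] for i in pareto_front(scores)]   # keep the non-dominated set
        children = []
        while len(children) < pop_size:
            a, b = rng.choice(len(parents), 2)
            mask = rng.random(len(x_orig)) < 0.5            # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            for j in range(len(child)):                     # mutation
                if rng.random() < mut_rate:
                    if j < n_cont:
                        lo, hi = cont_bounds[j]
                        child[j] = np.clip(child[j] + rng.normal(0, 0.1 * (hi - lo)), lo, hi)
                    else:
                        child[j] = rng.choice(cat_choices[j - n_cont])
            children.append(child)
        pop = children
    scores = [objectives(x, x_orig, clf_proba, target_class) for x in pop]
    return [pop[i] for i in pareto_front(scores)]           # final Pareto set of counterfactuals
```

In the full method, the additional losses described in the paper (prototype guidance and causal-constraint terms) would presumably enter as extra entries in the objective vector and be handled by the same Pareto-based selection.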
Related papers
- S-CFE: Simple Counterfactual Explanations [21.975560789792073]
We tackle the problem of finding manifold-aligned counterfactual explanations for sparse data.
Our approach effectively produces sparse, manifold-aligned counterfactual explanations.
arXiv Detail & Related papers (2024-10-21T07:42:43Z)
- CeFlow: A Robust and Efficient Counterfactual Explanation Framework for Tabular Data using Normalizing Flows [11.108866104714627]
Counterfactual explanation is a form of interpretable machine learning that generates perturbations on a sample to achieve the desired outcome.
State-of-the-art counterfactual explanation methods use variational autoencoders (VAEs) to achieve promising improvements.
We design a robust and efficient counterfactual explanation framework, namely CeFlow, which utilizes normalizing flows for mixed continuous and categorical features.
arXiv Detail & Related papers (2023-03-26T09:51:04Z)
- Bayesian Hierarchical Models for Counterfactual Estimation [12.159830463756341]
We propose a probabilistic paradigm to estimate a diverse set of counterfactuals.
We treat the perturbations as random variables endowed with prior distribution functions.
A gradient-based sampler with superior convergence characteristics efficiently computes the posterior samples.
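A minimal sketch of the idea of treating counterfactual perturbations as random variables with a prior and sampling from the resulting posterior; a plain random-walk Metropolis sampler stands in for the paper's gradient-based sampler, and the classifier and hyperparameters are illustrative assumptions.

```python
# Sketch: sample perturbations delta from an unnormalized posterior
#   p(delta | target) ∝ prior(delta) * likelihood(classifier reaches the target class at x + delta).
# The paper uses a gradient-based sampler; random-walk Metropolis is shown only for brevity.
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(delta, x, clf_proba, target_class, prior_scale=1.0):
    log_prior = -0.5 * np.sum((delta / prior_scale) ** 2)          # isotropic Gaussian prior on the perturbation
    p_target = clf_proba((x + delta).reshape(1, -1))[0, target_class]
    log_lik = np.log(p_target + 1e-12)                             # rewards reaching the target class
    return log_prior + log_lik

def sample_perturbations(x, clf_proba, target_class, n_samples=500, step=0.05):
    delta = np.zeros_like(x)
    current = log_posterior(delta, x, clf_proba, target_class)
    samples = []
    for _ in range(n_samples):
        proposal = delta + rng.normal(0.0, step, size=delta.shape)
        cand = log_posterior(proposal, x, clf_proba, target_class)
        if np.log(rng.random()) < cand - current:                   # Metropolis accept/reject
            delta, current = proposal, cand
        samples.append(x + delta)                                   # diverse counterfactual candidates
    return np.array(samples)
```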
arXiv Detail & Related papers (2023-01-21T00:21:11Z)
- CEnt: An Entropy-based Model-agnostic Explainability Framework to Contrast Classifiers' Decisions [2.543865489517869]
We present a novel approach to locally contrast the prediction of any classifier.
Our Contrastive Entropy-based explanation method, CEnt, approximates a model locally by a decision tree to compute entropy information of different feature splits.
CEnt is the first non-gradient-based contrastive method generating diverse counterfactuals that do not necessarily exist in the training data, while satisfying immutability (e.g., race) and semi-immutability (e.g., age can only change in an increasing direction).
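A minimal sketch of the local-surrogate step: approximate the black-box model around an instance with an entropy-based decision tree and read off the splits on the instance's decision path as candidate contrastive changes. The sampling scheme, neighborhood size, and predict function are illustrative assumptions, not CEnt's actual procedure.

```python
# Sketch: fit an entropy-criterion decision tree to a local neighborhood labeled
# by the black-box model, then collect the (feature, threshold) splits on the
# instance's decision path; crossing such a threshold is a candidate contrastive change.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def local_tree_explanation(x, predict_fn, n_samples=2000, scale=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # Sample a local neighborhood around x and label it with the black-box model.
    X_local = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    y_local = predict_fn(X_local)
    # Fit an entropy-based surrogate tree locally.
    tree = DecisionTreeClassifier(criterion="entropy", max_depth=4).fit(X_local, y_local)
    # Walk the instance's decision path and collect its feature splits.
    node_ids = tree.decision_path(x.reshape(1, -1)).indices
    splits = []
    for node in node_ids:
        feat = tree.tree_.feature[node]
        if feat >= 0:  # internal node (leaf nodes have feature == -2)
            splits.append((feat, tree.tree_.threshold[node]))
    return tree, splits
```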
arXiv Detail & Related papers (2023-01-19T08:23:34Z)
- Fine-grained Retrieval Prompt Tuning [149.9071858259279]
Fine-grained Retrieval Prompt Tuning steers a frozen pre-trained model to perform the fine-grained retrieval task from the perspectives of sample prompt and feature adaptation.
Our FRPT achieves state-of-the-art performance on three widely used fine-grained datasets with fewer learnable parameters.
arXiv Detail & Related papers (2022-07-29T04:10:04Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework for Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, with better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
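A minimal usage sketch of the released ngboost package's interface for the scalar Normal case; the multivariate extension described in the paper follows the same fit / pred_dist pattern. The dataset and hyperparameters are illustrative.

```python
# Sketch: probabilistic regression with NGBoost (scalar Normal target shown).
from ngboost import NGBRegressor
from ngboost.distns import Normal
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ngb = NGBRegressor(Dist=Normal, n_estimators=300, learning_rate=0.03)
ngb.fit(X_tr, y_tr)

point_pred = ngb.predict(X_te)      # mean of the predictive distribution
dist_pred = ngb.pred_dist(X_te)     # full predictive distribution (loc/scale per sample)
print(dist_pred.params["loc"][:3], dist_pred.params["scale"][:3])
```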
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- Model-agnostic and Scalable Counterfactual Explanations via Reinforcement Learning [0.5729426778193398]
We propose a deep reinforcement learning approach that transforms the optimization procedure into an end-to-end learnable process.
Our experiments on real-world data show that our method is model-agnostic, relying only on feedback from model predictions.
arXiv Detail & Related papers (2021-06-04T16:54:36Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
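A minimal sketch of classifier-based likelihood-to-evidence ratio estimation: a classifier is trained to distinguish samples from the joint p(theta, x) from samples of the product of marginals, and its odds approximate the ratio p(x | theta) / p(x). The toy simulator, prior, and MLP classifier below are illustrative stand-ins, not the paper's estimator.

```python
# Sketch: amortized likelihood-to-evidence ratio via a binary classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def prior(n):                      # toy prior over a scalar parameter
    return rng.normal(0.0, 1.0, size=(n, 1))

def simulator(theta):              # toy simulator: x | theta ~ N(theta, 0.5^2)
    return theta + rng.normal(0.0, 0.5, size=theta.shape)

n = 20000
theta = prior(n)
x = simulator(theta)

joint = np.hstack([theta, x])                        # label 1: dependent (theta, x) pairs
marginal = np.hstack([theta, rng.permutation(x)])    # label 0: shuffled pairs ~ p(theta)p(x)
X = np.vstack([joint, marginal])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)

def log_ratio(theta_val, x_obs):
    """Approximate log p(x_obs | theta_val) - log p(x_obs) from the classifier's odds."""
    p = clf.predict_proba(np.array([[theta_val, x_obs]]))[0, 1]
    return np.log(p) - np.log(1.0 - p)
```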
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Implicit differentiation of Lasso-type models for hyperparameter optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
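A minimal sketch of the implicit-differentiation idea for the Lasso hypergradient with respect to the regularization strength: on the active set of the solution, the KKT conditions give a closed-form Jacobian, which yields the gradient of a validation loss. The naive linear solve below is for illustration only (the paper's algorithm avoids explicit matrix inversion and exploits sparsity), and the data and alpha are illustrative.

```python
# Sketch: hypergradient dL/dalpha of a validation loss for sklearn's Lasso objective
#   (1 / (2n)) * ||y - X beta||^2 + alpha * ||beta||_1
# via implicit differentiation of the KKT conditions on the active set S:
#   d beta_S / d alpha = -n * (X_S^T X_S)^{-1} sign(beta_S)
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=50, n_informative=10,
                       noise=1.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
n = X_tr.shape[0]
alpha = 0.5

beta = Lasso(alpha=alpha, fit_intercept=False).fit(X_tr, y_tr).coef_
S = np.flatnonzero(beta)                       # active set of the solution

X_S = X_tr[:, S]
dbeta_S = -n * np.linalg.solve(X_S.T @ X_S, np.sign(beta[S]))

# Hypergradient of the validation loss  L(alpha) = 0.5 * ||y_val - X_val beta(alpha)||^2
residual = y_val - X_val @ beta
hypergrad = -(residual @ X_val[:, S]) @ dbeta_S
print("dL/dalpha:", hypergrad)
```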
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.