CeFlow: A Robust and Efficient Counterfactual Explanation Framework for
Tabular Data using Normalizing Flows
- URL: http://arxiv.org/abs/2303.14668v1
- Date: Sun, 26 Mar 2023 09:51:04 GMT
- Title: CeFlow: A Robust and Efficient Counterfactual Explanation Framework for
Tabular Data using Normalizing Flows
- Authors: Tri Dung Duong, Qian Li, Guandong Xu
- Abstract summary: Counterfactual explanation is a form of interpretable machine learning that generates perturbations on a sample to achieve the desired outcome.
State-of-the-art counterfactual explanation methods use variational autoencoders (VAEs) and achieve promising improvements.
We design a robust and efficient counterfactual explanation framework, namely CeFlow, which utilizes normalizing flows to handle mixed continuous and categorical features.
- Score: 11.108866104714627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterfactual explanation is a form of interpretable machine learning that generates perturbations on a sample to achieve the desired outcome. The generated samples can act as instructions that guide end users toward the desired results by showing which features to alter. Although state-of-the-art counterfactual explanation methods adopt variational autoencoders (VAEs) and achieve promising improvements, they suffer from two major limitations: 1) counterfactual generation is prohibitively slow, which prevents the algorithms from being deployed in interactive environments; 2) the counterfactual explanation algorithms produce unstable results due to the randomness in the sampling procedure of the variational autoencoder. In this work, to address the above limitations, we design a robust and efficient counterfactual explanation framework, namely CeFlow, which utilizes normalizing flows for mixed continuous and categorical features. Numerical experiments demonstrate that our technique compares favorably to state-of-the-art methods. We release our source code at https://github.com/tridungduong16/fairCE.git for reproducing the results.
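The abstract does not spell out the generation procedure, so the sketch below is only a rough illustration of the flow-based idea, not the CeFlow implementation: a pre-trained normalizing flow (here a hypothetical `flow` object with `forward` and `inverse` transforms) encodes an instance into latent space, shifts it toward the latent centroid of the target class, and decodes the result. `X_train`, `y_train`, `target_class`, and the step size `alpha` are likewise illustrative.

```python
import numpy as np

# Minimal sketch of flow-based counterfactual generation (illustrative,
# not the CeFlow code). `flow.forward` maps feature rows to latent rows;
# `flow.inverse` maps latent rows back to feature space.

def flow_counterfactual(flow, x, X_train, y_train, target_class, alpha=1.0):
    """Shift instance x toward target_class in the flow's latent space."""
    z = flow.forward(x)                              # encode the query instance
    z_class = flow.forward(X_train[y_train == target_class])
    z_target = np.mean(z_class, axis=0)              # target-class latent centroid
    z_cf = z + alpha * (z_target - z)                # deterministic latent shift
    return flow.inverse(z_cf)                        # decode back to feature space
```

Because a normalizing flow is deterministic and invertible, repeated runs of such a procedure return the same counterfactual, in line with the stability argument the abstract makes against VAE sampling.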
Related papers
- F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI [15.314388210699443]
Fine-tuned Fidelity (F-Fidelity) is a robust evaluation framework for XAI.
We show that F-Fidelity significantly improves upon prior evaluation metrics in recovering the ground-truth ranking of explainers.
We also show that given a faithful explainer, F-Fidelity metric can be used to compute the sparsity of influential input components.
arXiv Detail & Related papers (2024-10-03T20:23:06Z)
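The exact F-Fidelity protocol is not described in this summary; purely as an illustration of the masking-style fidelity checks this line of work builds on, the sketch below removes the k most-attributed features and measures the prediction drop. `model`, `attributions`, and `baseline` are hypothetical names.

```python
import numpy as np

# Generic masking-based fidelity check (illustrative; not the exact
# F-Fidelity procedure). `model` maps a feature vector to a scalar score;
# `attributions` assigns each feature an importance value.

def fidelity_drop(model, x, attributions, k, baseline=0.0):
    """Score drop after masking the k most influential features."""
    top_k = np.argsort(np.abs(attributions))[-k:]   # indices of top-k features
    x_masked = x.copy()
    x_masked[top_k] = baseline                      # overwrite with a neutral value
    return model(x) - model(x_masked)               # larger drop => more faithful
```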
- Bidirectional Decoding: Improving Action Chunking via Closed-Loop Resampling [51.38330727868982]
Bidirectional Decoding (BID) is a test-time inference algorithm that bridges action chunking with closed-loop operations.
We show that BID boosts the performance of two state-of-the-art generative policies across seven simulation benchmarks and two real-world tasks.
arXiv Detail & Related papers (2024-08-30T15:39:34Z)
- Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders [101.42201747763178]
Unlearnable examples (UEs) seek to maximize testing error by making subtle modifications to training examples that are correctly labeled.
Our work provides a novel disentanglement mechanism to build an efficient pre-training purification method.
arXiv Detail & Related papers (2024-05-02T16:49:25Z)
- Model-Based Counterfactual Explanations Incorporating Feature Space Attributes for Tabular Data [1.565361244756411]
Machine-learning models accurately predict patterns from large datasets.
Counterfactual explanations, methods that explain predictions by introducing input perturbations, are prominent.
Current techniques require solving an optimization problem for each input change, rendering them computationally expensive.
arXiv Detail & Related papers (2024-04-20T01:14:19Z)
- Revisiting Edge Perturbation for Graph Neural Network in Graph Data Augmentation and Attack [58.440711902319855]
Edge perturbation is a method to modify graph structures.
Edge perturbation methods can be categorized into two veins based on their effects on the performance of graph neural networks (GNNs).
We propose a unified formulation and establish a clear boundary between two categories of edge perturbation methods.
arXiv Detail & Related papers (2024-03-10T15:50:04Z)
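As a concrete (and hypothetical) illustration of edge perturbation, the sketch below flips a fraction of the entries of a dense adjacency matrix; small random rates correspond to the augmentation vein, while targeted or aggressive flipping corresponds to the attack vein.

```python
import numpy as np

# Illustrative edge perturbation on a dense, undirected adjacency matrix.

def perturb_edges(adj, rate, seed=0):
    """Flip a fraction `rate` of the possible undirected edges in `adj`."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    i, j = np.triu_indices(n, k=1)                     # candidate undirected edges
    flip = rng.random(i.size) < rate                   # edges chosen for toggling
    out = adj.copy()
    out[i[flip], j[flip]] = 1 - out[i[flip], j[flip]]  # toggle upper triangle
    out[j[flip], i[flip]] = out[i[flip], j[flip]]      # mirror to keep symmetry
    return out
```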
- Sample and Predict Your Latent: Modality-free Sequential Disentanglement via Contrastive Estimation [2.7759072740347017]
We introduce a self-supervised sequential disentanglement framework based on contrastive estimation with no external signals.
In practice, we propose a unified, efficient, and easy-to-code sampling strategy for semantically similar and dissimilar views of the data.
Our method achieves state-of-the-art results in comparison to existing techniques.
arXiv Detail & Related papers (2023-05-25T10:50:30Z)
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have well-known weaknesses: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
- DiffTAD: Temporal Action Detection with Proposal Denoising Diffusion [137.8749239614528]
We propose a new formulation of temporal action detection (TAD) with denoising diffusion, DiffTAD.
Taking random temporal proposals as input, it can accurately yield action proposals from an untrimmed long video.
arXiv Detail & Related papers (2023-03-27T00:40:52Z)
- VCNet: A self-explaining model for realistic counterfactual generation [52.77024349608834]
Counterfactual explanation is a class of methods to make local explanations of machine learning decisions.
We present VCNet (Variational Counter Net), a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able to both generate predictions and produce counterfactual explanations without having to solve another minimisation problem.
arXiv Detail & Related papers (2022-12-21T08:45:32Z)
- Causality-based Counterfactual Explanation for Classification Models [11.108866104714627]
We propose a prototype-based counterfactual explanation framework (ProCE).
ProCE is capable of preserving the causal relationship underlying the features of the counterfactual data.
In addition, we design a novel gradient-free optimization based on a multi-objective genetic algorithm to generate the counterfactual explanations.
arXiv Detail & Related papers (2021-05-03T09:25:59Z)
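To make the gradient-free idea in the last entry concrete, here is a deliberately simplified sketch: a scalarized, mutation-only evolutionary search rather than ProCE's multi-objective genetic algorithm with causal constraints. `predict` is assumed to return the target-class probability for each candidate row; all names are illustrative.

```python
import numpy as np

# Simplified gradient-free counterfactual search (illustrative sketch in
# the spirit of ProCE; the paper uses a multi-objective genetic algorithm
# and preserves causal relationships, which this toy version omits).

def genetic_counterfactual(predict, x, pop_size=50, gens=100, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = x + sigma * rng.standard_normal((pop_size, x.size))  # initial candidates

    def fitness(P):
        # Reward reaching the target class; penalize distance from x.
        return predict(P) - np.linalg.norm(P - x, axis=1)

    for _ in range(gens):
        elite = pop[np.argsort(fitness(pop))[-pop_size // 2:]]       # keep best half
        children = elite + sigma * rng.standard_normal(elite.shape)  # mutate elites
        pop = np.vstack([elite, children])
    return pop[np.argmax(fitness(pop))]                              # best candidate
```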
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.