CeFlow: A Robust and Efficient Counterfactual Explanation Framework for
Tabular Data using Normalizing Flows
- URL: http://arxiv.org/abs/2303.14668v1
- Date: Sun, 26 Mar 2023 09:51:04 GMT
- Title: CeFlow: A Robust and Efficient Counterfactual Explanation Framework for
Tabular Data using Normalizing Flows
- Authors: Tri Dung Duong, Qian Li, Guandong Xu
- Abstract summary: Counterfactual explanation is a form of interpretable machine learning that generates perturbations on a sample to achieve the desired outcome.
State-of-the-art counterfactual explanation methods use variational autoencoders (VAEs) to achieve promising improvements.
We design a robust and efficient counterfactual explanation framework, namely CeFlow, which utilizes normalizing flows for mixed-type continuous and categorical features.
- Score: 11.108866104714627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterfactual explanation is a form of interpretable machine learning that
generates perturbations on a sample to achieve the desired outcome. The
generated samples can act as instructions to guide end users on how to observe
the desired results by altering samples. Although state-of-the-art
counterfactual explanation methods are proposed to use variational autoencoder
(VAE) to achieve promising improvements, they suffer from two major
limitations: 1) the counterfactual generation is prohibitively slow, which
prevents algorithms from being deployed in interactive environments; 2) the
counterfactual explanation algorithms produce unstable results due to the
randomness in the sampling procedure of the variational autoencoder. In this work,
to address the above limitations, we design a robust and efficient
counterfactual explanation framework, namely CeFlow, which utilizes normalizing
flows for mixed-type continuous and categorical features. Numerical
experiments demonstrate that our technique compares favorably to
state-of-the-art methods. We release our source code at
https://github.com/tridungduong16/fairCE.git for reproducing the results.
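The core idea described in the abstract (map a sample into the latent space of an invertible flow, shift it toward the target class, then map it back) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy affine flow and the class-conditional latent mean are stand-ins for a trained normalizing flow and its learned latent prior.

```python
import numpy as np

class AffineFlow:
    """Toy invertible map: z = (x - shift) / scale, x = z * scale + shift.

    A real normalizing flow composes many such invertible layers; a single
    affine layer is used here only to keep the sketch self-contained.
    """
    def __init__(self, scale, shift):
        self.scale = np.asarray(scale, dtype=float)
        self.shift = np.asarray(shift, dtype=float)

    def forward(self, x):   # data -> latent
        return (x - self.shift) / self.scale

    def inverse(self, z):   # latent -> data
        return z * self.scale + self.shift

def counterfactual(flow, x, latent_mean_target, alpha=0.5):
    """Interpolate the latent code toward the target-class latent mean,
    then invert back to data space.

    Because the flow is deterministic and invertible, the same input
    always yields the same counterfactual (no VAE-style sampling noise).
    """
    z = flow.forward(x)
    z_cf = (1 - alpha) * z + alpha * latent_mean_target
    return flow.inverse(z_cf)

flow = AffineFlow(scale=[2.0, 0.5], shift=[1.0, -1.0])
x = np.array([3.0, 0.0])
mu_target = np.array([5.0, 2.0])   # hypothetical latent mean of the desired class
x_cf = counterfactual(flow, x, mu_target, alpha=1.0)
# alpha=1.0 maps exactly onto the target latent mean:
# inverse([5, 2]) = [5*2+1, 2*0.5-1] = [11.0, 0.0]
```

Since the mapping is deterministic, repeated calls produce identical counterfactuals, which is the stability property the abstract contrasts with VAE-based sampling.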
Related papers
- Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders [101.42201747763178]
Unlearnable examples (UEs) seek to maximize testing error by making subtle modifications to training examples that are correctly labeled.
Our work provides a novel disentanglement mechanism to build an efficient pre-training purification method.
arXiv Detail & Related papers (2024-05-02T16:49:25Z)
- Model-Based Counterfactual Explanations Incorporating Feature Space Attributes for Tabular Data [1.565361244756411]
Machine-learning models accurately predict patterns from large datasets.
Counterfactual explanations, methods that explain predictions by introducing input perturbations, are prominent.
Current techniques require resolving the optimization problems for each input change, rendering them computationally expensive.
arXiv Detail & Related papers (2024-04-20T01:14:19Z)
- Revisiting Edge Perturbation for Graph Neural Network in Graph Data Augmentation and Attack [58.440711902319855]
Edge perturbation is a method to modify graph structures.
It can be categorized into two veins based on its effects on the performance of graph neural networks (GNNs).
We propose a unified formulation and establish a clear boundary between two categories of edge perturbation methods.
arXiv Detail & Related papers (2024-03-10T15:50:04Z)
- Guided Flows for Generative Modeling and Decision Making [55.42634941614435]
We show that Guided Flows significantly improves the sample quality in conditional image generation and zero-shot text-to-speech synthesis.
Notably, we are the first to apply flow models for plan generation in the offline reinforcement learning setting, with a speedup in computation compared to diffusion models.
arXiv Detail & Related papers (2023-11-22T15:07:59Z)
- Approximating Score-based Explanation Techniques Using Conformal Regression [0.1843404256219181]
Score-based explainable machine-learning techniques are often used to understand the logic behind black-box models.
We propose and investigate the use of computationally less costly regression models for approximating the output of score-based explanation techniques, such as SHAP.
We present results from a large-scale empirical investigation, in which the approximate explanations generated by our proposed models are evaluated with respect to efficiency.
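The workflow of approximating an expensive score-based explainer with a cheap surrogate can be sketched as follows. This is an assumption-laden toy: the paper uses conformal regression to approximate explainers such as SHAP, while here a plain least-squares fit and a synthetic stand-in explainer illustrate only the overall idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_explainer(X):
    """Hypothetical stand-in for a costly score-based explainer (e.g. SHAP).

    For illustration, the 'true' attribution of feature 0 is 2*x0 and of
    feature 1 is -x1; a real explainer would be far more expensive to query.
    """
    return X * np.array([2.0, -1.0])

# Collect expensive explanations once, on a training set.
X_train = rng.normal(size=(200, 2))
S_train = expensive_explainer(X_train)

# Fit a cheap linear surrogate mapping inputs to explanation scores:
# solve min ||X_train @ W - S_train|| by least squares.
W, *_ = np.linalg.lstsq(X_train, S_train, rcond=None)

# New explanations now cost a single matrix product.
X_new = np.array([[1.0, 1.0]])
S_approx = X_new @ W
```

Because the stand-in explainer is exactly linear, the surrogate recovers it; for real explainers the paper's conformal-regression machinery additionally quantifies the approximation error.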
arXiv Detail & Related papers (2023-08-23T07:50:43Z)
- Sample and Predict Your Latent: Modality-free Sequential Disentanglement via Contrastive Estimation [2.7759072740347017]
We introduce a self-supervised sequential disentanglement framework based on contrastive estimation with no external signals.
In practice, we propose a unified, efficient, and easy-to-code sampling strategy for semantically similar and dissimilar views of the data.
Our method presents state-of-the-art results in comparison to existing techniques.
arXiv Detail & Related papers (2023-05-25T10:50:30Z)
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have been applied to sequential recommendation.
GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
- DiffTAD: Temporal Action Detection with Proposal Denoising Diffusion [137.8749239614528]
We propose a new formulation of temporal action detection (TAD) with denoising diffusion, DiffTAD.
Taking random temporal proposals as input, it can accurately yield action proposals from an untrimmed long video.
arXiv Detail & Related papers (2023-03-27T00:40:52Z)
- VCNet: A self-explaining model for realistic counterfactual generation [52.77024349608834]
Counterfactual explanation is a class of methods to make local explanations of machine learning decisions.
We present VCNet (Variational Counter Net), a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able to both generate predictions, and to generate counterfactual explanations without having to solve another minimisation problem.
arXiv Detail & Related papers (2022-12-21T08:45:32Z)
- Causality-based Counterfactual Explanation for Classification Models [11.108866104714627]
We propose a prototype-based counterfactual explanation framework (ProCE)
ProCE is capable of preserving the causal relationship underlying the features of the counterfactual data.
In addition, we design a novel gradient-free optimization based on the multi-objective genetic algorithm that generates the counterfactual explanations.
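A gradient-free counterfactual search can be sketched with a simple genetic algorithm. This toy is single-objective and omits the causal constraints and multi-objective selection that ProCE adds; the classifier and fitness function are hypothetical stand-ins.

```python
import random

random.seed(0)

def classifier(x):
    """Stand-in black-box model: class 1 iff x0 + x1 > 2."""
    return 1 if x[0] + x[1] > 2 else 0

def fitness(x, x_orig, target=1):
    """Smaller is better: stay close to the original sample, but pay a
    large penalty while the candidate is not in the target class."""
    dist = sum((a - b) ** 2 for a, b in zip(x, x_orig))
    penalty = 0.0 if classifier(x) == target else 100.0
    return dist + penalty

def genetic_counterfactual(x_orig, pop_size=50, gens=100, sigma=0.3):
    """Mutation-and-selection search: no gradients of the model are used."""
    pop = [[v + random.gauss(0, 1) for v in x_orig] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda x: fitness(x, x_orig))
        parents = pop[: pop_size // 2]           # keep the fittest half
        children = [[v + random.gauss(0, sigma)  # mutate random parents
                     for v in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=lambda x: fitness(x, x_orig))

x = [0.5, 0.5]                      # classified as 0 by the stand-in model
cf = genetic_counterfactual(x)      # small perturbation that flips the class
```

Because only model outputs enter the fitness function, this style of search applies to any black-box classifier; a multi-objective variant would keep a Pareto front of (distance, validity, causal-plausibility) trade-offs instead of a single scalar fitness.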
arXiv Detail & Related papers (2021-05-03T09:25:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.