Counterfactual Fairness with Disentangled Causal Effect Variational
Autoencoder
- URL: http://arxiv.org/abs/2011.11878v2
- Date: Wed, 9 Dec 2020 09:46:14 GMT
- Title: Counterfactual Fairness with Disentangled Causal Effect Variational
Autoencoder
- Authors: Hyemi Kim, Seungjae Shin, JoonHo Jang, Kyungwoo Song, Weonyoung Joo,
Wanmo Kang, Il-Chul Moon
- Abstract summary: This paper proposes Disentangled Causal Effect Variational Autoencoder (DCEVAE) to solve the problem of fair classification.
We show that our method estimates the total effect and the counterfactual effect without a complete causal graph.
- Score: 26.630680698825632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of fair classification can be alleviated if we develop a
method to remove the embedded sensitive information from the classification
features. This line of work on separating sensitive information builds on
causal inference, which enables counterfactual generation to contrast the
what-if case of the opposite sensitive attribute. Alongside this causal
separation, deep latent causal models frequently assume a single latent
variable that absorbs the entire exogenous uncertainty of the causal graph.
However, we claim that such a structure cannot distinguish 1) information
caused by the intervention (i.e., the sensitive variable) from 2) information
merely correlated with the intervention in the data. Therefore, this paper
proposes the Disentangled Causal Effect Variational Autoencoder (DCEVAE) to
resolve this limitation by disentangling the exogenous uncertainty into two
latent variables: one 1) independent of interventions and the other
2) correlated with interventions without causality. In particular, our
disentangling approach preserves the latent variable correlated with
interventions when generating counterfactual examples. We show that our method
estimates the total effect and the counterfactual effect without a complete
causal graph. By adding a fairness regularization, DCEVAE generates a
counterfactually fair dataset while losing less of the original information.
DCEVAE also generates natural counterfactual images by flipping only the
sensitive information. Additionally, we theoretically show how the covariance
structures of DCEVAE and prior works differ from the perspective of latent
disentanglement.
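The core idea of the abstract, namely two exogenous latents (one independent of the sensitive attribute, one merely correlated with it) and counterfactuals generated by flipping the attribute while keeping both latents, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation; the network sizes, standard Gaussian priors, binary attribute flip, and the omission of the fairness regularizer are illustrative assumptions.
```python
# Minimal sketch of a VAE with two disentangled latents in the spirit of DCEVAE.
# Not the authors' code: network sizes, priors, and the attribute flip are
# illustrative assumptions; the paper's fairness regularizer is omitted.
import torch
import torch.nn as nn

class TwoLatentCVAE(nn.Module):
    def __init__(self, x_dim, a_dim=1, zi_dim=8, zc_dim=8):
        super().__init__()
        # z_i: latent intended to be independent of the sensitive attribute a.
        self.enc_i = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(),
                                   nn.Linear(64, 2 * zi_dim))
        # z_c: latent allowed to correlate with a (but not caused by it).
        self.enc_c = nn.Sequential(nn.Linear(x_dim + a_dim, 64), nn.ReLU(),
                                   nn.Linear(64, 2 * zc_dim))
        self.dec = nn.Sequential(nn.Linear(zi_dim + zc_dim + a_dim, 64), nn.ReLU(),
                                 nn.Linear(64, x_dim))

    @staticmethod
    def reparam(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

    def encode(self, x, a):
        zi, mu_i, lv_i = self.reparam(self.enc_i(x))
        zc, mu_c, lv_c = self.reparam(self.enc_c(torch.cat([x, a], dim=-1)))
        return zi, zc, (mu_i, lv_i, mu_c, lv_c)

    def forward(self, x, a):
        zi, zc, (mu_i, lv_i, mu_c, lv_c) = self.encode(x, a)
        x_hat = self.dec(torch.cat([zi, zc, a], dim=-1))
        kl = sum(-0.5 * (1 + lv - mu.pow(2) - lv.exp()).sum(-1)
                 for mu, lv in [(mu_i, lv_i), (mu_c, lv_c)])
        return x_hat, kl  # reconstruct x; a fairness penalty would be added here

    def counterfactual(self, x, a):
        # Flip the binary sensitive attribute while preserving BOTH latents,
        # including the intervention-correlated z_c, as the abstract emphasizes.
        zi, zc, _ = self.encode(x, a)
        return self.dec(torch.cat([zi, zc, 1.0 - a], dim=-1))
```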
Related papers
- Identifiability Guarantees for Causal Disentanglement from Soft Interventions [26.435199501882806]
Causal disentanglement aims to uncover a representation of data using latent variables that are interrelated through a causal model.
In this paper, we focus on the scenario where unpaired observational and interventional data are available, with each intervention changing the mechanism of a latent variable.
When the causal variables are fully observed, statistically consistent algorithms have been developed to identify the causal model under faithfulness assumptions.
arXiv Detail & Related papers (2023-07-12T15:39:39Z)
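As a toy illustration of the data setting in the entry above, the snippet below draws unpaired observational and interventional samples from a two-latent model in which a soft intervention replaces the mechanism of one latent rather than fixing its value. All mechanisms, dimensions, and the mixing function are hypothetical.
```python
# Toy data setting: unpaired observational and interventional samples, where
# a soft intervention swaps the mechanism of z2. All mechanisms and the
# mixing function g are hypothetical choices, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 5))           # unknown mixing from latents to observations

def g(z):                             # observation x = g(z)
    return np.tanh(z @ W)

def sample(n, soft_intervention=False):
    z1 = rng.normal(size=n)
    if soft_intervention:
        z2 = 0.2 * z1 + rng.normal(size=n)                 # altered mechanism for z2
    else:
        z2 = np.sin(2.0 * z1) + 0.3 * rng.normal(size=n)   # observational mechanism
    return g(np.stack([z1, z2], axis=1))

x_obs = sample(1000)                             # observational data
x_int = sample(1000, soft_intervention=True)     # unpaired interventional data
```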
- A Causal Ordering Prior for Unsupervised Representation Learning [27.18951912984905]
Causal representation learning argues that factors of variation in a dataset are, in fact, causally related.
We propose a fully unsupervised representation learning method that considers a data generation process with a latent additive noise model.
arXiv Detail & Related papers (2023-07-11T18:12:05Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Towards Causal Representation Learning and Deconfounding from Indefinite Data [17.793702165499298]
Non-statistical data (e.g., images and text) conflicts with traditional causal data in both its properties and the methods used to analyze it.
We redefine causal data from two novel perspectives and then propose three data paradigms.
We implement the above designs as a dynamic variational inference model, tailored to learn causal representation from indefinite data.
arXiv Detail & Related papers (2023-05-04T08:20:37Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
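To make the phrase "an autoregressive distribution over the space of discrete DAGs" from the entry above concrete, here is a much-simplified toy in PyTorch: candidate edges under a fixed node ordering are sampled one at a time, each conditioned on the edges drawn so far, so every sample is acyclic by construction. This is a hypothetical stand-in, not the parameterization used in the paper.
```python
# Toy autoregressive distribution over DAG adjacency matrices.
# Edges below the diagonal of a fixed node ordering are sampled one by one,
# each conditioned on the prefix of already-sampled edges.
import torch
import torch.nn as nn

class AutoregressiveDAG(nn.Module):
    def __init__(self, d, hidden=32):
        super().__init__()
        self.d = d
        self.idx = torch.tril_indices(d, d, offset=-1)   # candidate edges (i > j)
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                 # logit of the next edge

    def sample(self):
        n_edges = self.idx.shape[1]
        edges, logp = [], 0.0
        inp, h = torch.zeros(1, 1, 1), None              # dummy "start" token
        for _ in range(n_edges):
            out, h = self.gru(inp, h)
            p = torch.sigmoid(self.head(out[:, -1]))     # P(edge | previous edges)
            e = torch.bernoulli(p)
            logp = logp + torch.log(p * e + (1 - p) * (1 - e)).sum()
            edges.append(e.item())
            inp = e.view(1, 1, 1)
        adj = torch.zeros(self.d, self.d)
        adj[self.idx[0], self.idx[1]] = torch.tensor(edges)
        return adj, logp                                 # DAG sample and its log-prob
```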
- Adversarial Robustness through the Lens of Causality [105.51753064807014]
The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.
We propose to incorporate causality into mitigating adversarial vulnerability.
Our method can be seen as the first attempt to leverage causality for mitigating adversarial vulnerability.
arXiv Detail & Related papers (2021-06-11T06:55:02Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Causal Autoregressive Flows [4.731404257629232]
We highlight an intrinsic correspondence between a simple family of autoregressive normalizing flows and identifiable causal models.
We exploit the fact that autoregressive flow architectures define an ordering over variables, analogous to a causal ordering, to show that they are well-suited to performing a range of causal inference tasks.
arXiv Detail & Related papers (2020-11-04T13:17:35Z)
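A minimal two-variable affine autoregressive flow showing the correspondence highlighted in the entry above: reading the flow's variable ordering as a causal ordering x1 -> x2, the generative direction acts like a structural equation and the inverse recovers the exogenous noise. The conditioner network and dimensions are illustrative assumptions, not the paper's architecture.
```python
# Minimal affine autoregressive flow over (x1, x2) whose variable ordering
# mirrors a causal ordering x1 -> x2: x2 is an invertible function of the
# exogenous noise z2 given x1, much like a structural equation.
import torch
import torch.nn as nn

class TwoVarAffineFlow(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # Conditioner: scale and shift for x2 depend only on x1 (autoregressive).
        self.cond = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, z):                    # noise -> data (generative / SCM direction)
        z1, z2 = z[:, :1], z[:, 1:]
        x1 = z1                              # root variable: identity transform
        log_s, t = self.cond(x1).chunk(2, dim=-1)
        x2 = z2 * torch.exp(log_s) + t       # x2 := f(x1, z2)
        return torch.cat([x1, x2], dim=-1)

    def inverse(self, x):                    # data -> noise (abduction of exogenous z)
        x1, x2 = x[:, :1], x[:, 1:]
        log_s, t = self.cond(x1).chunk(2, dim=-1)
        z2 = (x2 - t) * torch.exp(-log_s)
        return torch.cat([x1, z2], dim=-1)
```
Under this reading, a counterfactual query can be sketched by inverting to recover the noise, modifying x1, and re-running the forward pass.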
- Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlations during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE-based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
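As a rough illustration of a "Causal Layer" that turns independent exogenous factors into causally related endogenous ones, here is a heavily simplified linear version in which z solves z = A^T z + eps for a learnable masked adjacency A. Everything beyond this linear core is omitted, and it should not be read as the paper's actual layer.
```python
# Much-simplified linear "causal layer": maps independent exogenous factors
# eps to causally related endogenous factors z by solving z = A^T z + eps,
# i.e., z = (I - A^T)^{-1} eps. Illustrative sketch only.
import torch
import torch.nn as nn

class LinearCausalLayer(nn.Module):
    def __init__(self, n_factors):
        super().__init__()
        # Learnable weighted adjacency; a strictly lower-triangular mask keeps
        # the implied graph acyclic in this toy version.
        self.A = nn.Parameter(torch.zeros(n_factors, n_factors))
        self.register_buffer("mask",
                             torch.tril(torch.ones(n_factors, n_factors), diagonal=-1))

    def forward(self, eps):                            # eps: (batch, n_factors)
        A = self.A * self.mask                         # masked weighted adjacency
        eye = torch.eye(A.shape[0], device=eps.device)
        # z = A^T z + eps  =>  z = (I - A^T)^{-1} eps; in row-vector form,
        # z = eps @ (I - A)^{-1}. (I - A) is unit lower triangular, so invertible.
        return eps @ torch.linalg.inv(eye - A)
```
In a VAE, such a layer would sit between the encoder's independent factors and the decoder, so the decoder receives causally structured factors.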
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences of its use.