De-Biasing Generative Models using Counterfactual Methods
- URL: http://arxiv.org/abs/2207.01575v2
- Date: Tue, 5 Jul 2022 16:18:29 GMT
- Authors: Sunay Bhat, Jeffrey Jiang, Omead Pooladzandi, Gregory Pottie
- Abstract summary: We propose a new decoder-based framework named the Causal Counterfactual Generative Model (CCGM).
Our method combines a causal latent-space VAE with specific modifications that emphasize causal fidelity.
We explore how better disentanglement of causal learning and encoding/decoding yields higher-quality causal interventions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational autoencoders (VAEs) and other generative methods have garnered
growing interest not just for their generative properties but also for the
ability to disentangle a low-dimensional latent variable space. However, few
existing generative models take causality into account. We propose a new
decoder-based framework named the Causal Counterfactual Generative Model
(CCGM), which includes a partially trainable causal layer in which a part of a
causal model can be learned without significantly impacting reconstruction
fidelity. By learning the causal relationships between image semantic labels or
tabular variables, we can analyze biases, intervene on the generative model,
and simulate new scenarios. Furthermore, by modifying the causal structure, we
can generate samples outside the domain of the original training data and use
such counterfactual models to de-bias datasets. Thus, datasets with known
biases can still be used to train the causal generative model and learn the
causal relationships, but we can produce de-biased datasets on the generative
side. Our proposed method combines a causal latent-space VAE model with
specific modifications to emphasize causal fidelity, enabling finer control over
the causal layer and the ability to learn a robust intervention framework. We
explore how better disentanglement of causal learning and encoding/decoding
generates higher causal intervention quality. We also compare our model against
similar research to demonstrate the need for explicit generative de-biasing
beyond interventions. Our initial experiments show that, relative to the
baseline, our model generates images and tabular data with high fidelity to the
causal framework and accommodates explicit de-biasing that ignores undesired
relationships in the causal data.
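The abstract's core mechanism — a causal layer that maps exogenous noise to endogenous latents through a learned adjacency matrix, supporting interventions and edge deletion — can be illustrated with a minimal linear-SCM sketch. This is an assumption-laden simplification of CCGM (the paper's actual layer sits inside a trained VAE decoder; the function names and the linear form here are illustrative only):

```python
import numpy as np

def causal_layer(eps, A):
    """Map exogenous noise eps to endogenous latents z via a linear SCM:
    z = A.T @ z + eps, i.e. z = (I - A.T)^{-1} @ eps.
    A[i, j] != 0 means latent i causes latent j; A must encode a DAG."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A.T, eps)

def intervene(eps, A, idx, value):
    """do(z_idx = value): cut the intervened variable's incoming edges
    and clamp its exogenous term, then propagate through the SCM."""
    A_do = A.copy()
    A_do[:, idx] = 0.0      # remove all parents of latent idx
    eps_do = eps.copy()
    eps_do[idx] = value     # clamp its value via the noise term
    return causal_layer(eps_do, A_do)

def debias(A, i, j):
    """De-bias on the generative side: delete the learned edge i -> j
    so generated samples no longer reflect that relationship."""
    A_db = A.copy()
    A_db[i, j] = 0.0
    return A_db
```

With a two-variable chain (`A[0, 1] = 0.8`), `intervene(eps, A, 0, 2.0)` clamps the first latent to 2.0 and propagates the effect downstream, while `debias(A, 0, 1)` severs the relationship entirely — the distinction the paper draws between interventions and explicit generative de-biasing.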
Related papers
- From Identifiable Causal Representations to Controllable Counterfactual Generation: A Survey on Causal Generative Modeling [17.074858228123706]
We focus on fundamental theory, methodology, drawbacks, datasets, and metrics.
We cover applications of causal generative models in fairness, privacy, out-of-distribution generalization, precision medicine, and biological sciences.
arXiv Detail & Related papers (2023-10-17T05:45:32Z)
- Discovering Mixtures of Structural Causal Models from Time Series Data
We propose a general variational inference-based framework called MCD to infer the underlying causal models.
Our approach employs an end-to-end training process that maximizes an evidence-lower bound for the data likelihood.
We demonstrate that our method surpasses state-of-the-art benchmarks in causal discovery tasks.
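MCD's training objective is an evidence lower bound on the data likelihood. As a generic illustration (not MCD's actual parametrization — the Gaussian likelihood, standard-normal prior, and function names here are assumptions), the ELBO decomposes into a reconstruction term minus a KL term:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def elbo(x, x_recon, mu, log_var, sigma_x=1.0):
    """Evidence lower bound with a Gaussian likelihood:
    ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)),
    here with a single-sample reconstruction estimate (constants dropped)."""
    log_lik = -0.5 * np.sum((x - x_recon) ** 2) / sigma_x**2
    return log_lik - gaussian_kl(mu, log_var)
```

Maximizing this quantity end-to-end, as the summary describes, trades reconstruction fidelity against keeping the approximate posterior close to the prior.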
arXiv Detail & Related papers (2023-10-10T05:13:10Z)
- Less is More: Mitigate Spurious Correlations for Open-Domain Dialogue Response Generation Models by Causal Discovery [52.95935278819512]
We conduct the first study on spurious correlations for open-domain response generation models based on a corpus CGDIALOG curated in our work.
Inspired by causal discovery algorithms, we propose a novel model-agnostic method for training and inference of response generation model.
arXiv Detail & Related papers (2023-03-02T06:33:48Z)
- Hypothesis Testing using Causal and Causal Variational Generative Models [0.0]
Causal Gen and Causal Variational Gen can utilize nonparametric structural causal knowledge combined with a deep learning functional approximation.
We show how, using a deliberate (non-random) split of training and testing data, these models can generalize better to similar, but out-of-distribution data points.
We validate our methods on a synthetic pendulum dataset, as well as a trauma surgery ground level fall dataset.
arXiv Detail & Related papers (2022-10-20T13:46:15Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery has proposed to factorize the data generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- Causal Inference with Deep Causal Graphs [0.0]
Parametric causal modelling techniques rarely provide functionality for counterfactual estimation.
Deep Causal Graphs is an abstract specification of the required functionality for a neural network to model causal distributions.
We demonstrate its expressive power in modelling complex interactions and showcase applications to machine learning explainability and fairness.
arXiv Detail & Related papers (2020-06-15T13:03:33Z)
- Bayesian Sparse Factor Analysis with Kernelized Observations [67.60224656603823]
Multi-view problems can be addressed with latent variable models.
High-dimensionality and non-linearity are traditionally handled by kernel methods.
We propose merging both approaches into a single model.
arXiv Detail & Related papers (2020-06-01T14:25:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.