Results on Counterfactual Invariance
- URL: http://arxiv.org/abs/2307.08519v1
- Date: Mon, 17 Jul 2023 14:27:32 GMT
- Title: Results on Counterfactual Invariance
- Authors: Jake Fawkes, Robin J. Evans
- Abstract summary: We show that whilst counterfactual invariance implies conditional independence, conditional independence does not give any implications about the degree or likelihood of satisfying counterfactual invariance.
For discrete causal models counterfactually invariant functions are often constrained to be functions of particular variables, or even constant.
- Score: 3.616948583169635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we provide a theoretical analysis of counterfactual invariance.
We present a variety of existing definitions, study how they relate to each
other and what their graphical implications are. We then turn to the current
major question surrounding counterfactual invariance: how does it relate to
conditional independence? We show that whilst counterfactual invariance implies
conditional independence, conditional independence does not give any
implications about the degree or likelihood of satisfying counterfactual
invariance. Furthermore, we show that for discrete causal models
counterfactually invariant functions are often constrained to be functions of
particular variables, or even constant.
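To make the terminology concrete, the following is a minimal sketch of the standard definition of counterfactual invariance and of the one-directional implication stated above. The notation (f, X, A, the potential outcome X(a), and the conditioning set W) is assumed here for illustration and is not taken verbatim from the paper.

```latex
% Sketch only: f is a function of features X, A is the variable intervened on,
% and X(a) denotes the counterfactual value of X under the intervention A = a.
% Counterfactual invariance: intervening on A cannot change the output of f.
\[
  f\bigl(X(a)\bigr) = f\bigl(X(a')\bigr)
  \quad \text{almost surely, for all values } a, a' \text{ of } A .
\]
% One direction stated in the abstract: counterfactual invariance implies a
% conditional independence (W is an assumed, graph-dependent conditioning set);
% the converse says nothing about the degree or likelihood of counterfactual
% invariance.
\[
  f \text{ counterfactually invariant in } A
  \;\Longrightarrow\;
  f(X) \perp\!\!\!\perp A \mid W .
\]
```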
Related papers
- Defining and Measuring Disentanglement for non-Independent Factors of Variation [9.452311793803803]
We give a definition of disentanglement based on information theory that is valid when the factors of variation are not independent.
We propose a method to measure the degree of disentanglement from the given definition that works when the factors of variation are not independent.
arXiv Detail & Related papers (2024-08-13T16:30:36Z)
- Invariance & Causal Representation Learning: Prospects and Limitations [15.935205681539145]
In causal models, a given mechanism is assumed to be invariant to changes of other mechanisms.
We show that invariance alone is insufficient to identify latent causal variables.
arXiv Detail & Related papers (2023-12-06T16:16:31Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- On the Strong Correlation Between Model Invariance and Generalization [54.812786542023325]
Generalization captures a model's ability to classify unseen data.
Invariance measures consistency of model predictions on transformations of the data.
From a dataset-centric view, we find that a model's accuracy and invariance are linearly correlated across different test sets (see the invariance-score sketch after this entry).
arXiv Detail & Related papers (2022-07-14T17:08:25Z)
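As a reading aid for the entry above, here is a minimal sketch of one common way to quantify prediction invariance: the fraction of inputs whose predicted label is unchanged by a transformation of the data. The function names and the toy model are hypothetical illustrations, not code from the paper.

```python
# Hypothetical sketch: invariance as consistency of predictions under a transformation.
import numpy as np

def prediction_invariance(predict, inputs, transform):
    """Fraction of inputs for which predict(x) equals predict(transform(x))."""
    original = predict(inputs)
    transformed = predict(transform(inputs))
    return float(np.mean(original == transformed))

# Toy usage: a stand-in "classifier" that thresholds the mean pixel value,
# evaluated against a horizontal flip (which leaves the mean unchanged).
rng = np.random.default_rng(0)
images = rng.normal(size=(100, 8, 8))
predict = lambda x: (x.mean(axis=(1, 2)) > 0).astype(int)
hflip = lambda x: x[:, :, ::-1]
print(prediction_invariance(predict, images, hflip))  # 1.0: the mean is flip-invariant
```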
- Diagonal Nonlinear Transformations Preserve Structure in Covariance and Precision Matrices [3.652509571098291]
For a certain class of non-Gaussian distributions, the correspondence between zero entries of the covariance and precision matrices and (conditional) independence still holds: exactly for the covariance and approximately for the precision.
The distributions, sometimes referred to as "nonparanormal", are given by diagonal (coordinate-wise) transformations of multivariate normal random variables; a short numerical sketch follows this entry.
arXiv Detail & Related papers (2021-07-08T22:31:48Z)
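The following is a minimal numerical sketch of the claim in the entry above, under the assumption that the "correspondence" concerns zero entries of the covariance: for a multivariate normal, a zero covariance entry means the two coordinates are independent, so applying a separate nonlinear transformation to each coordinate (a diagonal transformation) keeps that entry at zero. The particular covariance matrix and transformations below are illustrative only.

```python
# Illustrative sketch, not code from the paper: zero covariance entries survive
# diagonal (coordinate-wise) nonlinear transformations of a multivariate normal.
import numpy as np

rng = np.random.default_rng(0)

# Covariance with an exact zero between coordinates 0 and 2.
cov = np.array([[1.0, 0.5, 0.0],
                [0.5, 1.0, 0.3],
                [0.0, 0.3, 1.0]])
z = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=1_000_000)

# Coordinate-wise nonlinear transforms give a "nonparanormal" random vector.
x = np.column_stack([np.tanh(z[:, 0]), np.exp(z[:, 1]), z[:, 2] ** 3])

emp_cov = np.cov(x, rowvar=False)
print(np.round(emp_cov, 3))  # the (0, 2) entry stays near zero, up to sampling noise
```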
- Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests [87.60900567941428]
A 'spurious correlation' is the dependence of a model on some aspect of the input data that an analyst thinks shouldn't matter.
In machine learning, these have a know-it-when-you-see-it character.
We study stress testing using the tools of causal inference.
arXiv Detail & Related papers (2021-05-31T14:39:38Z)
- Transitional Conditional Independence [0.0]
We introduce transition probability spaces and transitional random variables.
These constructions generalize and strengthen previous notions of (conditional) random variables and non-stochastic variables.
arXiv Detail & Related papers (2021-04-23T11:52:15Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
- Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlations during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE-based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.