Invariance & Causal Representation Learning: Prospects and Limitations
- URL: http://arxiv.org/abs/2312.03580v1
- Date: Wed, 6 Dec 2023 16:16:31 GMT
- Title: Invariance & Causal Representation Learning: Prospects and Limitations
- Authors: Simon Bing, Jonas Wahl, Urmi Ninad, Jakob Runge
- Abstract summary: In causal models, a given mechanism is assumed to be invariant to changes of other mechanisms.
We show that invariance alone is insufficient to identify latent causal variables.
- Score: 15.935205681539145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In causal models, a given mechanism is assumed to be invariant to changes of
other mechanisms. While this principle has been utilized for inference in
settings where the causal variables are observed, theoretical insights when the
variables of interest are latent are largely missing. We assay the connection
between invariance and causal representation learning by establishing
impossibility results which show that invariance alone is insufficient to
identify latent causal variables. Together with practical considerations, we
use these theoretical findings to highlight the need for additional constraints
in order to identify representations by exploiting invariance.
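As a loose illustration of the invariance principle the abstract appeals to (not the paper's construction; the variable names and data-generating process below are assumptions made for the example), the following sketch checks whether regression residuals stay invariant across environments when the causal variables are observed. The paper's impossibility results concern the harder setting where the causal variables are latent and must themselves be learned.

```python
# Minimal sketch of the invariance principle with OBSERVED variables
# (illustrative assumptions, not the paper's method): the mechanism
# Y := 2*X1 + noise is invariant across environments, so residuals of a
# regression of Y on its true parent X1 have the same distribution in both
# environments, while a regression on the non-parent X2 does not.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def make_env(shift, n=2000):
    # Illustrative SCM: X1 -> Y -> X2; only the Y -> X2 mechanism changes.
    x1 = rng.normal(size=n)
    y = 2.0 * x1 + rng.normal(size=n)
    x2 = shift * y + rng.normal(size=n)
    return np.column_stack([x1, x2]), y

def residuals(X, y, cols):
    model = LinearRegression().fit(X[:, cols], y)
    return y - model.predict(X[:, cols])

(Xa, ya), (Xb, yb) = make_env(shift=1.0), make_env(shift=3.0)

for name, cols in [("parent set {X1}", [0]), ("non-parent set {X2}", [1])]:
    p = ks_2samp(residuals(Xa, ya, cols), residuals(Xb, yb, cols)).pvalue
    print(f"{name}: residual-invariance p-value = {p:.3g}")
# Only the true parent set passes the invariance check. The paper's point is
# that when the causal variables are latent, invariance of this kind alone
# is no longer enough to identify them.
```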
Related papers
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse
Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors and the causal graph that relates them.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z) - Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable guarantees that these latent causal models can be identified, a property known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z) - Nonlinearity, Feedback and Uniform Consistency in Causal Structural
Learning [0.8158530638728501]
Causal discovery aims to develop automated search methods for learning causal structures from observational data.
This thesis focuses on two questions in causal discovery: (i) providing an alternative definition of k-Triangle Faithfulness that is weaker than strong faithfulness when applied to the Gaussian family of distributions, and (ii) showing that uniform consistency in causal structure learning can still be achieved under the assumption that this modified version of Strong Faithfulness holds.
arXiv Detail & Related papers (2023-08-15T01:23:42Z) - On the Use of Generative Models in Observational Causal Analysis [0.0]
The use of a hypothetical generative model has been suggested for causal analysis of observational data.
Such a model describes a single observed distribution and cannot describe a chain of effects of interventions that deviate from the observed distribution.
arXiv Detail & Related papers (2023-06-07T21:29:49Z) - Nonparametric Identifiability of Causal Representations from Unknown
Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Restricted Hidden Cardinality Constraints in Causal Models [0.0]
Causal models with unobserved variables impose nontrivial constraints on the distributions over the observed variables.
We consider causal models with a promise that unobserved variables have known cardinalities.
arXiv Detail & Related papers (2021-09-13T00:52:08Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New
Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods that formalize the goal of recovering interpretable latent factors from observed data and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse (a toy sketch of this sparsity idea appears after this list).
arXiv Detail & Related papers (2021-07-21T14:22:14Z) - Variational Causal Networks: Approximate Bayesian Inference over Causal
Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z) - Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlations during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z)
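Several of the papers above rely on mechanism-sparsity regularization. As a loose, assumption-laden illustration (not a reproduction of any listed method): with a sparse ground-truth temporal mechanism, an L1 penalty on a learned linear transition matrix zeroes out spurious edges that an unregularized fit leaves small but nonzero.

```python
# Toy sketch of mechanism-sparsity regularization (illustrative assumptions):
# latent factors follow a sparse linear temporal mechanism
# Z_t = A Z_{t-1} + noise. An L1-penalized fit of the transition matrix sets
# spurious entries exactly to zero; an unregularized fit does not.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
d, T = 5, 5000

# Sparse ground-truth transition matrix: 5 edges out of 25 possible.
A = np.zeros((d, d))
A[0, 0], A[1, 0], A[2, 2], A[3, 4], A[4, 4] = 0.8, 0.5, 0.7, 0.6, 0.9

Z = np.zeros((T, d))
for t in range(1, T):
    Z[t] = Z[t - 1] @ A.T + rng.normal(size=d)

Z_past, Z_next = Z[:-1], Z[1:]
dense_fit = LinearRegression().fit(Z_past, Z_next).coef_
sparse_fit = Lasso(alpha=0.05).fit(Z_past, Z_next).coef_

print("exact zeros, unregularized :", int(np.sum(dense_fit == 0.0)))   # typically 0
print("exact zeros, L1-regularized:", int(np.sum(sparse_fit == 0.0)))  # close to 20
```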
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.