General Identifiability and Achievability for Causal Representation
Learning
- URL: http://arxiv.org/abs/2310.15450v2
- Date: Wed, 14 Feb 2024 16:37:05 GMT
- Title: General Identifiability and Achievability for Causal Representation
Learning
- Authors: Burak Varıcı, Emre Acartürk, Karthikeyan Shanmugam, Ali Tajer
- Abstract summary: The paper establishes identifiability and achievability results using two hard uncoupled interventions per node in the latent causal graph.
For identifiability, the paper establishes that perfect recovery of the latent causal model and variables is guaranteed under uncoupled interventions.
The analysis additionally recovers the identifiability result for two hard coupled interventions, that is, when metadata identifying which pair of environments intervenes on the same node is known.
- Score: 33.80247458590611
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on causal representation learning (CRL) under a general
nonparametric latent causal model and a general transformation model that maps
the latent data to the observational data. It establishes identifiability and
achievability results using two hard uncoupled interventions per node in the
latent causal graph. Notably, one does not know which pair of intervention
environments has the same node intervened (hence, uncoupled). For
identifiability, the paper establishes that perfect recovery of the latent
causal model and variables is guaranteed under uncoupled interventions. For
achievability, an algorithm is designed that uses observational and
interventional data and recovers the latent causal model and variables with
provable guarantees. This algorithm leverages score variations across different
environments to estimate the inverse of the transformer and, subsequently, the
latent variables. The analysis additionally recovers the identifiability result
for two hard coupled interventions, that is, when metadata identifying which
pair of environments intervenes on the same node is available. This paper also
shows that when observational data is available, additional faithfulness
assumptions that are adopted by the existing literature are unnecessary.
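The abstract's core mechanism, using score variations across environments to estimate the inverse of the transformation, rests on the fact that the score difference between two environments cancels the Jacobian log-determinant term contributed by the shared transformation. Below is a minimal numerical sketch, not the paper's algorithm: a hypothetical two-node linear-Gaussian example in which the observation-space score difference is dense, but pulled back through the true mixing matrix it is supported only on the intervened (root) latent node. All variable names and the specific matrices are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): toy linear-Gaussian illustration of the
# score-difference identity behind score-based CRL.
# Latent SCM: z1 -> z2 (Gaussian); observation: x = A @ z with A invertible.
# For x = A z, the observation score is s_X(x) = A^{-T} s_Z(A^{-1} x), so the score
# *difference* between an interventional and the observational environment, pulled back
# through A^T, equals the latent score difference, which for a hard intervention on the
# root node z1 is nonzero only in coordinate 1.
import numpy as np

rng = np.random.default_rng(0)

# Latent linear-Gaussian SCM: z1 ~ N(0,1), z2 = 0.8*z1 + N(0,1)
B = np.array([[0.0, 0.0],
              [0.8, 0.0]])                      # edge z1 -> z2
I = np.eye(2)
M = np.linalg.inv(I - B)
Sigma_z_obs = M @ M.T                           # observational latent covariance

# Hard intervention on the root node z1: z1 ~ N(0, 4); mechanism of z2 unchanged.
Sigma_z_int = M @ np.diag([4.0, 1.0]) @ M.T

# Unknown mixing ("transformation"): x = A z, A an arbitrary invertible matrix.
A = np.array([[1.0, 0.5],
              [-0.3, 1.2]])
Sigma_x_obs = A @ Sigma_z_obs @ A.T
Sigma_x_int = A @ Sigma_z_int @ A.T

def score(x, Sigma):
    """Gaussian score in observation space: s(x) = -Sigma^{-1} x (zero-mean case)."""
    return -np.linalg.solve(Sigma, x)

x = A @ rng.normal(size=2)                      # an arbitrary observed point
d_x = score(x, Sigma_x_int) - score(x, Sigma_x_obs)   # dense in observation coordinates
d_z = A.T @ d_x                                 # pull back through the true transformation

print("score difference in x-coordinates:", np.round(d_x, 4))  # both entries nonzero
print("pulled back to z-coordinates:     ", np.round(d_z, 4))  # only entry 1 nonzero
```

In the paper's general nonparametric setting the same cancellation holds with the transformation's Jacobian in place of A, and two uncoupled hard interventions per node are what allow the intervened coordinate to be resolved; the sketch only illustrates the linear-Gaussian special case of the identity being exploited.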
Related papers
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Score-based Causal Representation Learning: Linear and General Transformations [31.786444957887472]
The paper addresses both the identifiability and achievability aspects.
It designs a score-based class of algorithms that ensures both identifiability and achievability.
Results are empirically validated via experiments on structured synthetic data and image data.
arXiv Detail & Related papers (2024-02-01T18:40:03Z)
- Learning Causal Representations from General Environments: Identifiability and Intrinsic Ambiguity [27.630223763160515]
We provide the first identifiability results based on data that stem from general environments.
We show that for linear causal models, while the causal graph can be fully recovered, the latent variables are only identified up to the surrounded-node ambiguity (SNA).
We also propose an algorithm, LiNGCReL, which provably recovers the ground-truth model up to SNA.
arXiv Detail & Related papers (2023-11-21T01:09:11Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Identifiability Guarantees for Causal Disentanglement from Soft Interventions [26.435199501882806]
Causal disentanglement aims to uncover a representation of data using latent variables that are interrelated through a causal model.
In this paper, we focus on the scenario where unpaired observational and interventional data are available, with each intervention changing the mechanism of a latent variable.
When the causal variables are fully observed, statistically consistent algorithms have been developed to identify the causal model under faithfulness assumptions.
arXiv Detail & Related papers (2023-07-12T15:39:39Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Score-based Causal Representation Learning with Interventions [54.735484409244386]
This paper studies the causal representation learning problem when latent causal variables are observed indirectly.
The objectives are: (i) recovering the unknown linear transformation (up to scaling) and (ii) determining the directed acyclic graph (DAG) underlying the latent variables.
arXiv Detail & Related papers (2023-01-19T18:39:48Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- Semiparametric Inference For Causal Effects In Graphical Models With Hidden Variables [13.299431908881425]
Identification theory for causal effects in causal models associated with hidden variable directed acyclic graphs is well studied.
However, the corresponding algorithms are underused due to the complexity of estimating the identifying functionals they output.
We bridge the gap between identification and estimation of population-level causal effects involving a single treatment and a single outcome.
arXiv Detail & Related papers (2020-03-27T22:29:04Z)