Identifiable Latent Polynomial Causal Models Through the Lens of Change
- URL: http://arxiv.org/abs/2310.15580v1
- Date: Tue, 24 Oct 2023 07:46:10 GMT
- Title: Identifiable Latent Polynomial Causal Models Through the Lens of Change
- Authors: Yuhang Liu, Zhen Zhang, Dong Gong, Mingming Gong, Biwei Huang, Anton
van den Hengel, Kun Zhang, Javen Qinfeng Shi
- Abstract summary: Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
- Score: 85.67870425656368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal representation learning aims to unveil latent high-level causal
representations from observed low-level data. One of its primary tasks is to
provide reliable assurance of identifying these latent causal models, known as
identifiability. A recent breakthrough explores identifiability by leveraging
the change of causal influences among latent causal variables across multiple
environments \citep{liu2022identifying}. However, this progress rests on the
assumption that the causal relationships among latent causal variables adhere
strictly to linear Gaussian models. In this paper, we extend the scope of
latent causal models to involve nonlinear causal relationships, represented by
polynomial models, and general noise distributions conforming to the
exponential family. Additionally, we investigate the necessity of imposing
changes on all causal parameters and present partial identifiability results
when part of them remains unchanged. Further, we propose a novel empirical
estimation method, grounded in our theoretical finding, that enables learning
consistent latent causal representations. Our experimental results, obtained
from both synthetic and real-world data, validate our theoretical contributions
concerning identifiability and consistency.
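The abstract describes latent causal variables related by polynomial mechanisms with exponential-family noise, where changes in causal parameters across environments drive identifiability. Below is a minimal toy sketch (not the authors' code; all function names and parameter values are illustrative assumptions) of sampling from such a model: a single polynomial edge z1 → z2 whose coefficients change per environment, with Gaussian noise as one member of the exponential family, followed by a fixed nonlinear mixing to observations.

```python
# Toy sketch of a latent polynomial causal model across environments.
# Gaussian noise is used here as one member of the exponential family.
import numpy as np

rng = np.random.default_rng(0)

def sample_latents(coeffs, n=1000):
    """z1 -> z2 via a polynomial mechanism: z2 = a*z1 + b*z1**2 + noise."""
    a, b = coeffs
    z1 = rng.normal(size=n)                              # exogenous latent
    z2 = a * z1 + b * z1**2 + rng.normal(scale=0.5, size=n)
    return np.stack([z1, z2], axis=1)

# Each environment imposes a change on the causal parameters (a, b);
# this cross-environment variability is what the identifiability
# argument leverages.
environments = {"env0": (1.0, 0.0), "env1": (0.5, 0.8), "env2": (-1.2, 0.3)}
latents = {e: sample_latents(c) for e, c in environments.items()}

# A fixed nonlinear mixing maps latents to low-level observations x = g(z).
def mix(z):
    return np.tanh(z @ np.array([[1.0, 0.3], [0.2, 1.0]]))

observations = {e: mix(z) for e, z in latents.items()}
```

The estimation problem the paper addresses is the inverse of this sketch: recovering the latents and their polynomial relations from `observations` alone.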
Related papers
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Nonlinearity, Feedback and Uniform Consistency in Causal Structural Learning [0.8158530638728501]
Causal Discovery aims to find automated search methods for learning causal structures from observational data.
This thesis focuses on two questions in causal discovery: (i) providing an alternative definition of k-Triangle Faithfulness that is weaker than strong faithfulness when applied to the Gaussian family of distributions, and (ii) establishing what follows under the assumption that this modified version of Strong Faithfulness holds.
arXiv Detail & Related papers (2023-08-15T01:23:42Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Towards Causal Representation Learning and Deconfounding from Indefinite Data [17.793702165499298]
Non-statistical data (e.g., images and text) conflicts significantly with traditional causal data in both its properties and the methods used to analyze it.
We redefine causal data from two novel perspectives and then propose three data paradigms.
We implement the above designs as a dynamic variational inference model, tailored to learn causal representation from indefinite data.
arXiv Detail & Related papers (2023-05-04T08:20:37Z)
- Weight-variant Latent Causal Models [79.79711624326299]
Causal representation learning exposes latent high-level causal variables behind low-level observations.
In this work we focus on identifying latent causal variables.
We show that the transitivity severely hinders the identifiability of latent causal variables.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal variables.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Causal Discovery in Linear Structural Causal Models with Deterministic Relations [27.06618125828978]
We focus on the task of causal discovery from observational data.
We derive a set of necessary and sufficient conditions for unique identifiability of the causal structure.
arXiv Detail & Related papers (2021-10-30T21:32:42Z)
- Typing assumptions improve identification in causal discovery [123.06886784834471]
Causal discovery from observational data is a challenging task to which an exact solution cannot always be identified.
We propose a new set of assumptions that constrain possible causal relationships based on the nature of the variables.
arXiv Detail & Related papers (2021-07-22T14:23:08Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
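The summary above describes a Causal Layer that transforms independent factors into causal endogenous ones. A minimal numpy sketch of that idea (not the official CausalVAE implementation; the adjacency values below are illustrative assumptions) uses a linear SCM z = Aᵀz + ε over a DAG adjacency matrix A, so independent exogenous factors ε map to endogenous factors z = (I − Aᵀ)⁻¹ε.

```python
# Sketch of a linear "causal layer": independent exogenous factors eps
# become causally related endogenous factors z via z = A^T z + eps,
# i.e. z = (I - A^T)^{-1} eps for a DAG adjacency matrix A.
import numpy as np

def causal_layer(eps, A):
    """Map independent factors (rows of eps) to endogenous ones under A."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A.T, eps.T).T

# Toy DAG on 3 factors: 0 -> 1 -> 2, with edge weights 0.8 and 0.5.
A = np.array([[0.0, 0.8, 0.0],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])

eps = np.random.default_rng(1).normal(size=(4, 3))  # 4 samples, 3 factors
z = causal_layer(eps, A)
```

Because A is acyclic, (I − Aᵀ) is invertible, and a root factor (one with no parents, like factor 0 here) simply passes its exogenous noise through unchanged.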
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
- Causal discovery of linear non-Gaussian acyclic models in the presence of latent confounders [6.1221613913018675]
This paper proposes a causal functional model-based method called repetitive causal discovery (RCD) to discover the causal structure of observed variables affected by latent confounders.
RCD repeatedly infers the causal directions between small numbers of observed variables and determines whether those relationships are affected by latent confounders.
arXiv Detail & Related papers (2020-01-13T12:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.