Trek-Based Parameter Identification for Linear Causal Models With Arbitrarily Structured Latent Variables
- URL: http://arxiv.org/abs/2507.18170v1
- Date: Thu, 24 Jul 2025 08:10:44 GMT
- Title: Trek-Based Parameter Identification for Linear Causal Models With Arbitrarily Structured Latent Variables
- Authors: Nils Sturma, Mathias Drton
- Abstract summary: We develop a criterion to certify whether causal effects are identifiable in linear structural equation models with latent variables. Our novel latent-subgraph criterion is a purely graphical condition that is sufficient for identifiability of causal effects.
- Score: 1.4425878137951234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a criterion to certify whether causal effects are identifiable in linear structural equation models with latent variables. Linear structural equation models correspond to directed graphs whose nodes represent the random variables of interest and whose edges are weighted with linear coefficients that correspond to direct causal effects. In contrast to previous identification methods, we do not restrict ourselves to settings where the latent variables constitute independent latent factors (i.e., to source nodes in the graphical representation of the model). Our novel latent-subgraph criterion is a purely graphical condition that is sufficient for identifiability of causal effects by rational formulas in the covariance matrix. To check the latent-subgraph criterion, we provide a sound and complete algorithm that operates by solving an integer linear program. While it targets effects involving observed variables, our new criterion is also useful for identifying effects between latent variables, as it allows one to transform the given model into a simpler measurement model for which other existing tools become applicable.
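The covariance parameterization underlying this setup can be made concrete with a small numerical sketch. The sketch below is not from the paper: the four-node graph, the edge-weight matrix Λ, and the error covariance Ω are hypothetical. It builds the model-implied covariance Σ = (I - Λ)^{-T} Ω (I - Λ)^{-1} of a linear SEM and extracts the block over the observed nodes, which is the object on which rational identification formulas (and, via the trek rule, trek-based criteria) operate.

```python
import numpy as np

# Hypothetical 4-node graph: nodes 0 and 1 are latent, nodes 2 and 3 are observed.
# lam[i, j] holds the linear coefficient of the edge i -> j (zero if there is no edge).
lam = np.zeros((4, 4))
lam[0, 2] = 1.3   # latent 0 -> observed 2
lam[0, 3] = -0.7  # latent 0 -> observed 3
lam[1, 3] = 0.5   # latent 1 -> observed 3
lam[2, 3] = 0.9   # observed 2 -> observed 3 (a direct effect one might wish to identify)

# Error covariance; diagonal here, i.e. independent noise terms.
omega = np.diag([1.0, 1.0, 0.5, 0.8])

# Covariance implied by the linear SEM X = Lambda^T X + eps:
#   Sigma = (I - Lambda)^{-T} Omega (I - Lambda)^{-1}
inv = np.linalg.inv(np.eye(4) - lam)
sigma = inv.T @ omega @ inv

# Only the block over the observed nodes is available for identification.
observed = [2, 3]
sigma_obs = sigma[np.ix_(observed, observed)]
print(sigma_obs)
```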
Related papers
- Unfolding Tensors to Identify the Graph in Discrete Latent Bipartite Graphical Models [1.7132914341329848]
We use a tensor unfolding technique to prove a new identifiability result for discrete bipartite graphical models. Our result has useful implications for these models' trustworthy applications in scientific disciplines and interpretable machine learning.
arXiv Detail & Related papers (2025-01-18T23:08:25Z)
- On the Parameter Identifiability of Partially Observed Linear Causal Models [23.08796869216895]
We investigate whether the edge coefficients can be recovered given the causal structure and partially observed data. We identify three types of indeterminacy for the parameters in partially observed linear causal models. We propose a novel likelihood-based parameter estimation method that addresses the variance indeterminacy of latent variables in a specific way.
arXiv Detail & Related papers (2024-07-24T03:43:55Z)
- Parameter identification in linear non-Gaussian causal models under general confounding [8.273471398838533]
We study identification of the linear coefficients when such models contain latent variables.
Our main result is a graphical criterion that is necessary and sufficient for deciding generic identifiability of direct causal effects.
We report on estimations based on the identification result, explore a generalization to models with feedback loops, and provide new results on the identifiability of the causal graph.
arXiv Detail & Related papers (2024-05-31T14:39:14Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement that we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors and the sparse causal graph that relates them.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data. One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Score-based Causal Representation Learning with Interventions [54.735484409244386]
This paper studies the causal representation learning problem when latent causal variables are observed indirectly.
The objectives are: (i) recovering the unknown linear transformation (up to scaling) and (ii) determining the directed acyclic graph (DAG) underlying the latent variables.
arXiv Detail & Related papers (2023-01-19T18:39:48Z)
- Linear Causal Disentanglement via Interventions [8.444187296409051]
Causal disentanglement seeks a representation of data involving latent variables that relate to one another via a causal model.
We study observed variables that are a linear transformation of a linear latent causal model.
arXiv Detail & Related papers (2022-11-29T18:43:42Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Learning latent causal graphs via mixture oracles [40.71943453524747]
We study the problem of reconstructing a causal graphical model from data in the presence of latent variables.
The main problem of interest is recovering the causal structure over the latent variables while allowing for general, potentially nonlinear dependence between the variables.
arXiv Detail & Related papers (2021-06-29T16:53:34Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO); a minimal Monte Carlo sketch of such an estimator appears after this list.
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum-Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
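As background for the sampling-based ELBO estimation mentioned in the last entry, here is a minimal Monte Carlo sketch. It is not the SPN-based construction from that paper; the toy unnormalized target and the fully factorized Bernoulli variational distribution below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: binary x in {0,1}^d, an unnormalized target log-density
# log p~(x), and a fully factorized Bernoulli variational distribution q(x).
d = 5
theta = rng.normal(size=d)   # parameters of the toy target
q_probs = np.full(d, 0.5)    # Bernoulli means of q

def log_p_tilde(x):
    # Unnormalized log-density of the target (a simple linear field).
    return x @ theta

def log_q(x):
    return np.sum(x * np.log(q_probs) + (1 - x) * np.log(1 - q_probs))

# Monte Carlo estimate of the ELBO: E_q[log p~(x) - log q(x)].
samples = (rng.random((10_000, d)) < q_probs).astype(float)
elbo_hat = np.mean([log_p_tilde(x) - log_q(x) for x in samples])
print(f"Monte Carlo ELBO estimate: {elbo_hat:.3f}")
```

As the entry notes, choosing q to be a selective SPN makes this expectation computable exactly, rather than estimated by sampling, when the target's log-density is a polynomial.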