Distributional Equivalence in Linear Non-Gaussian Latent-Variable Cyclic Causal Models: Characterization and Learning
- URL: http://arxiv.org/abs/2603.04780v1
- Date: Thu, 05 Mar 2026 03:57:14 GMT
- Title: Distributional Equivalence in Linear Non-Gaussian Latent-Variable Cyclic Causal Models: Characterization and Learning
- Authors: Haoyue Dai, Immanuel Albrecht, Peter Spirtes, Kun Zhang
- Abstract summary: We argue that a core obstacle to a general, structural-assumption-free approach is the lack of an equivalence characterization. Key to our approach is a new tool, edge rank constraints, which fills a missing piece in the toolbox for latent-variable causal discovery.
- Score: 13.891913455492697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal discovery with latent variables is a fundamental task. Yet most existing methods rely on strong structural assumptions, such as enforcing specific indicator patterns for latents or restricting how they can interact with others. We argue that a core obstacle to a general, structural-assumption-free approach is the lack of an equivalence characterization: without knowing what can be identified, one generally cannot design methods for how to identify it. In this work, we aim to close this gap for linear non-Gaussian models. We establish the graphical criterion for when two graphs with arbitrary latent structure and cycles are distributionally equivalent, that is, they induce the same observed distribution set. Key to our approach is a new tool, edge rank constraints, which fills a missing piece in the toolbox for latent-variable causal discovery in even broader settings. We further provide a procedure to traverse the whole equivalence class and develop an algorithm to recover models from data up to such equivalence. To our knowledge, this is the first equivalence characterization with latent variables in any parametric setting without structural assumptions, and hence the first structural-assumption-free discovery method. Code and an interactive demo are available at https://equiv.cc.
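The paper's edge rank constraints are its own contribution, but the underlying idea that rank constraints on observed covariances carry latent-structure information in linear models can be illustrated with a minimal sketch. The graph, variable names, and coefficients below are hypothetical, chosen only to show that a single latent common cause forces a cross-covariance block of the observed variables to have rank one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical linear non-Gaussian SEM: one latent L drives four
# observed variables x1..x4 (structure and weights are illustrative,
# not taken from the paper).
L = rng.laplace(size=n)                 # non-Gaussian latent cause
x1 = 1.0 * L + rng.laplace(size=n)
x2 = 0.8 * L + rng.laplace(size=n)
x3 = -0.5 * L + rng.laplace(size=n)
x4 = 1.2 * L + rng.laplace(size=n)

X = np.stack([x1, x2, x3, x4])
Sigma = np.cov(X)

# All dependence between {x1, x2} and {x3, x4} flows through the
# single latent L, so this 2x2 cross-covariance block has rank 1
# in the population: its second singular value vanishes.
block = Sigma[np.ix_([0, 1], [2, 3])]
s = np.linalg.svd(block, compute_uv=False)
print(s)  # smaller singular value is close to 0
```

Probing such rank constraints over different subsets of observed variables is the general flavor of the toolbox the abstract refers to; the paper's edge rank constraints refine this to characterize equivalence without structural assumptions.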
Related papers
- Differentiable Structure Learning and Causal Discovery for General Binary Data [22.58355875817396]
We propose a differentiable structure learning framework that is capable of capturing arbitrary dependencies among discrete variables. We formulate the learning problem as a single differentiable optimization task in the most general form. Empirical results demonstrate that our approach effectively captures complex relationships in discrete data.
arXiv Detail & Related papers (2025-09-25T22:26:55Z)
- Trek-Based Parameter Identification for Linear Causal Models With Arbitrarily Structured Latent Variables [1.4425878137951234]
We develop a criterion to certify whether causal effects are identifiable in linear structural equation models with latent variables. Our novel latent-subgraph criterion is a purely graphical condition that is sufficient for identifiability of causal effects.
arXiv Detail & Related papers (2025-07-24T08:10:44Z)
- Learning Discrete Latent Variable Structures with Tensor Rank Conditions [30.292492090200984]
Unobserved discrete data are ubiquitous in many scientific disciplines, and how to learn the causal structure of these latent variables is crucial for uncovering data patterns.
Most studies focus on the linear latent variable model or impose strict constraints on latent structures, which fail to address cases in discrete data involving non-linear relationships or complex latent structures.
We explore a tensor rank condition on contingency tables for an observed variable set $\mathbf{X}_p$, showing that the rank is determined by the minimum support of a specific conditional set.
One can locate the latent variable by probing the rank on different observed variables.
arXiv Detail & Related papers (2024-06-11T07:25:17Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Causal Discovery under Latent Class Confounding [2.1749194587826026]
We show that globally confounded causal structures can still be identifiable with arbitrary structural equations and noise functions.
arXiv Detail & Related papers (2023-11-13T16:35:34Z)
- Identification of Nonlinear Latent Hierarchical Models [38.925635086396596]
We develop an identification criterion in the form of novel identifiability guarantees for an elementary latent variable model.
To the best of our knowledge, our work is the first to establish identifiability guarantees for both causal structures and latent variables in nonlinear latent hierarchical models.
arXiv Detail & Related papers (2023-06-13T17:19:37Z)
- Learning nonparametric latent causal graphs with unknown interventions [18.6470340274888]
We establish conditions under which latent causal graphs are nonparametrically identifiable.
We do not assume the number of hidden variables is known, and we show that at most one unknown intervention per hidden variable is needed.
arXiv Detail & Related papers (2023-06-05T14:06:35Z)
- Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize the goal of recovering independent latent variables and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.