Mechanistic Independence: A Principle for Identifiable Disentangled Representations
- URL: http://arxiv.org/abs/2509.22196v1
- Date: Fri, 26 Sep 2025 10:58:03 GMT
- Title: Mechanistic Independence: A Principle for Identifiable Disentangled Representations
- Authors: Stefan Matthes, Zhiwei Han, Hao Shen
- Abstract summary: Disentangled representations seek to recover latent factors of variation underlying observed data. We introduce a unified framework in which disentanglement is achieved through mechanistic independence.
- Score: 7.550362088105815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Disentangled representations seek to recover latent factors of variation underlying observed data, yet their identifiability is still not fully understood. We introduce a unified framework in which disentanglement is achieved through mechanistic independence, which characterizes latent factors by how they act on observed variables rather than by their latent distribution. This perspective is invariant to changes of the latent density, even when such changes induce statistical dependencies among factors. Within this framework, we propose several related independence criteria -- ranging from support-based and sparsity-based to higher-order conditions -- and show that each yields identifiability of latent subspaces, even under nonlinear, non-invertible mixing. We further establish a hierarchy among these criteria and provide a graph-theoretic characterization of latent subspaces as connected components. Together, these results clarify the conditions under which disentangled representations can be identified without relying on statistical assumptions.
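To make the graph-theoretic characterization concrete, here is a minimal Python sketch of reading latent subspaces off a support graph as connected components. The Jacobian-thresholding construction is an illustrative assumption on our part; the paper states its criteria abstractly.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def latent_subspaces(J, tol=1e-8):
    """Group latent dimensions into subspaces via a support graph.

    J: (n_obs, n_latent) array, e.g. |Jacobian| of the mixing function
       averaged over data. Latents i and j are linked whenever some
       observed variable depends on both of them.
    """
    S = np.abs(J) > tol                                  # boolean support
    adj = csr_matrix(S.T.astype(int) @ S.astype(int))    # latent-latent links
    n_blocks, labels = connected_components(adj, directed=False)
    return n_blocks, labels        # labels[i] = subspace of latent dimension i

# Example: observations 0-1 touch latents {0, 1}; observation 2 touches {2}
J = np.array([[1.0, 0.5, 0.0],
              [0.3, 0.0, 0.0],
              [0.0, 0.0, 2.0]])
print(latent_subspaces(J))   # -> (2, array([0, 0, 1]))
```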
Related papers
- Sample Complexity of Causal Identification with Temporal Heterogeneity [6.5822033630228916]
We show that temporal structure can effectively substitute for missing environmental diversity. This work shifts the focus from whether causal structure is identifiable to whether it is statistically recoverable in practice.
arXiv Detail & Related papers (2026-02-06T17:44:00Z)
- Identification of Causal Direction under an Arbitrary Number of Latent Confounders [54.76982125821112]
In real-world scenarios, observed variables may be affected by multiple latent variables simultaneously. We make use of the joint higher-order cumulant matrix of the observed variables, constructed in a specific way. We show that, surprisingly, causal asymmetry between two observed variables can be seen directly from the rank-deficiency properties of such higher-order cumulant matrices.
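The rank idea can be demonstrated in a toy, confounder-free linear case. The 2x2 third-order cumulant matrix below is a simplified construction of ours for illustration; the paper's matrices are more general and handle latent confounders.

```python
import numpy as np

def third_cumulant_matrix(x, y):
    """2x2 matrix of third-order cross-cumulants of zero-mean x, y."""
    x = x - x.mean()
    y = y - y.mean()
    c = lambda a, b, d: np.mean(a * b * d)  # third cumulant = third central moment
    return np.array([[c(x, x, x), c(x, x, y)],
                     [c(x, x, y), c(x, y, y)]])

def direction_score(x, y):
    """Smallest singular value: near zero => rank-deficient => x is the plausible cause."""
    return np.linalg.svd(third_cumulant_matrix(x, y), compute_uv=False)[-1]

rng = np.random.default_rng(0)
x = rng.exponential(size=50_000) - 1.0               # non-Gaussian cause
y = 0.7 * x + (rng.exponential(size=50_000) - 1.0)   # effect with independent noise
print(direction_score(x, y), direction_score(y, x))  # first is ~0, second is not
```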
arXiv Detail & Related papers (2025-10-26T15:10:00Z)
- Meta-Dependence in Conditional Independence Testing [11.302018782958205]
We study a "meta-dependence" between conditional independence properties using a geometric intuition. We provide a simple-to-compute measure of this meta-dependence using information projections and consolidate our findings empirically on both synthetic and real-world data.
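As background for the information-projection machinery, here is a small Python sketch for discrete variables: the I-projection of a joint distribution onto the conditional-independence model has a closed form, and the KL divergence to it equals the conditional mutual information. The paper's meta-dependence measure builds on such projections; this sketch only shows the basic computation.

```python
import numpy as np

def ci_projection(p):
    """I-projection of p(x, y, z) onto {q : X independent of Y given Z}.

    For this model the projection has the closed form q = p(z) p(x|z) p(y|z).
    p: array of shape (nx, ny, nz) summing to 1.
    """
    pz = p.sum(axis=(0, 1))    # p(z)
    pxz = p.sum(axis=1)        # p(x, z)
    pyz = p.sum(axis=0)        # p(y, z)
    return pxz[:, None, :] * pyz[None, :, :] / pz[None, None, :]

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# KL(p || projection) equals the conditional mutual information I(X; Y | Z):
rng = np.random.default_rng(0)
p = rng.random((2, 3, 2))
p /= p.sum()
print(kl(p, ci_projection(p)))   # a "distance to conditional independence"
```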
arXiv Detail & Related papers (2025-04-17T02:41:22Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors and the sparse causal graphical model that relates them.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
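A toy linear stand-in for the sparsity regularizer, sketched in Python: the paper's setting is nonlinear and learns the latents jointly with the graph, but the role of the sparsity penalty is the same. The penalty weight and threshold below are arbitrary illustrative choices.

```python
import torch

torch.manual_seed(0)
d, T = 5, 2000

# Ground-truth sparse mechanism: each latent depends on few predecessors
A_true = torch.zeros(d, d)
A_true[0, 0], A_true[1, 0], A_true[2, 2] = 0.9, 0.5, 0.8

z = torch.zeros(T, d)
for t in range(1, T):
    z[t] = z[t - 1] @ A_true.T + 0.1 * torch.randn(d)

# Fit the transition while penalizing graph density (L1 as a sparsity surrogate)
A = torch.zeros(d, d, requires_grad=True)
opt = torch.optim.Adam([A], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    pred = z[:-1] @ A.T
    loss = ((pred - z[1:]) ** 2).mean() + 1e-2 * A.abs().sum()
    loss.backward()
    opt.step()

print((A.abs() > 0.05).int())   # recovered sparse graph support
```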
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Generalizing Nonlinear ICA Beyond Structural Sparsity [15.450470872782082]
The identifiability of nonlinear ICA is known to be impossible without additional assumptions.
Recent advances have proposed conditions on the connective structure from sources to observed variables, known as Structural Sparsity.
We show that even in cases with flexible grouping structures, appropriate identifiability results can be established.
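One common formalization of a structural-sparsity condition from this line of work requires that, for every source, intersecting the supports of all observations depending on it singles that source out. This is our paraphrase of the literature, not necessarily this paper's exact assumption; a quick check:

```python
import numpy as np

def satisfies_structural_sparsity(support):
    """Check a support-intersection condition on a boolean mixing support.

    support[r, i] is True if observed variable r depends on source i.
    For each source i, intersect the supports of all observations that
    depend on i; the condition holds if the intersection is exactly {i}.
    """
    n_obs, n_src = support.shape
    for i in range(n_src):
        rows = support[support[:, i]]   # observations that depend on source i
        if rows.size == 0:
            return False
        inter = rows.all(axis=0)        # intersection of their supports
        if inter.sum() != 1 or not inter[i]:
            return False
    return True

S = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=bool)
print(satisfies_structural_sparsity(S))   # True for this pattern
```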
arXiv Detail & Related papers (2023-11-01T21:36:15Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data. One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- On the Identifiability of Quantized Factors [33.12356885773274]
We show that it is possible to recover quantized latent factors under a generic nonlinear diffeomorphism.
We introduce this novel form of identifiability, termed quantized factor identifiability, and provide a comprehensive proof of the recovery of the quantized factors.
arXiv Detail & Related papers (2023-06-28T16:10:01Z)
- Learning nonparametric latent causal graphs with unknown interventions [18.6470340274888]
We establish conditions under which latent causal graphs are nonparametrically identifiable.
We do not assume the number of hidden variables is known, and we show that at most one unknown intervention per hidden variable is needed.
arXiv Detail & Related papers (2023-06-05T14:06:35Z)
- Temporally Disentangled Representation Learning [14.762231867144065]
It is unknown whether the underlying latent variables and their causal relations are identifiable when they have arbitrary, nonparametric causal influences between them.
We propose TDRL, a principled framework to recover time-delayed latent causal variables.
Our approach considerably outperforms existing baselines that do not correctly exploit this modular representation of changes.
arXiv Detail & Related papers (2022-10-24T23:02:49Z) - Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z) - Weakly Supervised Representation Learning with Sparse Perturbations [82.39171485023276]
We show that if one has weak supervision from observations generated by sparse perturbations of the latent variables, identification is achievable under unknown continuous latent distributions.
We propose a natural estimation procedure based on this theory and illustrate it on low-dimensional synthetic and image-based experiments.
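The weak-supervision setup is easy to picture with a toy data generator in Python. The mixing function below is an arbitrary stand-in; the estimation procedure itself is in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 6, 1, 1000          # latent dim, perturbation sparsity, sample pairs

W = rng.normal(size=(10, d))  # stand-in for an unknown nonlinear mixing
mix = lambda z: np.tanh(z @ W.T)

pairs = []
for _ in range(n):
    z = rng.normal(size=d)
    z_pert = z.copy()
    idx = rng.choice(d, size=k, replace=False)   # only k latents change
    z_pert[idx] = rng.normal(size=k)
    pairs.append((mix(z), mix(z_pert)))          # the learner only sees these
```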
arXiv Detail & Related papers (2022-06-02T15:30:07Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New
Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods that formalize the goal of recovering independent latent variables and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
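Identifiability "up to a permutation" is typically checked empirically by matching learned factors to the ground truth, as in the sketch below. This is a common evaluation convention in this literature, not this paper's specific code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_up_to_permutation(z_true, z_hat):
    """Best one-to-one alignment of learned factors with ground truth."""
    d = z_true.shape[1]
    corr = np.corrcoef(z_true.T, z_hat.T)[:d, d:]    # cross-correlation block
    row, col = linear_sum_assignment(-np.abs(corr))  # maximize |correlation|
    return col, np.abs(corr[row, col]).mean()        # permutation, mean score

rng = np.random.default_rng(0)
z_true = rng.normal(size=(1000, 4))
perm = rng.permutation(4)
z_hat = z_true[:, perm] * rng.choice([-1, 1], size=4)  # permuted, sign-flipped
print(match_up_to_permutation(z_true, z_hat))          # recovers alignment, score ~1.0
```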
arXiv Detail & Related papers (2021-07-21T14:22:14Z)