Generalizing Nonlinear ICA Beyond Structural Sparsity
- URL: http://arxiv.org/abs/2311.00866v1
- Date: Wed, 1 Nov 2023 21:36:15 GMT
- Title: Generalizing Nonlinear ICA Beyond Structural Sparsity
- Authors: Yujia Zheng, Kun Zhang
- Abstract summary: The identifiability of nonlinear ICA is known to be impossible without additional assumptions.
Recent advances have proposed conditions on the connective structure from sources to observed variables, known as Structural Sparsity.
We show that even in cases with flexible grouping structures, appropriate identifiability results can be established.
- Score: 15.450470872782082
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nonlinear independent component analysis (ICA) aims to uncover the true
latent sources from their observable nonlinear mixtures. Despite its
significance, the identifiability of nonlinear ICA is known to be impossible
without additional assumptions. Recent advances have proposed conditions on the
connective structure from sources to observed variables, known as Structural
Sparsity, to achieve identifiability in an unsupervised manner. However, the
sparsity constraint may not hold universally for all sources in practice.
Furthermore, the assumptions of bijectivity of the mixing process and
independence among all sources, which arise from the setting of ICA, may also
be violated in many real-world scenarios. To address these limitations and
generalize nonlinear ICA, we propose a set of new identifiability results in
the general settings of undercompleteness, partial sparsity and source
dependence, and flexible grouping structures. Specifically, we prove
identifiability when there are more observed variables than sources
(undercomplete), and when certain sparsity and/or source independence
assumptions are not met for some changing sources. Moreover, we show that even
in cases with flexible grouping structures (e.g., part of the sources can be
divided into irreducible independent groups with various sizes), appropriate
identifiability results can also be established. Theoretical claims are
supported empirically on both synthetic and real-world datasets.
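As a concrete illustration of the setting described in the abstract, the following is a minimal synthetic-data sketch, not the authors' code: all names, distributions, and parameter choices are illustrative assumptions. It generates n latent sources mixed into m > n observed variables (undercompleteness) through a nonlinear map whose source-to-observation connective structure is sparse (Structural Sparsity).
```python
# Minimal synthetic-data sketch (illustrative assumptions only, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_observed, n_samples = 3, 5, 1000  # m = 5 > n = 3: undercomplete case

# Independent latent sources (the ground truth to be identified).
s = rng.laplace(size=(n_samples, n_sources))

# Sparse binary support: entry (i, j) = 1 means source j influences observation i,
# so each observed variable depends on only a subset of the sources (Structural Sparsity).
support = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
])

# Nonlinear mixing that respects the support: observation i is a nonlinear
# function of only its connected sources.
W1 = rng.normal(size=(n_observed, n_sources)) * support
W2 = rng.normal(size=(n_observed, n_sources)) * support
x = np.tanh(s @ W1.T) + 0.3 * (s @ W2.T) ** 3

print(x.shape)  # (1000, 5): more observed variables than sources
```
An identifiability analysis in the paper's spirit would ask when the sources s can be recovered from x up to trivial indeterminacies; this snippet only generates data consistent with that setting.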
Related papers
- On the Identifiability of Sparse ICA without Assuming Non-Gaussianity [20.333908367541895]
We develop an identifiability theory that relies on second-order statistics without imposing further preconditions on the distribution of sources.
We propose two estimation methods based on second-order statistics and a sparsity constraint.
arXiv Detail & Related papers (2024-08-19T18:51:42Z)
- Detecting and Identifying Selection Structure in Sequential Data [53.24493902162797]
We argue that the selective inclusion of data points based on latent objectives is common in practical situations, such as music sequences.
We show that selection structure is identifiable without any parametric assumptions or interventional experiments.
We also propose a provably correct algorithm to detect and identify selection structures as well as other types of dependencies.
arXiv Detail & Related papers (2024-06-29T20:56:34Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Experimental full network nonlocality with independent sources and strict locality constraints [59.541438315564854]
Nonlocality in networks gives rise to phenomena radically different from those in standard Bell scenarios.
We experimentally observe full network nonlocality in a network where the source-independence, locality, and measurement-independence loopholes are closed.
Our experiment violates known inequalities characterizing non-full network nonlocal correlations by over five standard deviations.
arXiv Detail & Related papers (2023-02-05T20:03:58Z)
- On the Identifiability of Nonlinear ICA: Sparsity and Beyond [20.644375143901488]
How to make the nonlinear ICA model identifiable up to certain trivial indeterminacies is a long-standing problem in unsupervised learning.
Recent breakthroughs reformulate the standard independence assumption of sources as conditional independence given some auxiliary variables.
We show that under specific instantiations of such constraints, the independent latent sources can be identified from their nonlinear mixtures up to a permutation.
arXiv Detail & Related papers (2022-06-15T18:24:22Z)
- On Finite-Sample Identifiability of Contrastive Learning-Based Nonlinear Independent Component Analysis [11.012445089716016]
This work puts forth a finite-sample identifiability analysis of GCL-based nICA.
Our framework judiciously combines the properties of the GCL loss function, statistical analysis, and numerical differentiation.
arXiv Detail & Related papers (2022-06-14T04:59:08Z)
- Non-Linear Spectral Dimensionality Reduction Under Uncertainty [107.01839211235583]
We propose a new dimensionality reduction framework, called NGEU, which leverages uncertainty information and directly extends several traditional approaches.
We show that the proposed NGEU formulation exhibits a global closed-form solution, and we analyze, based on the Rademacher complexity, how the underlying uncertainties theoretically affect the generalization ability of the framework.
arXiv Detail & Related papers (2022-02-09T19:01:33Z)
- Causal Discovery in Linear Structural Causal Models with Deterministic Relations [27.06618125828978]
We focus on the task of causal discovery from observational data.
We derive a set of necessary and sufficient conditions for unique identifiability of the causal structure.
arXiv Detail & Related papers (2021-10-30T21:32:42Z)
- Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize the goal of recovering latent variables from their mixtures and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z)
- Independent mechanism analysis, a new concept? [3.2548794659022393]
Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process.
We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation.
arXiv Detail & Related papers (2021-06-09T16:45:00Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)