Learning nonparametric latent causal graphs with unknown interventions
- URL: http://arxiv.org/abs/2306.02899v2
- Date: Fri, 3 Nov 2023 04:27:59 GMT
- Title: Learning nonparametric latent causal graphs with unknown interventions
- Authors: Yibo Jiang, Bryon Aragam
- Abstract summary: We establish conditions under which latent causal graphs are nonparametrically identifiable.
We do not assume the number of hidden variables is known, and we show that at most one unknown intervention per hidden variable is needed.
- Score: 18.6470340274888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We establish conditions under which latent causal graphs are
nonparametrically identifiable and can be reconstructed from unknown
interventions in the latent space. Our primary focus is the identification of
the latent structure in measurement models without parametric assumptions such
as linearity or Gaussianity. Moreover, we do not assume the number of hidden
variables is known, and we show that at most one unknown intervention per
hidden variable is needed. This extends a recent line of work on learning
causal representations from observations and interventions. The proofs are
constructive and introduce two new graphical concepts -- imaginary subsets and
isolated edges -- that may be useful in their own right. As a matter of
independent interest, the proofs also involve a novel characterization of the
limits of edge orientations within the equivalence class of DAGs induced by
unknown interventions. These are the first results to characterize the
conditions under which causal representations are identifiable without making
any parametric assumptions in a general setting with unknown interventions and
without faithfulness.
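The abstract describes a measurement-model setting: a latent DAG over hidden variables, observed variables generated nonparametrically from the latents, and at most one unknown-target intervention per hidden variable. Below is a minimal NumPy sketch of such a data-generating setup, assuming a three-node latent chain Z1 -> Z2 -> Z3; the function names (sample_latents, measure) are hypothetical and illustrate the setting only, not the paper's identification procedure.

```python
# Illustrative sketch (assumptions, not the paper's method): a latent DAG
# Z1 -> Z2 -> Z3, a nonlinear measurement model mapping latents to observed
# variables, and one single-node intervention per latent with unknown target.
import numpy as np

rng = np.random.default_rng(0)

def sample_latents(n, intervene_on=None):
    """Draw n samples of (Z1, Z2, Z3); optionally replace one node's
    mechanism with an independent exogenous draw (a hard intervention)."""
    z1 = rng.normal(size=n)
    if intervene_on == 0:
        z1 = rng.uniform(-2, 2, size=n)          # intervention on Z1
    z2 = np.tanh(z1) + 0.5 * rng.normal(size=n)  # nonlinear mechanism
    if intervene_on == 1:
        z2 = rng.uniform(-2, 2, size=n)          # intervention on Z2
    z3 = z2 ** 2 + 0.5 * rng.normal(size=n)      # non-Gaussian allowed
    if intervene_on == 2:
        z3 = rng.uniform(-2, 2, size=n)          # intervention on Z3
    return np.stack([z1, z2, z3], axis=1)

def measure(Z):
    """Measurement model: each observed X_j is a nonlinear child of one
    latent; no linearity or Gaussianity is assumed in the theory."""
    noise = 0.1 * rng.normal(size=Z.shape)
    return np.sin(Z) + Z + noise

# Observational data plus one unknown-target intervention per hidden
# variable, matching the "at most one intervention per latent" condition.
datasets = [measure(sample_latents(1000))] + [
    measure(sample_latents(1000, intervene_on=i)) for i in range(3)
]
```

In the paper's setting the intervention targets above would not be known to the learner; the identifiability results concern recovering the latent graph from such a collection of datasets alone.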
Related papers
- Identifiability Guarantees for Causal Disentanglement from Purely Observational Data [10.482728002416348]
Causal disentanglement aims to learn about latent causal factors behind data.
Recent advances establish identifiability results assuming that interventions on (single) latent factors are available.
We provide a precise characterization of latent factors that can be identified in nonlinear causal models.
arXiv Detail & Related papers (2024-10-31T04:18:29Z)
- Learning Causal Representations from General Environments: Identifiability and Intrinsic Ambiguity [27.630223763160515]
We provide the first identifiability results based on data stemming from general environments.
We show that for linear causal models, while the causal graph can be fully recovered, the latent variables are only identified up to the surrounded-node ambiguity (SNA).
We also propose an algorithm, LiNGCReL, which provably recovers the ground-truth model up to SNA.
arXiv Detail & Related papers (2023-11-21T01:09:11Z)
- A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning [68.76846801719095]
We re-examine exactly when and where double descent occurs, and show that its location is not inherently tied to the interpolation threshold p = n.
This provides a resolution to tensions between double descent and statistical intuition.
arXiv Detail & Related papers (2023-10-29T12:05:39Z)
- Identification of Nonlinear Latent Hierarchical Models [38.925635086396596]
We develop an identification criterion in the form of novel identifiability guarantees for an elementary latent variable model.
To the best of our knowledge, our work is the first to establish identifiability guarantees for both causal structures and latent variables in nonlinear latent hierarchical models.
arXiv Detail & Related papers (2023-06-13T17:19:37Z)
- Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Weakly Supervised Representation Learning with Sparse Perturbations [82.39171485023276]
We show that if one has weak supervision from observations generated by sparse perturbations of the latent variables, identification is achievable under unknown continuous latent distributions.
We propose a natural estimation procedure based on this theory and illustrate it on low-dimensional synthetic and image-based experiments.
arXiv Detail & Related papers (2022-06-02T15:30:07Z)
- Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods that formalize the goal of recovering independent latent variables and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)