Score-based Causal Representation Learning with Interventions
- URL: http://arxiv.org/abs/2301.08230v2
- Date: Mon, 1 May 2023 14:16:46 GMT
- Title: Score-based Causal Representation Learning with Interventions
- Authors: Burak Varici, Emre Acarturk, Karthikeyan Shanmugam, Abhishek Kumar,
Ali Tajer
- Abstract summary: This paper studies the causal representation learning problem when latent causal variables are observed indirectly.
The objectives are: (i) recovering the unknown linear transformation (up to scaling) and (ii) determining the directed acyclic graph (DAG) underlying the latent variables.
- Score: 54.735484409244386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies the causal representation learning problem when the latent
causal variables are observed indirectly through an unknown linear
transformation. The objectives are: (i) recovering the unknown linear
transformation (up to scaling) and (ii) determining the directed acyclic graph
(DAG) underlying the latent variables. Sufficient conditions for DAG recovery
are established, and it is shown that a large class of non-linear models in the
latent space (e.g., causal mechanisms parameterized by two-layer neural
networks) satisfy these conditions. These sufficient conditions ensure that the
effect of an intervention can be detected correctly from changes in the score.
Capitalizing on this result, recovery of a valid transformation is facilitated
by the following key property: under any valid transformation, the latent
variables' score function necessarily exhibits the minimal variation across
different interventional environments. This property is leveraged for perfect
recovery of the latent DAG structure using only \emph{soft} interventions. For
the special case of stochastic \emph{hard} interventions, with an additional
hypothesis testing step, one can also uniquely recover the linear
transformation up to scaling and a valid causal ordering.
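As a rough illustration of the score-change property described in the abstract (not the authors' algorithm), the following NumPy sketch uses an assumed three-node linear Gaussian SCM Z1 -> Z2 -> Z3: under a stochastic hard intervention, the latent score difference between the observational and interventional environments is supported only on the intervened node and its parents, whereas the same difference seen through an unknown linear mixing G is generically dense; applying a valid unmixing restores the sparsity, which is the "minimal score variation" signal referred to above. The graph, edge weights, and noise variances are illustrative assumptions.
```python
# Minimal sketch of the score-change property for a linear Gaussian latent SCM.
import numpy as np

rng = np.random.default_rng(0)
d = 3

# Assumed latent SCM over Z1 -> Z2 -> Z3: z = A.T @ z + eps, where A[j, i] is the edge j -> i.
A = np.array([[0.0, 1.5, 0.0],
              [0.0, 0.0, -0.8],
              [0.0, 0.0, 0.0]])
D = np.diag([1.0, 0.5, 2.0])          # noise variances
B = np.eye(d) - A.T                    # z = B^{-1} eps
Theta = B.T @ np.linalg.inv(D) @ B     # precision of z; Gaussian score is s(z) = -Theta @ z

# Stochastic hard intervention on node i = 2 (Z3): cut its incoming edges and change its noise.
i = 2
A_int = A.copy(); A_int[:, i] = 0.0
D_int = D.copy(); D_int[i, i] = 0.3
B_int = np.eye(d) - A_int.T
Theta_int = B_int.T @ np.linalg.inv(D_int) @ B_int

# Latent-space score difference: nonzero only for Z3 and its parent Z2 (first entry is 0).
z = rng.standard_normal(d)
diff_latent = -(Theta_int - Theta) @ z
print("latent score difference:", np.round(diff_latent, 3))

# Unknown linear mixing X = G @ Z. Scores transform as s_X(x) = G^{-T} s_Z(G^{-1} x),
# so the observed-domain score difference is generically dense in all coordinates ...
G = rng.standard_normal((d, d))        # assumed invertible mixing
diff_obs = np.linalg.inv(G).T @ diff_latent
print("observed score difference:", np.round(diff_obs, 3))

# ... while pulling it back through the correct unmixing recovers the sparse pattern,
# i.e., the score varies minimally across environments under a valid transformation.
recovered = G.T @ diff_obs
print("after valid unmixing:", np.round(recovered, 3))
```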
Related papers
- Causality Pursuit from Heterogeneous Environments via Neural Adversarial Invariance Learning [12.947265104477237]
Pursuing causality from data is a fundamental problem in scientific discovery, treatment intervention, and transfer learning.
The proposed Focused Adversarial Invariant Regularization (FAIR) framework utilizes an innovative minimax optimization approach.
It is shown that FAIR-NN can find the invariant variables and quasi-causal variables under a minimal identification condition.
arXiv Detail & Related papers (2024-05-07T23:37:40Z) - Score-based Causal Representation Learning: Linear and General Transformations [31.786444957887472]
The paper addresses both the identifiability and achievability aspects.
It designs a score-based class of algorithms that ensures both identifiability and achievability.
Results are empirically validated via experiments on structured synthetic data and image data.
arXiv Detail & Related papers (2024-02-01T18:40:03Z) - General Identifiability and Achievability for Causal Representation
Learning [33.80247458590611]
The paper establishes identifiability and achievability results using two hard uncoupled interventions per node in the latent causal graph.
For identifiability, the paper establishes that perfect recovery of the latent causal model and variables is guaranteed under uncoupled interventions.
The analysis additionally recovers the identifiability result for two hard coupled interventions, that is, when metadata is known about which pairs of environments intervene on the same node.
arXiv Detail & Related papers (2023-10-24T01:47:44Z) - Posterior Collapse and Latent Variable Non-identifiability [54.842098835445]
We propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility.
Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
arXiv Detail & Related papers (2023-01-02T06:16:56Z) - Linear Causal Disentanglement via Interventions [8.444187296409051]
Causal disentanglement seeks a representation of data involving latent variables that relate to one another via a causal model.
We study observed variables that are a linear transformation of a linear latent causal model.
arXiv Detail & Related papers (2022-11-29T18:43:42Z) - Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z) - Disentangling Generative Factors of Physical Fields Using Variational
Autoencoders [0.0]
This work explores the use of variational autoencoders (VAEs) for non-linear dimension reduction.
A disentangled decomposition is interpretable and can be transferred to a variety of tasks including generative modeling.
arXiv Detail & Related papers (2021-09-15T16:02:43Z) - Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New
Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods that formalize this goal and provide estimation procedures for practical applications.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z) - Variational Causal Networks: Approximate Bayesian Inference over Causal
Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.