Learning Latent Causal Dynamics
- URL: http://arxiv.org/abs/2202.04828v2
- Date: Fri, 11 Feb 2022 03:14:47 GMT
- Title: Learning Latent Causal Dynamics
- Authors: Weiran Yao, Guangyi Chen and Kun Zhang
- Abstract summary: We propose a principled framework, called LiLY, to first recover time-delayed latent causal variables and then identify their relations from measured temporal data under different distribution shifts.
The correction step is then formulated as learning the low-dimensional change factors with a few samples.
- Score: 14.762231867144065
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: One critical challenge of time-series modeling is how to learn and quickly
correct the model under unknown distribution shifts. In this work, we propose a
principled framework, called LiLY, to first recover time-delayed latent causal
variables and identify their relations from measured temporal data under
different distribution shifts. The correction step is then formulated as
learning the low-dimensional change factors with a few samples from the new
environment, leveraging the identified causal structure. Specifically, the
framework factorizes unknown distribution shifts into transition distribution
changes caused by fixed dynamics and time-varying latent causal relations, and
by global changes in observation. We establish the identifiability theories of
nonparametric latent causal dynamics from their nonlinear mixtures under fixed
dynamics and under changes. Through experiments, we show that time-delayed
latent causal influences are reliably identified from observed variables under
different distribution changes. By exploiting this modular representation of
changes, we can efficiently learn to correct the model under unknown
distribution shifts with only a few samples.
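As a rough illustration of the modular design the abstract describes, the sketch below factorizes a time-series model into a fixed mixing function, fixed transition dynamics, and a low-dimensional vector of change factors; adapting to a new environment then re-fits only the change factors. This is a minimal hypothetical sketch in PyTorch, not the authors' LiLY implementation; all module names, dimensions, and losses are assumptions.

```python
# Hypothetical sketch of the factorized model described in the abstract
# (not the authors' code): a mixing function maps latents to observations,
# a transition network models time-delayed latent dynamics, and a small
# vector of change factors "theta" captures distribution shift.
import torch
import torch.nn as nn

class LatentDynamicsModel(nn.Module):
    def __init__(self, latent_dim=4, obs_dim=8, change_dim=2):
        super().__init__()
        # Nonlinear mixing: z_t -> x_t, and an approximate inverse
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.LeakyReLU(), nn.Linear(32, obs_dim))
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.LeakyReLU(), nn.Linear(32, latent_dim))
        # Fixed transition dynamics, modulated by low-dimensional change factors
        self.transition = nn.Sequential(
            nn.Linear(latent_dim + change_dim, 32), nn.LeakyReLU(),
            nn.Linear(32, latent_dim))
        # Per-environment change factors (the only parameters re-fit later)
        self.theta = nn.Parameter(torch.zeros(change_dim))

    def forward(self, x_prev, x_next):
        z_prev, z_next = self.encoder(x_prev), self.encoder(x_next)
        theta = self.theta.expand(z_prev.shape[0], -1)
        z_pred = self.transition(torch.cat([z_prev, theta], dim=-1))
        # Reconstruction + one-step latent prediction losses
        recon = ((self.decoder(z_next) - x_next) ** 2).mean()
        pred = ((z_pred - z_next) ** 2).mean()
        return recon + pred

def adapt_to_new_environment(model, x_prev, x_next, steps=100):
    """Few-shot correction: only the change factors are updated,
    since only theta is passed to the optimizer."""
    opt = torch.optim.Adam([model.theta], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = model(x_prev, x_next)
        loss.backward()
        opt.step()
    return model
```

The point of the factorization is that theta is the only per-environment quantity, so a few samples from a new environment suffice to re-estimate it while the identified dynamics and mixing function stay frozen.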
Related papers
- Causal Temporal Representation Learning with Nonstationary Sparse Transition [22.6420431022419]
Causal Temporal Representation Learning (Ctrl) methods aim to identify the temporal causal dynamics of complex nonstationary temporal sequences.
This work adopts a sparse transition assumption, aligned with intuitive human understanding, and presents identifiability results from a theoretical perspective.
We introduce a novel framework, Causal Temporal Representation Learning with Nonstationary Sparse Transition (CtrlNS), designed to leverage the constraints on transition sparsity.
arXiv Detail & Related papers (2024-09-05T00:38:27Z)
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics (IDOL).
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
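To make the sparsity idea concrete, here is a minimal hypothetical sketch (not the paper's method) of regularizing a learned causal graph to be sparse: a soft adjacency mask gates which parents each latent mechanism may read, and an L1 penalty on the mask prunes edges.

```python
# Hypothetical illustration of mechanism sparsity regularization (not the
# paper's code): a learnable soft adjacency mask gates which past latent
# factors each mechanism may read; an L1 penalty pushes it to be sparse.
import torch
import torch.nn as nn

class SparseMechanisms(nn.Module):
    def __init__(self, latent_dim=4):
        super().__init__()
        # logits of a soft causal graph over latent factors (z_{t-1} -> z_t)
        self.graph_logits = nn.Parameter(torch.zeros(latent_dim, latent_dim))
        # one small mechanism network per latent factor
        self.mechanisms = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim, 16), nn.Tanh(), nn.Linear(16, 1))
            for _ in range(latent_dim))

    def forward(self, z_prev):
        mask = torch.sigmoid(self.graph_logits)  # soft adjacency in [0, 1]
        # each factor z_t[i] sees only its masked parents in z_{t-1}
        outs = [m(z_prev * mask[i]) for i, m in enumerate(self.mechanisms)]
        return torch.cat(outs, dim=-1)

    def sparsity_penalty(self):
        # L1 on the soft graph encourages few edges (mechanism sparsity)
        return torch.sigmoid(self.graph_logits).sum()
```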
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Nonlinearity, Feedback and Uniform Consistency in Causal Structural Learning [0.8158530638728501]
Causal Discovery aims to find automated search methods for learning causal structures from observational data.
This thesis focuses on two questions in causal discovery: (i) providing an alternative definition of k-Triangle Faithfulness that is weaker than strong faithfulness when applied to the Gaussian family of distributions, and (ii) establishing uniform consistency under the assumption that this modified version of Strong Faithfulness holds.
arXiv Detail & Related papers (2023-08-15T01:23:42Z)
- Disentanglement of Latent Representations via Causal Interventions [11.238098505498165]
We introduce a new method for disentanglement inspired by causal dynamics.
Our model considers the quantized vectors as causal variables and links them in a causal graph.
It performs causal interventions on the graph and generates atomic transitions affecting a unique factor of variation in the image.
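A minimal hypothetical sketch of the intervention step (the function and indices below are illustrative, not the paper's API): a do-style swap of a single quantized code, holding the others fixed, yields an atomic transition that should alter one factor of variation in the decoded image.

```python
# Hypothetical sketch of a do-style intervention on quantized latent codes
# (not the paper's implementation): replacing one code while holding the
# others fixed yields an "atomic transition" on a single factor.
import torch

def intervene(codes: torch.Tensor, index: int, new_value: int) -> torch.Tensor:
    """do(z_index = new_value): swap one quantized latent, keep the rest."""
    intervened = codes.clone()
    intervened[:, index] = new_value
    return intervened

# usage (decode() is a hypothetical decoder): compare decode(codes) with
# decode(intervene(codes, index=2, new_value=7)) -- ideally only one
# factor of variation in the generated image differs.
```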
arXiv Detail & Related papers (2023-02-02T04:37:29Z)
- Temporally Disentangled Representation Learning [14.762231867144065]
It is unknown whether the underlying latent variables and their causal relations are identifiable if they have arbitrary, nonparametric causal influences in between.
We propose TDRL, a principled framework to recover time-delayed latent causal variables.
Our approach considerably outperforms existing baselines that do not correctly exploit this modular representation of changes.
arXiv Detail & Related papers (2022-10-24T23:02:49Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
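A minimal hypothetical sketch of such an autoregressive family over DAGs (not the paper's model): edges of the upper triangle under a fixed node order are sampled one at a time, with each Bernoulli probability conditioned on previously sampled edges through an LSTM cell; the fixed order guarantees acyclicity.

```python
# Hypothetical sketch of an autoregressive variational family over DAGs
# (not the paper's model): each edge probability is conditioned on the
# edges sampled so far. A fixed node order ensures every sample is a DAG.
import torch
import torch.nn as nn

class AutoregressiveDAGPosterior(nn.Module):
    def __init__(self, num_nodes=4, hidden=32):
        super().__init__()
        # upper-triangular edge slots under a fixed topological order
        self.pairs = [(i, j) for i in range(num_nodes)
                      for j in range(i + 1, num_nodes)]
        self.rnn = nn.LSTMCell(1, hidden)
        self.head = nn.Linear(hidden, 1)
        self.num_nodes = num_nodes
        self.hidden = hidden

    def sample(self):
        """Draw one DAG adjacency matrix and its log-probability."""
        adj = torch.zeros(self.num_nodes, self.num_nodes)
        h = torch.zeros(1, self.hidden)
        c = torch.zeros(1, self.hidden)
        prev = torch.zeros(1, 1)   # previously sampled edge, fed back in
        log_prob = torch.zeros(())
        for (i, j) in self.pairs:
            h, c = self.rnn(prev, (h, c))
            p = torch.sigmoid(self.head(h)).squeeze()
            edge = torch.bernoulli(p)
            log_prob = log_prob + edge * torch.log(p) + (1 - edge) * torch.log(1 - p)
            adj[i, j] = edge
            prev = edge.view(1, 1)
        return adj, log_prob
```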
arXiv Detail & Related papers (2021-06-14T17:52:49Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)