Learning Temporally Causal Latent Processes from General Temporal Data
- URL: http://arxiv.org/abs/2110.05428v1
- Date: Mon, 11 Oct 2021 17:16:19 GMT
- Title: Learning Temporally Causal Latent Processes from General Temporal Data
- Authors: Weiran Yao, Yuewen Sun, Alex Ho, Changyin Sun, Kun Zhang
- Abstract summary: We propose two provable conditions under which temporally causal latent processes can be identified from their nonlinear mixtures.
Experimental results on various data sets demonstrate that temporally causal latent processes are reliably identified from observed variables.
- Score: 22.440008291454287
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Our goal is to recover time-delayed latent causal variables and identify
their relations from measured temporal data. Estimating causally-related latent
variables from observations is particularly challenging as the latent variables
are not uniquely recoverable in the most general case. In this work, we
consider both a nonparametric, nonstationary setting and a parametric setting
for the latent processes and propose two provable conditions under which
temporally causal latent processes can be identified from their nonlinear
mixtures. We propose LEAP, a theoretically-grounded architecture that extends
Variational Autoencoders (VAEs) by enforcing our conditions through proper
constraints in the causal process prior. Experimental results on various data sets
demonstrate that temporally causal latent processes are reliably identified
from observed variables under different dependency structures and that our
approach considerably outperforms baselines that do not leverage history or
nonstationarity information. This is one of the first works that successfully
recover time-delayed latent processes from nonlinear mixtures without using
sparsity or minimality assumptions.
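To make the architecture concrete, below is a minimal sketch (not the authors' code) of the LEAP idea: a sequential VAE whose latent prior p(z_t | z_{t-1}) is a learned causal transition model, so that the KL term enforces the causal process prior. PyTorch is assumed; the module names, MLP sizes, single time lag, and Gaussian transition are illustrative simplifications of the paper's richer constraints.
```python
# A minimal sketch, assuming PyTorch; names and sizes are illustrative.
import torch
import torch.nn as nn

class LEAPSketch(nn.Module):
    def __init__(self, obs_dim=64, latent_dim=8):
        super().__init__()
        # Encoder q(z_t | x_t): amortized diagonal-Gaussian posterior.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, 2 * latent_dim))
        # Decoder p(x_t | z_t): the nonlinear mixing function.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, obs_dim))
        # Causal process prior p(z_t | z_{t-1}): predicts the distribution of
        # each latent at time t from the full time-delayed latent vector.
        self.transition = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 2 * latent_dim))

    def forward(self, x):
        # x: (batch, time, obs_dim)
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        x_hat = self.decoder(z)
        # Prior parameters for z_t come from z_{t-1}; the first step falls
        # back to a standard normal prior (zero mean, unit variance).
        p_mu, p_logvar = self.transition(z[:, :-1]).chunk(2, dim=-1)
        p_mu = torch.cat([torch.zeros_like(mu[:, :1]), p_mu], dim=1)
        p_logvar = torch.cat([torch.zeros_like(logvar[:, :1]), p_logvar], dim=1)
        recon = (x_hat - x).pow(2).sum(-1).mean()
        # KL between the diagonal Gaussians q(z_t | x_t) and p(z_t | z_{t-1}).
        kl = 0.5 * (p_logvar - logvar
                    + (logvar.exp() + (mu - p_mu).pow(2)) / p_logvar.exp()
                    - 1).sum(-1).mean()
        return recon + kl  # negative ELBO up to constants
```
Training amounts to minimizing this loss over observed sequences, e.g. `LEAPSketch()(torch.randn(16, 10, 64))`; the history-dependent prior is precisely the information that the baselines mentioned in the abstract discard.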
Related papers
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics (IDOL).
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- Causal Inference from Slowly Varying Nonstationary Processes [2.3072402651280517]
Causal inference from observational data hinges on the asymmetry between cause and effect that arises from the data-generating mechanisms.
We propose a new class of restricted structural causal models, via a time-varying filter and stationary noise, and exploit the asymmetry from nonstationarity for causal identification.
arXiv Detail & Related papers (2024-05-11T04:15:47Z)
- DAGnosis: Localized Identification of Data Inconsistencies using Structures [73.39285449012255]
Identifying and appropriately handling inconsistencies in data at deployment time is crucial for the reliable use of machine learning models.
We use directed acyclic graphs (DAGs) to encode the probability distribution and independencies of the training set's features as a structure.
Our method, called DAGnosis, leverages these structural interactions to draw valuable and insightful data-centric conclusions.
arXiv Detail & Related papers (2024-02-26T11:29:16Z)
- CaRiNG: Learning Temporal Causal Representation under Non-Invertible Generation Process [22.720927418184672]
We propose a principled approach to learn the CAusal RepresentatIon of Non-invertible Generative temporal data with identifiability guarantees.
Specifically, we utilize temporal context to recover lost latent information and apply the conditions in our theory to guide the training process.
arXiv Detail & Related papers (2024-01-25T22:01:07Z)
- Temporally Disentangled Representation Learning under Unknown Nonstationarity [35.195001384417395]
We introduce NCTRL, a principled estimation framework for reconstructing time-delayed latent causal variables.
Empirical evaluations demonstrate the reliable identification of time-delayed latent causal influences.
arXiv Detail & Related papers (2023-10-28T06:46:03Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Temporally Disentangled Representation Learning [14.762231867144065]
It is unknown whether the underlying latent variables and their causal relations are identifiable when the latents have arbitrary, nonparametric causal influences among them.
We propose TDRL, a principled framework to recover time-delayed latent causal variables.
Our approach considerably outperforms existing baselines that do not correctly exploit this modular representation of changes.
arXiv Detail & Related papers (2022-10-24T23:02:49Z)
- Continuous-Time Modeling of Counterfactual Outcomes Using Neural Controlled Differential Equations [84.42837346400151]
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare.
Existing causal inference approaches consider regular, discrete-time intervals between observations and treatment decisions.
We propose a controllable simulation environment based on a model of tumor growth for a range of scenarios.
arXiv Detail & Related papers (2022-06-16T17:15:15Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)