CaRiNG: Learning Temporal Causal Representation under Non-Invertible Generation Process
- URL: http://arxiv.org/abs/2401.14535v2
- Date: Thu, 30 May 2024 13:09:47 GMT
- Title: CaRiNG: Learning Temporal Causal Representation under Non-Invertible Generation Process
- Authors: Guangyi Chen, Yifan Shen, Zhenhao Chen, Xiangchen Song, Yuewen Sun, Weiran Yao, Xiao Liu, Kun Zhang
- Abstract summary: We propose a principled approach to learn the CAusal RepresentatIon of Non-invertible Generative temporal data with identifiability guarantees.
Specifically, we utilize temporal context to recover lost latent information and apply the conditions in our theory to guide the training process.
- Score: 22.720927418184672
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Identifying the underlying time-delayed latent causal processes in sequential data is vital for grasping temporal dynamics and conducting downstream reasoning. While some recent methods can robustly identify these latent causal variables, they rely on strict assumptions that the generation process from latent variables to observed data is invertible. However, these assumptions are often hard to satisfy in real-world applications involving information loss. For instance, the visual perception process projects a 3D space onto 2D images, and the phenomenon of persistence of vision blends historical data into current perceptions. To address this challenge, we establish an identifiability theory that allows for the recovery of independent latent components even when they arise from a nonlinear and non-invertible mixture. Using this theory as a foundation, we propose a principled approach, CaRiNG, to learn the CAusal RepresentatIon of Non-invertible Generative temporal data with identifiability guarantees. Specifically, we utilize temporal context to recover lost latent information and apply the conditions in our theory to guide the training process. Through experiments conducted on synthetic datasets, we validate that our CaRiNG method reliably identifies the causal process, even when the generation process is non-invertible. Moreover, we demonstrate that our approach considerably improves temporal understanding and reasoning in practical applications.
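The abstract's key idea is that a temporal context window can restore latent information lost through a non-invertible mixing function. A minimal linear toy sketch (not the CaRiNG method itself; the transition matrix `A` and mixing matrix `C` below are illustrative assumptions) shows this: a single scalar observation cannot invert a 2D latent state, but two consecutive observations can, whenever the stacked "observability" matrix is full rank.

```python
import numpy as np

# Toy non-invertible generation process:
# latent z_t in R^2 follows z_t = A @ z_{t-1}; the observation is a
# scalar x_t = C @ z_t, so a single frame loses one latent dimension.
# Stacking two consecutive observations restores invertibility when
# the matrix [C; C @ A^{-1}] is full rank -- the same intuition behind
# using temporal context to recover lost latent information.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])   # time-delayed latent transition (assumed)
C = np.array([[1.0, 1.0]])   # non-invertible 2D -> 1D mixing (assumed)

rng = np.random.default_rng(0)
z = rng.normal(size=2)
zs, xs = [], []
for _ in range(5):
    z = A @ z
    zs.append(z)
    xs.append((C @ z).item())

# Recover z_t from the context window (x_{t-1}, x_t):
# x_t = C z_t and x_{t-1} = C A^{-1} z_t, so O @ z_t = [x_t, x_{t-1}].
O = np.vstack([C, C @ np.linalg.inv(A)])
t = 4
z_hat = np.linalg.solve(O, np.array([xs[t], xs[t - 1]]))
print(np.allclose(z_hat, zs[t]))  # prints True: latent recovered
```

In this linear special case the recovery is exact; CaRiNG addresses the much harder nonlinear setting, where the theory instead gives identifiability guarantees and the conditions are enforced during training.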
Related papers
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics.
arXiv Detail & Related papers (2024-05-24T08:08:05Z) - Doubly Robust Structure Identification from Temporal Data [34.00400857111283]
Learning the causes of time-series data is a fundamental task in many applications, spanning from finance to earth sciences or bio-medical applications.
Common approaches for this task are based on vector auto-regression, and they do not take into account unknown confounding between potential causes.
We propose a new doubly robust method for Structure Identification from Temporal Data (SITD).
arXiv Detail & Related papers (2023-11-10T11:53:42Z) - Temporally Disentangled Representation Learning under Unknown Nonstationarity [36.71085734964556]
We introduce NCTRL, a principled estimation framework, to reconstruct time-delayed latent causal variables.
Empirical evaluations demonstrated the reliable identification of time-delayed latent causal influences.
arXiv Detail & Related papers (2023-10-28T06:46:03Z) - Identifiable Latent Polynomial Causal Models Through the Lens of Change [85.67870425656368]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z) - Weakly Supervised Representation Learning with Sparse Perturbations [82.39171485023276]
We show that if one has weak supervision from observations generated by sparse perturbations of the latent variables, identification is achievable under unknown continuous latent distributions.
We propose a natural estimation procedure based on this theory and illustrate it on low-dimensional synthetic and image-based experiments.
arXiv Detail & Related papers (2022-06-02T15:30:07Z) - Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z) - Causal Discovery from Conditionally Stationary Time Series [18.645887749731923]
State-Dependent Causal Inference (SDCI) is able to recover the underlying causal dependencies, provably with fully-observed states and empirically with hidden states.
Improved results over non-causal RNNs in modeling NBA player movements demonstrate the potential of our method.
arXiv Detail & Related papers (2021-10-12T18:12:57Z) - Learning Temporally Causal Latent Processes from General Temporal Data [22.440008291454287]
We propose two provable conditions under which temporally causal latent processes can be identified from their nonlinear mixtures.
Experimental results on various data sets demonstrate that temporally causal latent processes are reliably identified from observed variables.
arXiv Detail & Related papers (2021-10-11T17:16:19Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods that formalize the goal of recovering independent latent variables and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z) - Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.