Towards Identifiability of Hierarchical Temporal Causal Representation Learning
- URL: http://arxiv.org/abs/2510.18310v1
- Date: Tue, 21 Oct 2025 05:40:17 GMT
- Title: Towards Identifiability of Hierarchical Temporal Causal Representation Learning
- Authors: Zijian Li, Minghao Fu, Junxian Huang, Yifan Shen, Ruichu Cai, Yuewen Sun, Guangyi Chen, Kun Zhang
- Abstract summary: We propose a Causally Hierarchical Latent Dynamic (CHiLD) identification framework. We exploit the natural sparsity of the hierarchical structure among latent variables to identify the latent variables within each layer. The model incorporates a contextual encoder to reconstruct multi-layer latent variables and normalizing-flow-based hierarchical prior networks.
- Score: 37.974090366909586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling hierarchical latent dynamics behind time series data is critical for capturing temporal dependencies across multiple levels of abstraction in real-world tasks. However, existing temporal causal representation learning methods fail to capture such dynamics, because they cannot recover the joint distribution of hierarchical latent variables from \textit{single-timestep observed variables}. Interestingly, we find that the joint distribution of hierarchical latent variables can be uniquely determined using three conditionally independent observations. Building on this insight, we propose a Causally Hierarchical Latent Dynamic (CHiLD) identification framework. Our approach first employs temporal contextual observed variables to identify the joint distribution of multi-layer latent variables. Subsequently, we exploit the natural sparsity of the hierarchical structure among latent variables to identify the latent variables within each layer. Guided by these theoretical results, we develop a time series generative model grounded in variational inference. This model incorporates a contextual encoder to reconstruct multi-layer latent variables and normalizing-flow-based hierarchical prior networks to impose the independent-noise condition of hierarchical latent dynamics. Empirical evaluations on both synthetic and real-world datasets validate our theoretical claims and demonstrate the effectiveness of CHiLD in modeling hierarchical latent dynamics.
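The abstract's "normalizing-flow-based hierarchical prior" can be illustrated with a minimal sketch. This is not the authors' implementation: it is a hypothetical numpy example of a conditional affine flow that maps a latent `z_t` to noise given a context `h` (e.g. the previous timestep's latent), so the prior log-density follows from the change-of-variables formula and the noise terms are pushed toward independence.

```python
import numpy as np

# Hypothetical sketch (assumed names, not the paper's code): a conditional
# affine normalizing flow. Given context h, it maps latent z to noise eps via
# an invertible scale-and-shift, so
#   log p(z | h) = log p_eps(f(z; h)) + log |det d eps / d z|.

def affine_flow(z, h, w_scale, w_shift):
    """Map latent z to noise eps using a context-dependent affine transform."""
    log_scale = np.tanh(h @ w_scale)        # bounded, context-dependent log-scales
    shift = h @ w_shift                     # context-dependent shifts
    eps = (z - shift) * np.exp(-log_scale)  # invert the elementwise affine map
    log_det = -log_scale.sum()              # log |det d eps / d z| (diagonal Jacobian)
    return eps, log_det

def prior_log_density(z, h, w_scale, w_shift):
    """log p(z | h) under a standard-normal base, via change of variables."""
    eps, log_det = affine_flow(z, h, w_scale, w_shift)
    base = -0.5 * (eps ** 2 + np.log(2.0 * np.pi)).sum()
    return base + log_det

rng = np.random.default_rng(0)
d = 4
z, h = rng.normal(size=d), rng.normal(size=d)
w_scale = rng.normal(size=(d, d)) * 0.1
w_shift = rng.normal(size=(d, d)) * 0.1
lp = prior_log_density(z, h, w_scale, w_shift)
```

In a full model in the spirit of CHiLD, one such conditional flow would be stacked per layer of the latent hierarchy, with the context carrying both the previous timestep and the parent layer's latents.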
Related papers
- Learning Time-Aware Causal Representation for Model Generalization in Evolving Domains [50.66049136093248]
We develop a time-aware structural causal model (SCM) that incorporates dynamic causal factors and causal mechanism drifts. We show that our method can yield the optimal causal predictor for each time domain. Results on both synthetic and real-world datasets show that SYNC achieves superior temporal generalization performance.
arXiv Detail & Related papers (2025-06-21T14:05:37Z)
- Time Series Domain Adaptation via Latent Invariant Causal Mechanism [28.329164754662354]
Time series domain adaptation aims to transfer the complex temporal dependence from the labeled source domain to the unlabeled target domain. Recent advances leverage the stable causal mechanism over observed variables to model the domain-invariant temporal dependence. However, modeling precise causal structures in high-dimensional data, such as videos, remains challenging.
arXiv Detail & Related papers (2025-02-23T16:25:58Z)
- Sequential Representation Learning via Static-Dynamic Conditional Disentanglement [58.19137637859017]
This paper explores self-supervised disentangled representation learning within sequential data, focusing on separating time-independent and time-varying factors in videos.
We propose a new model that breaks the usual independence assumption between those factors by explicitly accounting for the causal relationship between the static/dynamic variables.
Experiments show that the proposed approach outperforms previous complex state-of-the-art techniques in scenarios where the dynamics of a scene are influenced by its content.
arXiv Detail & Related papers (2024-08-10T17:04:39Z)
- Beyond DAGs: A Latent Partial Causal Model for Multimodal Learning [80.44084021062105]
We propose a novel latent partial causal model for multimodal data, featuring two latent coupled variables, connected by an undirected edge, to represent the transfer of knowledge across modalities. Under specific statistical assumptions, we establish an identifiability result, demonstrating that representations learned by multimodal contrastive learning correspond to the latent coupled variables up to a trivial transformation. Experiments show that a pre-trained CLIP model embodies disentangled representations, enabling few-shot learning and improving domain generalization across diverse real-world datasets.
arXiv Detail & Related papers (2024-02-09T07:18:06Z)
- Discovering group dynamics in coordinated time series via hierarchical recurrent switching-state models [5.250223406627639]
We seek a computationally efficient model for a collection of time series arising from multiple interacting entities (a.k.a. "agents"). Recent models of temporal patterns across individuals fail to incorporate explicit system-level collective behavior that can influence the trajectories of individual entities. We employ a latent system-level discrete state Markov chain that provides top-down influence on latent entity-level chains, which in turn govern the emission of each observed time series.
arXiv Detail & Related papers (2024-01-26T16:06:01Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data. One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Learning Hierarchical Features with Joint Latent Space Energy-Based Prior [44.4434704520236]
We study the fundamental problem of multi-layer generator models in learning hierarchical representations.
We propose a joint latent space EBM prior model with multi-layer latent variables for effective hierarchical representation learning.
arXiv Detail & Related papers (2023-10-14T15:44:14Z)
- Learning Joint Latent Space EBM Prior Model for Multi-layer Generator [44.4434704520236]
We study the fundamental problem of learning multi-layer generator models.
We propose an energy-based model (EBM) on the joint latent space over all layers of latent variables.
Our experiments demonstrate that the learned model can be expressive in generating high-quality images.
arXiv Detail & Related papers (2023-06-10T00:27:37Z)
- Amortised Inference in Structured Generative Models with Explaining Away [16.92791301062903]
We extend the output of amortised variational inference to incorporate structured factors over multiple variables.
We show that appropriately parameterised factors can be combined efficiently with variational message passing in elaborate graphical structures.
We then fit the structured model to high-dimensional neural spiking time-series from the hippocampus of freely moving rodents.
arXiv Detail & Related papers (2022-09-12T12:52:15Z)
- Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experimental results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z)
- Variational Conditional Dependence Hidden Markov Models for Skeleton-Based Action Recognition [7.9603223299524535]
This paper revisits conventional sequential modeling approaches, aiming to address the problem of capturing time-varying temporal dependency patterns.
We propose a different formulation of HMMs, whereby the dependence on past frames is dynamically inferred from the data.
We derive a tractable inference algorithm based on the forward-backward algorithm.
arXiv Detail & Related papers (2020-02-13T23:18:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.