Causal Representation Meets Stochastic Modeling under Generic Geometry
- URL: http://arxiv.org/abs/2602.05033v1
- Date: Wed, 04 Feb 2026 20:40:53 GMT
- Title: Causal Representation Meets Stochastic Modeling under Generic Geometry
- Authors: Jiaxu Ren, Yixin Wang, Biwei Huang
- Abstract summary: We develop causal representation learning for continuous-time latent point processes. We introduce MUTATE, an identifiable variational autoencoder framework with a time-adaptive transition module. Across simulated and empirical studies, we find that MUTATE can effectively answer scientific questions.
- Score: 49.24293444627916
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning meaningful causal representations from observations has emerged as a crucial task for facilitating machine learning applications and driving scientific discoveries in fields such as climate science, biology, and physics. This process involves disentangling high-level latent variables and their causal relationships from low-level observations. Previous work in this area that achieves identifiability typically focuses on cases where the observations are either i.i.d. or follow a latent discrete-time process. Nevertheless, many real-world settings require identifying latent variables that are continuous-time stochastic processes (e.g., multivariate point processes). To this end, we develop identifiable causal representation learning for continuous-time latent stochastic point processes. We study its identifiability by analyzing the geometry of the parameter space. Furthermore, we develop MUTATE, an identifiable variational autoencoder framework with a time-adaptive transition module to infer stochastic dynamics. Across simulated and empirical studies, we find that MUTATE can effectively answer scientific questions, such as the accumulation of mutations in genomics and the mechanisms driving neuron spike triggers in response to time-varying dynamics.
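The observation model the abstract describes, latent events generated by a continuous-time point process and mapped to low-level observations through a nonlinear function, can be sketched in a few lines. This is an illustrative simulation only: the independent Poisson latents, the rates, and the mixing function `emit` are all invented here, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent process: independent homogeneous Poisson processes for simplicity
# (the paper studies richer multivariate point processes).
T = 10.0                      # observation window [0, T]
rates = np.array([1.5, 0.8])  # hypothetical intensity of each latent process

def simulate_poisson(rate, T, rng):
    """Event times of a homogeneous Poisson process on [0, T]."""
    n = rng.poisson(rate * T)
    return np.sort(rng.uniform(0.0, T, size=n))

events = [simulate_poisson(r, T, rng) for r in rates]

# Nonlinear emission: observation = g(latent event counts up to time t).
def emit(counts):
    return np.tanh(counts @ np.array([[0.7, -0.3], [0.2, 0.9]]))

grid = np.linspace(0.0, T, 5)
counts = np.stack([[np.searchsorted(e, t) for e in events] for t in grid])
obs = emit(counts.astype(float))
print(obs.shape)
```

Identifiability then asks when the latent event streams in `events` can be recovered from `obs` alone, up to tolerable ambiguities such as permutation and scaling.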
Related papers
- Neural models for prediction of spatially patterned phase transitions: methods and challenges [0.37282630026096597]
Early Warning Signal (EWS) detection has shown promise in identifying dynamical signatures of oncoming critical transitions. This paper explores the successes and shortcomings of neural EWS detection for spatially patterned phase transitions.
arXiv Detail & Related papers (2025-05-14T18:24:15Z) - Identifiable Representation and Model Learning for Latent Dynamic Systems [0.0]
We study the problem of identifiable representation and model learning for latent dynamic systems. We prove that, for linear and affine nonlinear latent dynamic systems with sparse input matrices, it is possible to identify the latent variables up to scaling.
arXiv Detail & Related papers (2024-10-23T13:55:42Z) - Causal Representation Learning in Temporal Data via Single-Parent Decoding [66.34294989334728]
Scientific research often seeks to understand the causal structure underlying high-level variables in a system.
Scientists typically collect low-level measurements, such as geographically distributed temperature readings.
We propose a differentiable method, Causal Discovery with Single-parent Decoding, that simultaneously learns the underlying latents and a causal graph over them.
arXiv Detail & Related papers (2024-10-09T15:57:50Z) - On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose an IDentification framework for instantaneOus Latent dynamics (IDOL).
arXiv Detail & Related papers (2024-05-24T08:08:05Z) - Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z) - A Deep Learning Approach to Analyzing Continuous-Time Systems [20.89961728689037]
We show that deep learning can be used to analyze complex processes.
Our approach relaxes standard assumptions that are implausible for many natural systems.
We demonstrate substantial improvements on behavioral and neuroimaging data.
arXiv Detail & Related papers (2022-09-25T03:02:31Z) - DriPP: Driven Point Processes to Model Stimuli Induced Patterns in M/EEG Signals [62.997667081978825]
We develop a novel statistical point process model called driven temporal point processes (DriPP).
We derive a fast and principled expectation-maximization (EM) algorithm to estimate the parameters of this model.
Results on standard MEG datasets demonstrate that our methodology reveals event-related neural responses.
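The E/M alternation behind such estimators can be illustrated on a much simpler model than DriPP's. In this hedged sketch, each observed response latency is either "spontaneous" or "stimulus-driven", each modeled by an exponential with a known rate, and EM estimates only the driven fraction `pi`; the distributions, rates, and parameters are all invented for the illustration and are not DriPP's actual updates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic latencies: a mixture of "driven" (fast) and "spontaneous"
# (slow) exponential delays with known rates; only pi is unknown.
n = 2000
true_pi = 0.7
driven = rng.random(n) < true_pi
x = np.where(driven, rng.exponential(1 / 4.0, n), rng.exponential(1 / 0.5, n))

def exp_pdf(x, rate):
    return rate * np.exp(-rate * x)

pi = 0.5
for _ in range(100):
    # E-step: posterior probability that each latency is stimulus-driven.
    num = pi * exp_pdf(x, 4.0)
    den = num + (1 - pi) * exp_pdf(x, 0.5)
    resp = num / den
    # M-step: re-estimate the mixing proportion from the responsibilities.
    pi = resp.mean()

print(pi)
```

The estimate converges toward the true driven fraction; DriPP's EM plays the same game with point-process likelihoods and stimulus-locked kernels in place of this toy mixture.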
arXiv Detail & Related papers (2021-12-08T13:07:21Z) - Counterfactual Temporal Point Processes [18.37409880250174]
We develop a causal model of thinning for temporal point processes that builds upon the Gumbel-Max structural causal model.
We then simulate counterfactual realizations of the temporal point process under a given alternative intensity function.
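The counterfactual-simulation idea can be sketched with thinning for an inhomogeneous Poisson process: sample candidate events at a dominating rate, record the acceptance noise, then replay the same candidates and noise under an alternative intensity. Sharing the noise across the two runs is what makes the second run a counterfactual rather than an independent resimulation. The intensities below are invented for this sketch, and the paper's Gumbel-Max construction for general (history-dependent) point processes is richer than this Poisson special case.

```python
import numpy as np

rng = np.random.default_rng(2)

T, lam_max = 10.0, 2.0

def intensity(t):          # factual intensity (hypothetical)
    return 1.0 + np.sin(t) ** 2

def intensity_alt(t):      # alternative "what if" intensity (hypothetical)
    return 0.5 + 0.5 * np.sin(t) ** 2

# Homogeneous candidates at the dominating rate, plus shared thinning noise.
n = rng.poisson(lam_max * T)
cand = np.sort(rng.uniform(0.0, T, n))
u = rng.random(n)

# Thinning: accept candidate t with probability intensity(t) / lam_max,
# reusing the SAME uniforms u for the counterfactual replay.
factual = cand[u < intensity(cand) / lam_max]
counterfactual = cand[u < intensity_alt(cand) / lam_max]

# Monotone coupling: since intensity_alt <= intensity pointwise here,
# every counterfactual event is also a factual event.
assert set(counterfactual) <= set(factual)
```

With the alternative intensity everywhere below the factual one, the counterfactual realization is a thinned-out subset of the factual events, exactly the kind of "which events would still have occurred" question counterfactual point processes answer.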
arXiv Detail & Related papers (2021-11-15T08:46:25Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize this goal and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.