Spatiodynamic inference using vision-based generative modelling
- URL: http://arxiv.org/abs/2507.22256v1
- Date: Tue, 29 Jul 2025 22:10:50 GMT
- Title: Spatiodynamic inference using vision-based generative modelling
- Authors: Jun Won Park, Kangyu Zhao, Sanket Rane
- Abstract summary: We develop a simulation-based inference framework that employs vision transformer-driven variational encoding. The central idea is to construct a fine-grained, structured mesh of latent representations through systematic exploration of the parameter space. By integrating generative modeling with Bayesian principles, our approach provides a unified inference framework.
- Score: 0.5461938536945723
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Biological systems commonly exhibit complex spatiotemporal patterns whose underlying generative mechanisms pose a significant analytical challenge. Traditional approaches to spatiodynamic inference rely on dimensionality reduction through summary statistics, which sacrifice complexity and interdependent structure intrinsic to these data in favor of parameter identifiability. This imposes a fundamental constraint on reliably extracting mechanistic insights from spatiotemporal data, highlighting the need for analytical frameworks that preserve the full richness of these dynamical systems. To address this, we developed a simulation-based inference framework that employs vision transformer-driven variational encoding to generate compact representations of the data, exploiting the inherent contextual dependencies. These representations are subsequently integrated into a likelihood-free Bayesian approach for parameter inference. The central idea is to construct a fine-grained, structured mesh of latent representations from simulated dynamics through systematic exploration of the parameter space. This encoded mesh of latent embeddings then serves as a reference map for retrieving parameter values that correspond to observed data. By integrating generative modeling with Bayesian principles, our approach provides a unified inference framework to identify both spatial and temporal patterns that manifest in multivariate dynamical systems.
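To make the retrieval idea concrete, here is a minimal Python sketch of the mesh-then-lookup pipeline. Everything in it is a stand-in: `simulate` is a toy diffusion-plus-growth loop rather than a real mechanistic model, `encode` is a fixed random projection in place of the vision transformer-driven variational encoder, and the nearest-neighbour lookup is only the simplest possible surrogate for the likelihood-free Bayesian step.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, size=32, steps=20):
    """Toy stand-in for a mechanistic spatiotemporal simulator:
    diffusion rate theta[0], logistic growth rate theta[1]."""
    u = rng.random((size, size))
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = u + theta[0] * lap + theta[1] * u * (1 - u)
    return u

def encode(field, proj):
    """Stand-in for the variational encoder: a fixed linear projection
    of the final spatial field to a low-dimensional latent vector."""
    return proj @ field.ravel()

# 1. Build the mesh: sweep the parameter space systematically and store
#    a (parameter, latent embedding) pair for every simulated dynamic.
latent_dim, size = 16, 32
proj = rng.standard_normal((latent_dim, size * size)) / np.sqrt(size * size)
thetas = np.array([[d, r] for d in np.linspace(0.05, 0.2, 20)
                          for r in np.linspace(0.01, 0.1, 20)])
mesh = np.stack([encode(simulate(t), proj) for t in thetas])

# 2. Inference: embed the observation, retrieve the closest mesh points,
#    and read off their parameters as an approximate posterior sample.
theta_true = np.array([0.12, 0.05])
z_obs = encode(simulate(theta_true), proj)
nearest = thetas[np.argsort(np.linalg.norm(mesh - z_obs, axis=1))[:25]]
print("posterior mean estimate:", nearest.mean(axis=0))
```

In the paper the retrieved neighbourhood feeds a proper likelihood-free Bayesian scheme rather than a plain average, but the mesh-as-reference-map structure is the same.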
Related papers
- Beyond Static Models: Hypernetworks for Adaptive and Generalizable Forecasting in Complex Parametric Dynamical Systems [0.0]
We introduce the Parametric Hypernetwork for Learning Interpolated Networks (PHLieNet).
PHLieNet simultaneously learns a global mapping from the parameter space to a nonlinear embedding and a mapping from the inferred embedding to the weights of a dynamics propagation network.
By interpolating in the space of models rather than observations, PHLieNet facilitates smooth transitions across parameterized system behaviors.
arXiv Detail & Related papers (2025-06-24T13:22:49Z) - eXponential FAmily Dynamical Systems (XFADS): Large-scale nonlinear Gaussian state-space modeling [9.52474299688276]
- eXponential FAmily Dynamical Systems (XFADS): Large-scale nonlinear Gaussian state-space modeling [9.52474299688276]
We introduce a low-rank structured variational autoencoder framework for nonlinear state-space graphical models.
We show that our approach consistently learns a more predictive generative model.
arXiv Detail & Related papers (2024-03-03T02:19:49Z) - Mapping the Multiverse of Latent Representations [17.2089620240192]
PRESTO is a principled framework for mapping the multiverse of machine-learning models that rely on latent representations.
Our framework uses persistent homology to characterize the latent spaces arising from different combinations of diverse machine-learning methods.
arXiv Detail & Related papers (2024-02-02T15:54:53Z) - Data-Driven Model Selections of Second-Order Particle Dynamics via
- Data-Driven Model Selections of Second-Order Particle Dynamics via Integrating Gaussian Processes with Low-Dimensional Interacting Structures [0.9821874476902972]
We focus on the data-driven discovery of a general second-order particle-based model.
We present applications to modeling two real-world fish motion datasets.
arXiv Detail & Related papers (2023-11-01T23:45:15Z) - InVAErt networks: a data-driven framework for model synthesis and
identifiability analysis [0.0]
inVAErt is a framework for data-driven analysis and synthesis of physical systems.
It uses a deterministic decoder to represent the forward and inverse maps, a normalizing flow to capture the probabilistic distribution of system outputs, and a variational encoder to learn a compact latent representation that compensates for the lack of bijectivity between inputs and outputs.
arXiv Detail & Related papers (2023-07-24T07:58:18Z) - Learning minimal representations of stochastic processes with
- Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing such processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z) - DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained
- DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
arXiv Detail & Related papers (2023-01-23T15:18:54Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Capturing Actionable Dynamics with Structured Latent Ordinary
Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z) - Towards Robust and Adaptive Motion Forecasting: A Causal Representation
- Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experiment results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z) - Supporting Optimal Phase Space Reconstructions Using Neural Network
Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the properties of the phase space.
Our approach is as competitive as, or better than, most state-of-the-art strategies.
arXiv Detail & Related papers (2020-06-19T21:04:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.