Latent Representation and Simulation of Markov Processes via Time-Lagged
Information Bottleneck
- URL: http://arxiv.org/abs/2309.07200v2
- Date: Fri, 26 Jan 2024 12:40:27 GMT
- Title: Latent Representation and Simulation of Markov Processes via Time-Lagged
Information Bottleneck
- Authors: Marco Federici, Patrick Forré, Ryota Tomioka, Bastiaan S. Veeling
- Abstract summary: We introduce an inference process that maps complex systems into a simplified representational space and models large jumps in time.
Our experiments demonstrate that T-IB learns information-optimal representations for accurately modeling the statistical properties and dynamics of the original process at a selected time lag.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Markov processes are widely used mathematical models for describing dynamic
systems in various fields. However, accurately simulating large-scale systems
at long time scales is computationally expensive due to the short time steps
required for accurate integration. In this paper, we introduce an inference
process that maps complex systems into a simplified representational space and
models large jumps in time. To achieve this, we propose Time-lagged Information
Bottleneck (T-IB), a principled objective rooted in information theory, which
aims to capture relevant temporal features while discarding high-frequency
information to simplify the simulation task and minimize the inference error.
Our experiments demonstrate that T-IB learns information-optimal
representations for accurately modeling the statistical properties and dynamics
of the original process at a selected time lag, outperforming existing
time-lagged dimensionality reduction methods.
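The T-IB objective trades predictive power at the chosen time lag against compression of the current state. A minimal, self-contained sketch of that trade-off — not the authors' estimator (T-IB trains neural encoders against variational mutual-information bounds); here the representation maps are fixed by hand and both mutual information terms are computed from empirical counts on a toy Markov chain with two metastable sets:

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information (in nats) of paired samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in joint.items())

def t_ib_score(trajectory, encode, lag, beta):
    """I(z_t; x_{t+lag}) - beta * I(z_t; x_t): prediction minus compression."""
    pred_pairs = [(encode(trajectory[t]), trajectory[t + lag])
                  for t in range(len(trajectory) - lag)]
    comp_pairs = [(encode(x), x) for x in trajectory]
    return mutual_information(pred_pairs) - beta * mutual_information(comp_pairs)

# Toy 4-state chain where {0,1} and {2,3} are metastable sets: transitions
# within a set are fast and uninformative, switches between sets are slow.
random.seed(0)
P = {0: [0.45, 0.45, 0.05, 0.05], 1: [0.45, 0.45, 0.05, 0.05],
     2: [0.05, 0.05, 0.45, 0.45], 3: [0.05, 0.05, 0.45, 0.45]}
traj, x = [], 0
for _ in range(20000):
    traj.append(x)
    x = random.choices(range(4), weights=P[x])[0]

coarse = lambda s: s // 2   # merges each metastable pair: keeps slow dynamics
fine = lambda s: s          # identity: keeps all (including fast) detail

print(t_ib_score(traj, coarse, lag=1, beta=0.5))
print(t_ib_score(traj, fine, lag=1, beta=0.5))
```

The coarse map scores higher than the identity map: it retains essentially all of the predictive information about the next state while discarding one bit of fast, within-set detail, which is exactly the behavior the objective rewards.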
Related papers
- On the Trajectory Regularity of ODE-based Diffusion Sampling
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Enhancing Computational Efficiency in Multiscale Systems Using Deep Learning of Coordinates and Flow Maps
This paper showcases how deep learning techniques can be used to develop a precise time-stepping approach for multiscale systems.
The resulting framework achieves state-of-the-art predictive accuracy while incurring lower computational cost.
arXiv Detail & Related papers (2024-04-28T14:05:13Z)
- Time-Aware Knowledge Representations of Dynamic Objects with Multidimensional Persistence
We propose a time-aware knowledge representation mechanism that focuses on implicit time-dependent topological information.
In particular, we introduce Temporal MultiPersistence (TMP), which produces multidimensional topological fingerprints of the data.
TMP improves the computational efficiency of state-of-the-art multipersistence summaries by up to 59.5 times.
arXiv Detail & Related papers (2024-01-24T00:33:53Z)
- Spatio-Temporal Branching for Motion Prediction using Motion Increments
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z)
- A Neural PDE Solver with Temporal Stencil Modeling
Recent Machine Learning (ML) models have shown new promise in capturing important dynamics in high-resolution signals.
This study shows that significant information is often lost in the low-resolution down-sampled features.
We propose a new approach, which combines the strengths of advanced time-series sequence modeling and state-of-the-art neural PDE solvers.
arXiv Detail & Related papers (2023-02-16T06:13:01Z)
- On Fast Simulation of Dynamical System with Neural Vector Enhanced Numerical Solver
We introduce a deep learning-based corrector called Neural Vector (NeurVec), which can compensate for integration errors and enable larger time step sizes in simulations.
Our experiments on a variety of complex dynamical system benchmarks demonstrate that NeurVec exhibits remarkable generalization capability.
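The corrector idea can be illustrated in a few lines. This is a hedged sketch, not the NeurVec architecture: the learned network is replaced by a closed-form correction for dx/dt = -x, chosen so that the corrected large-step Euler update matches the exact flow:

```python
import math

def euler_step(x, h, f):
    """Plain forward Euler step: accurate only for small h."""
    return x + h * f(x)

def corrected_step(x, h, f, corrector):
    """NeurVec-style update: coarse integrator plus a correction term."""
    return x + h * f(x) + corrector(x, h)

f = lambda x: -x  # dx/dt = -x, exact solution x0 * exp(-t)

# Stand-in for the trained network: for this linear ODE the residual of the
# Euler step is known in closed form, exp(-h) - (1 - h), scaled by the state.
corrector = lambda x, h: x * (math.exp(-h) - (1.0 - h))

x_plain, x_corr, h, steps = 1.0, 1.0, 0.5, 10
for _ in range(steps):
    x_plain = euler_step(x_plain, h, f)
    x_corr = corrected_step(x_corr, h, f, corrector)

exact = math.exp(-h * steps)
print(abs(x_plain - exact))  # large error at this coarse step size
print(abs(x_corr - exact))   # corrected update tracks the exact flow
```

At h = 0.5 the plain Euler trajectory drifts visibly from the exact solution, while the corrected stepper stays on it; NeurVec learns such a correction from data for systems where no closed form exists.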
arXiv Detail & Related papers (2022-08-07T09:02:18Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation
Interactive Neural Process (INP) is a deep Bayesian active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
- Markov Modeling of Time-Series Data using Symbolic Analysis
We will review the different techniques for discretization and memory estimation for discrete processes.
We will present some results from literature on partitioning from dynamical systems theory and order estimation using concepts of information theory and statistical learning.
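A minimal sketch of the symbolic-analysis pipeline this survey covers: discretize a real-valued series into symbols, then fit a Markov model over the symbols. Threshold partitioning is only one of the discretization schemes discussed, and the function names here are illustrative:

```python
from collections import Counter, defaultdict

def symbolize(series, thresholds):
    """Discretize a real-valued series: symbol = number of thresholds exceeded."""
    return [sum(x > t for t in thresholds) for x in series]

def transition_matrix(symbols, n_symbols):
    """Maximum-likelihood transition probabilities of a first-order Markov model."""
    counts = defaultdict(Counter)
    for a, b in zip(symbols, symbols[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(counts[a].values()) for b, c in counts[a].items()}
            for a in range(n_symbols) if counts[a]}

series = [0.1, 0.9, 0.8, 0.2, 0.7, 0.3, 0.9, 0.1]
syms = symbolize(series, thresholds=[0.5])  # 0 = below 0.5, 1 = above
print(syms)                                 # [0, 1, 1, 0, 1, 0, 1, 0]
print(transition_matrix(syms, 2))
```

Order estimation (how many past symbols the next one depends on) would then compare such models of increasing memory length with an information-theoretic criterion, as the survey describes.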
arXiv Detail & Related papers (2021-03-20T20:31:21Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Fast and differentiable simulation of driven quantum systems
We introduce a semi-analytic method based on the Dyson expansion that allows us to time-evolve driven quantum systems much faster than standard numerical methods.
We show results of the optimization of a two-qubit gate using transmon qubits in the circuit QED architecture.
arXiv Detail & Related papers (2020-12-16T21:43:38Z)
- Hierarchical Deep Learning of Multiscale Differential Equation Time-Steppers
We develop a hierarchy of deep neural network time-steppers to approximate the flow map of the dynamical system over a disparate range of time-scales.
The resulting model is purely data-driven and leverages features of the multiscale dynamics.
We benchmark our algorithm against state-of-the-art methods, such as LSTM, reservoir computing, and clockwork RNN.
arXiv Detail & Related papers (2020-08-22T07:16:53Z)
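The core idea of such a hierarchy — composing time-steppers trained at dyadic time-scales — can be sketched as follows. This is an assumption-laden toy: the exact flow maps of dx/dt = -x stand in for the trained neural networks:

```python
import math

def apply_flow(x, steps, flow_maps):
    """Advance x by `steps` base time increments using a flow-map hierarchy.

    flow_maps[j] is assumed to advance the state by 2**j base time-steps, so
    the binary decomposition of `steps` needs only O(log steps) applications.
    """
    j = 0
    while steps:
        if steps & 1:
            x = flow_maps[j](x)
        steps >>= 1
        j += 1
    return x

# Stand-ins for trained time-steppers at scales dt, 2*dt, 4*dt, 8*dt:
# the exact flow maps of dx/dt = -x.
dt = 0.1
flow_maps = [lambda x, s=2 ** j: x * math.exp(-s * dt) for j in range(4)]

print(apply_flow(1.0, 13, flow_maps))  # 13 = 8 + 4 + 1 base steps: 3 map calls
print(math.exp(-13 * dt))              # exact solution at t = 1.3
```

A single-scale stepper would need 13 sequential applications to cover the same horizon; the hierarchy reaches it in three, which is the efficiency argument behind multiscale time-steppers.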
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.