Stochastic dynamics learning with state-space systems
- URL: http://arxiv.org/abs/2508.07876v1
- Date: Mon, 11 Aug 2025 11:49:01 GMT
- Title: Stochastic dynamics learning with state-space systems
- Authors: Juan-Pablo Ortega, Florian Rossmannek
- Abstract summary: This work advances the theoretical foundations of reservoir computing (RC) by providing a unified treatment of fading memory and the echo state property (ESP). We investigate state-space systems, a central model class in time series learning, and establish that fading memory and solution stability hold generically -- even in the absence of the ESP.
- Score: 5.248564173595025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work advances the theoretical foundations of reservoir computing (RC) by providing a unified treatment of fading memory and the echo state property (ESP) in both deterministic and stochastic settings. We investigate state-space systems, a central model class in time series learning, and establish that fading memory and solution stability hold generically -- even in the absence of the ESP -- offering a robust explanation for the empirical success of RC models without strict contractivity conditions. In the stochastic case, we critically assess stochastic echo states, proposing a novel distributional perspective rooted in attractor dynamics on the space of probability distributions, which leads to a rich and coherent theory. Our results extend and generalize previous work on non-autonomous dynamical systems, offering new insights into causality, stability, and memory in RC models. This lays the groundwork for reliable generative modeling of temporal data in both deterministic and stochastic regimes.
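The contractive sufficient condition for the ESP discussed in the abstract can be checked numerically: drive a state-space system from two different initial states with the same input sequence and watch the states converge. A minimal sketch (the reservoir size, operator-norm scaling of 0.9, and input distribution are illustrative choices, not from the paper, which in fact shows stability holds generically even without such contractivity):

```python
# Numeric sketch of the echo state property for x_{t+1} = tanh(W x_t + w_in * u_t).
# Scaling W to operator norm 0.9 makes the update a strict contraction, a
# classical sufficient condition for the ESP.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # reservoir dimension (illustrative)
W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.norm(W, 2)          # operator norm 0.9 < 1: contraction
w_in = rng.standard_normal(n)

def run(x0, inputs):
    """Drive the state-space system from x0 with a fixed input sequence."""
    x = x0
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)
    return x

inputs = rng.uniform(-1, 1, size=500)
xa = run(rng.standard_normal(n), inputs)     # two different initial states,
xb = run(rng.standard_normal(n), inputs)     # same input sequence
gap = np.linalg.norm(xa - xb)
print(gap)   # tiny: the system has forgotten its initial condition
```

Since tanh is 1-Lipschitz, each step shrinks the gap between trajectories by at least the factor 0.9, so the initial condition is washed out geometrically.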
Related papers
- Dynamic Decision-Making under Model Misspecification: A Stochastic Stability Approach [17.087471640760885]
We study the behavior of one of the most commonly used Bayesian reinforcement learning algorithms, Thompson Sampling, when the model class is misspecified. We first provide a complete dynamic classification of posterior evolution in a misspecified two-armed Gaussian bandit. We then extend the analysis to a general finite model class and develop a unified Markov framework.
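For reference, the (well-specified) baseline this paper perturbs is standard Thompson Sampling on a two-armed Gaussian bandit: sample a mean for each arm from its posterior, pull the argmax. A minimal sketch with illustrative priors and arm means (the paper's misspecified setting is not reproduced here):

```python
# Thompson Sampling on a two-armed Gaussian bandit with unit observation
# noise and standard-normal priors; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.0, 1.0])    # arm 1 is better (assumed setup)
s = np.zeros(2)                      # running sum of rewards per arm
n = np.zeros(2)                      # pull counts per arm

for t in range(2000):
    # posterior per arm is N(s/(n+1), 1/(n+1)); sample and pick the argmax
    theta = s / (n + 1) + rng.standard_normal(2) / np.sqrt(n + 1)
    a = int(np.argmax(theta))
    reward = true_means[a] + rng.standard_normal()
    s[a] += reward
    n[a] += 1

frac_best = n[1] / n.sum()
print(frac_best)   # fraction of pulls spent on the better arm
```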
arXiv Detail & Related papers (2026-02-19T05:14:09Z) - GTS: Inference-Time Scaling of Latent Reasoning with a Learnable Gaussian Thought Sampler [54.10960908347221]
We model latent thought exploration as conditional sampling from learnable densities and instantiate this idea as a Gaussian Thought Sampler (GTS). GTS predicts context-dependent perturbation distributions over continuous reasoning states and is trained with GRPO-style policy optimization while keeping the backbone frozen.
arXiv Detail & Related papers (2026-02-15T09:57:47Z) - On Forgetting and Stability of Score-based Generative models [6.259598237089842]
Understanding the stability and long-time behavior of generative models is a fundamental problem in modern machine learning. This paper provides quantitative bounds on the sampling error of score-based generative models by leveraging stability and forgetting properties of the Markov chain associated with the reverse-time dynamics.
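The "forgetting" property invoked here can be seen in miniature: a Markov chain started from two different distributions contracts in total variation at a geometric rate. A toy two-state sketch (the transition matrix is purely illustrative):

```python
# Two-state Markov chain: distributions started from opposite point masses
# converge in total variation, i.e. the chain forgets its initial condition.
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])          # row-stochastic transition matrix (assumed)
p = np.array([1.0, 0.0])            # chain started in state 0
q = np.array([0.0, 1.0])            # chain started in state 1

tv = []
for _ in range(20):
    tv.append(0.5 * np.abs(p - q).sum())   # total-variation distance
    p, q = p @ P, q @ P

print(tv[0], tv[-1])   # distance decays geometrically toward 0
```

Here the difference vector is an eigenvector of `P` with eigenvalue 0.3, so the distance shrinks by exactly that factor each step.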
arXiv Detail & Related papers (2026-01-29T15:37:50Z) - Stochastic Deep Learning: A Probabilistic Framework for Modeling Uncertainty in Structured Temporal Data [0.0]
I propose a novel framework that integrates stochastic differential equations (SDEs) with deep generative models to improve uncertainty quantification in machine learning applications involving structured and temporal data. This approach, termed Stochastic Latent Differential Inference (SLDI), embeds an Itô SDE in the latent space of a variational autoencoder. The drift and diffusion terms of the SDE are parameterized by neural networks, enabling data-driven inference and generalizing classical time series models to handle irregular sampling and complex dynamic structure.
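Latent SDEs of this kind are typically simulated with the Euler-Maruyama scheme. A minimal sketch for an Ornstein-Uhlenbeck SDE dX = -theta*X dt + sigma dW (in SLDI the drift and diffusion would be neural networks; the constants here are illustrative):

```python
# Euler-Maruyama simulation of dX = -theta*X dt + sigma dW.
import numpy as np

rng = np.random.default_rng(2)
theta, sigma = 1.0, 0.5          # illustrative drift and diffusion scales
dt, steps = 0.01, 1000

x = np.empty(steps + 1)
x[0] = 2.0
for k in range(steps):
    drift = -theta * x[k]        # deterministic part of the increment
    x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

print(x[-1])   # state after relaxing toward N(0, sigma^2 / (2 * theta))
```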
arXiv Detail & Related papers (2026-01-08T18:53:59Z) - A joint optimization approach to identifying sparse dynamics using least squares kernel collocation [70.13783231186183]
We develop an all-at-once modeling framework for learning systems of ordinary differential equations (ODE) from scarce, partial, and noisy observations of the states. The proposed methodology combines sparse recovery strategies for the ODE over a function library with techniques from reproducing kernel Hilbert space (RKHS) theory for estimating the state and discretizing the ODE.
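The sparse-recovery half of that recipe can be sketched in a few lines: regress observed derivatives on a function library and hard-threshold small coefficients. This omits the RKHS collocation machinery of the paper; the library, noise level, and threshold are illustrative:

```python
# Sparse recovery of dx/dt = -2x from noisy derivative samples over a
# polynomial library, via least squares plus hard thresholding.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 200)
dxdt = -2.0 * x + 0.01 * rng.standard_normal(x.size)   # noisy samples of dx/dt

library = np.column_stack([np.ones_like(x), x, x**2, x**3])   # [1, x, x^2, x^3]
coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coef[np.abs(coef) < 0.1] = 0.0        # hard-threshold small terms (sparsity)

print(coef)   # only the linear term survives, close to -2
```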
arXiv Detail & Related papers (2025-11-23T18:04:15Z) - A PDE Perspective on Generative Diffusion Models [8.328108675535562]
We develop a rigorous partial differential equation (PDE) framework for score-based diffusion processes. We derive sharp $L^p$-stability estimates for the associated score-based Fokker-Planck dynamics. Our results yield a theoretical guarantee that, under exact guidance, diffusion trajectories return to the data manifold.
arXiv Detail & Related papers (2025-11-08T09:19:25Z) - Data-assimilated model-informed reinforcement learning [3.4748713192043876]
In practice, sensors often provide only partial and noisy measurements (observations) of the system. We propose a framework, data-assimilated model-informed reinforcement learning (DA-MIRL), that enables the control of chaotic systems with partial and noisy observability. We show that DA-MIRL successfully estimates and suppresses the chaotic dynamics of the environment in real time from partial observations and approximate models.
arXiv Detail & Related papers (2025-06-02T15:02:26Z) - Overcoming Dimensional Factorization Limits in Discrete Diffusion Models through Quantum Joint Distribution Learning [79.65014491424151]
We propose a quantum Discrete Denoising Diffusion Probabilistic Model (QD3PM). It enables joint probability learning through diffusion and denoising in exponentially large Hilbert spaces. This paper establishes a new theoretical paradigm in generative models by leveraging the quantum advantage in joint distribution learning.
arXiv Detail & Related papers (2025-05-08T11:48:21Z) - Dynamics and Computational Principles of Echo State Networks: A Mathematical Perspective [13.135043580306224]
Reservoir computing (RC) represents a class of state-space models (SSMs) characterized by a fixed state transition mechanism (the reservoir) and a flexible readout layer that maps from the state space. This work presents a systematic exploration of RC, addressing its foundational properties such as the echo state property, fading memory, and reservoir capacity through the lens of dynamical systems theory. We formalize the interplay between input signals and reservoir states, demonstrating the conditions under which reservoirs exhibit stability and expressive power.
arXiv Detail & Related papers (2025-04-16T04:28:05Z) - Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces novel deep dynamical models designed to represent continuous-time sequences. We train the model using maximum likelihood estimation with Markov chain Monte Carlo. Experimental results on oscillating systems, videos and real-world state sequences (MuJoCo) demonstrate that our model with the learnable energy-based prior outperforms existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z) - Accurate deep learning-based filtering for chaotic dynamics by identifying instabilities without an ensemble [0.5936407204316615]
We investigate the ability to discover data assimilation schemes meant for chaotic dynamics with deep learning.
The focus is on learning the analysis step of DA, from state trajectories and their observations, using a simple residual convolutional neural network.
arXiv Detail & Related papers (2024-08-08T19:44:57Z) - Stability-Certified Learning of Control Systems with Quadratic Nonlinearities [9.599029891108229]
This work primarily focuses on an operator inference methodology aimed at constructing low-dimensional dynamical models.
Our main objective is to develop a method that facilitates the inference of quadratic control dynamical systems with inherent stability guarantees.
arXiv Detail & Related papers (2024-03-01T16:26:47Z) - On Uncertainty in Deep State Space Models for Model-Based Reinforcement Learning [21.63642325390798]
We show that RSSMs use a suboptimal inference scheme and that models trained using this inference overestimate the aleatoric uncertainty of the ground truth system.
We propose an alternative approach building on well-understood components for modeling aleatoric and epistemic uncertainty, dubbed the Variational Recurrent Kalman Network (VRKN).
Our experiments show that using the VRKN instead of the RSSM improves performance in tasks where appropriately capturing aleatoric uncertainty is crucial.
arXiv Detail & Related papers (2022-10-17T16:59:48Z) - Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z) - Stochastically forced ensemble dynamic mode decomposition for forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
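The core identification step behind such forced-linear (DMD-style) models is a least-squares fit of x_{k+1} = A x_k + b u_k from snapshot data. A minimal sketch (the ensembling and load-forecasting details of the paper are omitted; all matrices and the forcing signal are illustrative):

```python
# Recover a forced linear system x_{k+1} = A x_k + b u_k from snapshots
# by stacking [x_k; u_k] and solving one least-squares problem.
import numpy as np

A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
b_true = np.array([0.5, 1.0])

# simulate noise-free snapshots under a known forcing signal
T = 200
u = np.sin(0.1 * np.arange(T))
X = np.zeros((2, T + 1))
X[:, 0] = [1.0, -1.0]
for k in range(T):
    X[:, k + 1] = A_true @ X[:, k] + b_true * u[k]

# solve x_{k+1} = [A b] [x_k; u_k] in the least-squares sense
Z = np.vstack([X[:, :-1], u[None, :]])
AB = X[:, 1:] @ np.linalg.pinv(Z)
A_hat, b_hat = AB[:, :2], AB[:, 2]

print(np.max(np.abs(A_hat - A_true)))   # near machine precision on clean data
```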
arXiv Detail & Related papers (2020-10-08T20:25:52Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the role of stochasticity in its success is still unclear.
We show that heavy-tailed behavior commonly arises in the parameters of stochastic optimizers due to multiplicative noise.
A detailed analysis describes the key factors at play, including step size and data, with consistent results across state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z) - Semiparametric Bayesian Forecasting of Spatial Earthquake Occurrences [77.68028443709338]
We propose a fully Bayesian formulation of the Epidemic Type Aftershock Sequence (ETAS) model.
The occurrence of the mainshock earthquakes in a geographical region is assumed to follow an inhomogeneous spatial point process.
arXiv Detail & Related papers (2020-02-05T10:11:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.