SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations
- URL: http://arxiv.org/abs/2502.02472v1
- Date: Tue, 04 Feb 2025 16:47:49 GMT
- Title: SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations
- Authors: Grigory Bartosh, Dmitry Vetrov, Christian A. Naesseth
- Abstract summary: We propose SDE Matching, a new simulation-free method for training Latent SDEs.
Our results demonstrate that SDE Matching achieves performance comparable to adjoint sensitivity methods.
- Score: 2.1779479916071067
- Abstract: The Latent Stochastic Differential Equation (SDE) is a powerful tool for time series and sequence modeling. However, training Latent SDEs typically relies on adjoint sensitivity methods, which depend on simulation and backpropagation through approximate SDE solutions, limiting scalability. In this work, we propose SDE Matching, a new simulation-free method for training Latent SDEs. Inspired by modern Score- and Flow Matching algorithms for learning generative dynamics, we extend these ideas to the domain of stochastic dynamics for time series and sequence modeling, eliminating the need for costly numerical simulations. Our results demonstrate that SDE Matching achieves performance comparable to adjoint sensitivity methods while drastically reducing computational complexity.
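To see why simulation-based training is costly, note that sampling from a latent SDE requires many sequential integration steps, each of which adjoint sensitivity methods must backpropagate through. A minimal sketch below simulates one path with the Euler-Maruyama scheme; the drift and diffusion are hypothetical toy stand-ins (an Ornstein-Uhlenbeck-like process), not the paper's model.

```python
import numpy as np

def euler_maruyama(drift, diffusion, z0, t_grid, rng):
    """Simulate one path of dZ = drift(Z, t) dt + diffusion(Z, t) dW."""
    z = np.array(z0, dtype=float)
    path = [z.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0
        dw = rng.normal(scale=np.sqrt(dt), size=z.shape)  # Brownian increment
        z = z + drift(z, t0) * dt + diffusion(z, t0) * dw
        path.append(z.copy())
    return np.stack(path)

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 1.0, 101)  # 100 sequential steps per sample
path = euler_maruyama(lambda z, t: -z,                       # toy mean-reverting drift
                      lambda z, t: 0.5 * np.ones_like(z),    # toy constant diffusion
                      z0=[1.0], t_grid=t_grid, rng=rng)
print(path.shape)  # (101, 1)
```

Every one of those 100 steps sits inside the computation graph under simulation-based training; a simulation-free objective avoids unrolling this loop entirely.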
Related papers
- Neural SDEs as a Unified Approach to Continuous-Domain Sequence Modeling [3.8980564330208662]
We propose a novel and intuitive approach to continuous sequence modeling.
Our method interprets time-series data as discrete samples from an underlying continuous dynamical system.
We derive a principled maximum likelihood objective and a simulation-free scheme for efficient training of our Neural SDE model.
arXiv Detail & Related papers (2025-01-31T03:47:22Z) - MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet).
In particular, we design a convolutional filter based on the structure of finite difference with a small number of parameters to optimize.
A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale is established that embeds the structure of PDEs to guide the prediction.
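For reference, the classical 4th-order Runge-Kutta step that such a Physics Block embeds has the following generic form; this is the standard textbook integrator, not MultiPDENet's specific implementation, and `f` is a hypothetical right-hand side.

```python
import numpy as np

def rk4_step(f, u, t, dt):
    """One classical 4th-order Runge-Kutta step for du/dt = f(u, t)."""
    k1 = f(u, t)
    k2 = f(u + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(u + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(u + dt * k3, t + dt)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on du/dt = u, whose exact solution at t=1 is e.
u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    u = rk4_step(lambda u, t: u, u, t, dt)
    t += dt
print(abs(u - np.e) < 1e-6)  # True: global error is O(dt^4)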
arXiv Detail & Related papers (2025-01-27T12:15:51Z) - Advancing Generalization in PINNs through Latent-Space Representations [71.86401914779019]
Physics-informed neural networks (PINNs) have made significant strides in modeling dynamical systems governed by partial differential equations (PDEs).
We propose PIDO, a novel physics-informed neural PDE solver designed to generalize effectively across diverse PDE configurations.
We validate PIDO on a range of benchmarks, including 1D combined equations and 2D Navier-Stokes equations.
arXiv Detail & Related papers (2024-11-28T13:16:20Z) - Trajectory Flow Matching with Applications to Clinical Time Series Modeling [77.58277281319253]
Trajectory Flow Matching (TFM) trains a Neural SDE in a simulation-free manner, bypassing backpropagation through the dynamics.
We demonstrate improved performance on three clinical time series datasets in terms of absolute performance and uncertainty prediction.
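The simulation-free idea behind flow-matching methods like TFM is to replace trajectory simulation with a per-time regression target. The sketch below shows one Monte Carlo term of a generic conditional flow matching loss with a linear interpolant; it is an illustrative stand-in, not TFM's exact objective.

```python
import numpy as np

def cfm_loss_sample(x0, x1, model, rng):
    """One Monte Carlo term of a conditional flow matching loss:
    regress model(x_t, t) onto the straight-line target velocity x1 - x0."""
    t = rng.uniform()
    x_t = (1.0 - t) * x0 + t * x1  # linear interpolant between endpoints
    target = x1 - x0               # time derivative of the interpolant
    pred = model(x_t, t)
    return np.mean((pred - target) ** 2)

# With the oracle velocity field the loss term is exactly zero.
rng = np.random.default_rng(0)
x0, x1 = np.zeros(3), np.ones(3)
loss = cfm_loss_sample(x0, x1, lambda x, t: x1 - x0, rng)
print(loss)  # 0.0
```

No SDE or ODE solver appears in the loss, which is what "bypassing backpropagation through the dynamics" buys.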
arXiv Detail & Related papers (2024-10-28T15:54:50Z) - On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z) - Gaussian Mixture Solvers for Diffusion Models [84.83349474361204]
We introduce a novel class of SDE-based solvers called GMS for diffusion models.
Our solver outperforms numerous SDE-based solvers in terms of sample quality in image generation and stroke-based synthesis.
arXiv Detail & Related papers (2023-11-02T02:05:38Z) - Elucidating the solution space of extended reverse-time SDE for diffusion models [54.23536653351234]
Diffusion models (DMs) demonstrate potent image generation capabilities in various generative modeling tasks.
Their primary limitation lies in slow sampling speed, requiring hundreds or thousands of sequential function evaluations to generate high-quality images.
We formulate the sampling process as an extended reverse-time SDE, unifying prior explorations into ODEs and SDEs.
We devise fast and training-free samplers, ER-SDE-rs, achieving state-of-the-art performance across all samplers.
arXiv Detail & Related papers (2023-09-12T12:27:17Z) - Continuous-time stochastic gradient descent for optimizing over the stationary distribution of stochastic differential equations [7.65995376636176]
We develop a new continuous-time gradient descent method for optimizing over the stationary distribution of stochastic differential equation (SDE) models.
We rigorously prove convergence of the online forward propagation algorithm for linear SDE models and present its numerical results for nonlinear examples.
arXiv Detail & Related papers (2022-02-14T11:45:22Z) - Robust and Scalable SDE Learning: A Functional Perspective [5.642000444047032]
We propose importance-sampling estimators of the probabilities of SDE observations for the purposes of learning.
The proposed method produces lower-variance estimates than algorithms based on simulating the SDE.
This facilitates the effective use of large-scale parallel hardware for massive decreases in computation time.
arXiv Detail & Related papers (2021-10-11T11:36:50Z) - Learning stochastic dynamical systems with neural networks mimicking the
Euler-Maruyama scheme [14.436723124352817]
We propose a data driven approach where parameters of the SDE are represented by a neural network with a built-in SDE integration scheme.
The algorithm is applied to the geometric Brownian motion and a version of the Lorenz-63 model.
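Geometric Brownian motion is a standard test case for SDE integration schemes because it has a closed-form solution to compare against. The sketch below applies plain Euler-Maruyama to it; the parameters are hypothetical, and this illustrates the integration scheme itself rather than the paper's neural parameterization.

```python
import numpy as np

def gbm_em(s0, mu, sigma, t_grid, rng):
    """Euler-Maruyama discretization of dS = mu*S dt + sigma*S dW.
    Returns the terminal value and the terminal Brownian value W(T)."""
    s, w = s0, 0.0
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0
        dw = rng.normal(scale=np.sqrt(dt))
        s = s + mu * s * dt + sigma * s * dw
        w += dw
    return s, w

rng = np.random.default_rng(1)
t_grid = np.linspace(0.0, 1.0, 1001)
s_em, w1 = gbm_em(s0=1.0, mu=0.05, sigma=0.2, t_grid=t_grid, rng=rng)
# Closed-form GBM solution driven by the same terminal Brownian value W(1):
s_exact = np.exp((0.05 - 0.5 * 0.2**2) * 1.0 + 0.2 * w1)
print(s_em, s_exact)  # the two agree up to the discretization error
```

Embedding this update rule inside a network, with `mu` and `sigma` replaced by learned functions, is the basic idea of an SDE-integration-scheme-aware architecture.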
arXiv Detail & Related papers (2021-05-18T11:41:34Z) - The Seven-League Scheme: Deep learning for large time step Monte Carlo simulations of stochastic differential equations [0.0]
We propose an accurate data-driven numerical scheme to solve Stochastic Differential Equations (SDEs).
The SDE discretization is built up by means of a chaos expansion method on the basis of accurately determined stochastic collocation (SC) points.
With a method called the compression-decompression collocation technique, we can drastically reduce the number of neural network functions that have to be learned.
arXiv Detail & Related papers (2020-09-07T16:06:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.