Parallel-in-Time Probabilistic Numerical ODE Solvers
- URL: http://arxiv.org/abs/2310.01145v2
- Date: Wed, 11 Sep 2024 13:09:07 GMT
- Title: Parallel-in-Time Probabilistic Numerical ODE Solvers
- Authors: Nathanael Bosch, Adrien Corenflos, Fatemeh Yaghoobi, Filip Tronarp, Philipp Hennig, Simo Särkkä
- Abstract summary: Probabilistic numerical solvers for ordinary differential equations (ODEs) treat the numerical simulation of dynamical systems as problems of Bayesian state estimation.
We build on the time-parallel formulation of iterated extended Kalman smoothers to formulate a parallel-in-time probabilistic numerical ODE solver.
- Score: 35.716255949521305
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probabilistic numerical solvers for ordinary differential equations (ODEs) treat the numerical simulation of dynamical systems as problems of Bayesian state estimation. Aside from producing posterior distributions over ODE solutions and thereby quantifying the numerical approximation error of the method itself, one less-often noted advantage of this formalism is the algorithmic flexibility gained by formulating numerical simulation in the framework of Bayesian filtering and smoothing. In this paper, we leverage this flexibility and build on the time-parallel formulation of iterated extended Kalman smoothers to formulate a parallel-in-time probabilistic numerical ODE solver. Instead of simulating the dynamical system sequentially in time, as done by current probabilistic solvers, the proposed method processes all time steps in parallel and thereby reduces the span cost from linear to logarithmic in the number of time steps. We demonstrate the effectiveness of our approach on a variety of ODEs and compare it to a range of both classic and probabilistic numerical ODE solvers.
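The span-cost reduction described in the abstract comes from rewriting the filtering and smoothing recursions as prefix sums over an associative operator, which a parallel scan evaluates in logarithmic rather than linear span. Below is a minimal sketch of that mechanism, not the paper's full iterated extended Kalman smoother: it composes the affine-Gaussian transitions of a once-integrated Wiener process prior (a standard prior in probabilistic ODE solvers) with `jax.lax.associative_scan`; the step size, diffusion, and number of steps are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def combine(elem_i, elem_j):
    """Compose two affine-Gaussian transitions.

    Applying (A_i, c_i, Q_i) and then (A_j, c_j, Q_j) to a Gaussian
    state yields another affine-Gaussian transition; this binary
    operation is associative, which enables a parallel prefix scan.
    """
    A_i, c_i, Q_i = elem_i
    A_j, c_j, Q_j = elem_j
    return (A_j @ A_i,
            A_j @ c_i + c_j,
            A_j @ Q_i @ A_j.T + Q_j)

# Once-integrated Wiener process prior; step size and diffusion are
# illustrative assumptions.
N, h, sigma2 = 128, 0.01, 1.0
A = jnp.array([[1.0, h], [0.0, 1.0]])
Q = sigma2 * jnp.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])
c = jnp.zeros(2)

elems = (jnp.tile(A, (N, 1, 1)), jnp.tile(c, (N, 1)), jnp.tile(Q, (N, 1, 1)))

# Parallel prefix: cumulative transitions from t_0 to every t_k,
# computed in O(log N) span instead of a sequential O(N) loop.
A_cum, c_cum, Q_cum = jax.lax.associative_scan(jax.vmap(combine), elems)

# Predictive marginals at all N grid points from the initial Gaussian.
m0, P0 = jnp.zeros(2), jnp.eye(2)
means = jnp.einsum("nij,j->ni", A_cum, m0) + c_cum
covs = jnp.einsum("nij,jk,nlk->nil", A_cum, P0, A_cum) + Q_cum
```

A sequential loop would need N composition steps one after another; the associative scan needs only O(log N) parallel rounds, which is the source of the logarithmic span cost.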
Related papers
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
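To make the ODE-based sampling process referenced above concrete, here is a hedged sketch of Euler integration of the probability-flow ODE of a variance-preserving diffusion, dx/dt = -½β(t)(x + s(x, t)). The linear β schedule, the step count, and the stand-in Gaussian score are assumptions; in practice a trained score network would take the place of `score`.

```python
import jax.numpy as jnp

def beta(t):
    # Linear noise schedule (assumed for illustration).
    return 0.1 + 19.9 * t

def score(x, t):
    # Stand-in score function: exact for a standard Gaussian; a trained
    # score network would be plugged in here.
    return -x

def probability_flow_sample(x, num_steps=500):
    # Integrate dx/dt = -0.5 * beta(t) * (x + score(x, t)) backwards
    # from t = 1 to t = 0 with explicit Euler steps.
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = 1.0 - i * dt
        x = x + dt * 0.5 * beta(t) * (x + score(x, t))
    return x
```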
- Discovering ordinary differential equations that govern time-series [65.07437364102931]
We propose a transformer-based sequence-to-sequence model that recovers scalar autonomous ordinary differential equations (ODEs) in symbolic form from time-series data of a single observed solution of the ODE.
Our method is efficiently scalable: after one-time pretraining on a large set of ODEs, we can infer the governing laws of a new observed solution in a few forward passes of the model.
arXiv Detail & Related papers (2022-11-05T07:07:58Z)
- An application of the splitting-up method for the computation of a neural network representation for the solution of the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximating the solution of the filtering equations is to use a PDE-inspired method called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
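A minimal sketch of the classical splitting-up method that the paper above combines with a neural network representation, assuming a 1D Brownian-motion signal observed in Gaussian noise: each filtering step alternates a Fokker-Planck prediction (here an explicit finite-difference heat step with periodic boundaries) with a pointwise Bayes correction. The grid, noise level, and observation stream are made up for illustration.

```python
import jax.numpy as jnp

# Grid representation of the (unnormalised) conditional density;
# all parameters are illustrative assumptions.
xs = jnp.linspace(-5.0, 5.0, 101)
dx = xs[1] - xs[0]
p = jnp.exp(-0.5 * xs**2)            # prior density, up to normalisation
R, dt, substeps = 0.1, 0.05, 10      # obs. noise, step size, FD substeps

def prediction(p):
    # Splitting step 1: evolve the density under the Fokker-Planck
    # equation of dX = dW, i.e. the heat equation p_t = 0.5 * p_xx,
    # with periodic boundaries for simplicity.
    nu = 0.5 * (dt / substeps) / dx**2   # stability requires nu <= 0.5
    for _ in range(substeps):
        p = p + nu * (jnp.roll(p, -1) - 2 * p + jnp.roll(p, 1))
    return p

def correction(p, y):
    # Splitting step 2: multiply by the observation likelihood and
    # renormalise (Bayes' rule on the grid).
    p = p * jnp.exp(-0.5 * (y - xs) ** 2 / R)
    return p / (jnp.sum(p) * dx)

for y in [0.3, 0.1, -0.2]:           # a made-up observation stream
    p = correction(prediction(p), y)
```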
- Probabilistic Numerical Method of Lines for Time-Dependent Partial Differential Equations [20.86460521113266]
Current state-of-the-art PDE solvers treat the space- and time-dimensions separately, serially, and with black-box algorithms.
We introduce a probabilistic version of a technique called method of lines to fix this issue.
Joint quantification of space- and time-uncertainty becomes possible without losing the performance benefits of well-tuned ODE solvers.
arXiv Detail & Related papers (2021-10-22T15:26:05Z)
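For reference, the classical method of lines that the probabilistic version above builds on reduces a PDE to an ODE system by discretising space only. A minimal sketch for the heat equation follows; the grid size and the use of `jax.experimental.ode.odeint` as the time integrator are illustrative choices.

```python
import jax.numpy as jnp
from jax.experimental.ode import odeint

# Discretise u_t = u_xx on [0, 1] with zero Dirichlet boundary values.
n = 50
x = jnp.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

def rhs(u, t):
    # Central second difference in space turns the PDE into the ODE
    # system du/dt = L u; the padded entries hold the boundary at zero.
    u_pad = jnp.pad(u, 1)
    return (u_pad[2:] - 2 * u_pad[1:-1] + u_pad[:-2]) / dx**2

u0 = jnp.sin(jnp.pi * x)             # initial condition
ts = jnp.linspace(0.0, 0.1, 11)
us = odeint(rhs, u0, ts)             # hand the ODE system to a tuned solver
```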
- The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of 'invariance under coarse-graining'.
This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time series analysis of second- or higher-order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z)
- The Seven-League Scheme: Deep learning for large time step Monte Carlo simulations of stochastic differential equations [0.0]
We propose an accurate data-driven numerical scheme to solve stochastic differential equations (SDEs).
The SDE discretization is built up by means of a chaos expansion method on the basis of accurately determined stochastic collocation (SC) points.
With a method called the compression-decompression and collocation technique, we can drastically reduce the number of neural network functions that have to be learned.
arXiv Detail & Related papers (2020-09-07T16:06:20Z)
- Hierarchical Deep Learning of Multiscale Differential Equation Time-Steppers [5.6385744392820465]
We develop a hierarchy of deep neural network time-steppers to approximate the flow map of the dynamical system over a disparate range of time-scales.
The resulting model is purely data-driven and leverages features of the multiscale dynamics.
We benchmark our algorithm against state-of-the-art methods, such as LSTM, reservoir computing, and clockwork RNN.
arXiv Detail & Related papers (2020-08-22T07:16:53Z)
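One natural way to compose such steppers is sketched below, under the assumption of flow maps trained at dyadic multiples of a base step; the binary-decomposition scheme is an illustration of the idea, not necessarily the paper's exact composition rule. Writing the horizon in binary selects which scales to apply, so long horizons need only logarithmically many network evaluations.

```python
import jax.numpy as jnp

def advance(steppers, x, n):
    """Advance the state by n base steps using dyadic-scale steppers.

    steppers[k] is a learned flow map over 2**k base steps; the binary
    expansion of n selects which scales to apply, so n base steps cost
    only O(log n) stepper evaluations instead of n fine steps.
    """
    k = 0
    while n > 0:
        if n & 1:
            x = steppers[k](x)
        n >>= 1
        k += 1
    return x

# Placeholder "flow maps" for the linear system x' = -x, standing in
# for the trained networks: steppers[k] advances by 2**k * dt.
dt = 0.01
steppers = [lambda x, k=k: x * jnp.exp(-(2**k) * dt) for k in range(10)]
x_far = advance(steppers, jnp.array(1.0), 700)   # 700 steps, 6 evaluations
```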
- Fast Variational Learning in State-Space Gaussian Process Models [29.630197272150003]
We build upon an existing method called conjugate-computation variational inference.
We provide an efficient JAX implementation which exploits just-in-time compilation.
Our approach leads to fast and stable variational inference in state-space GP models that can be scaled to time series with millions of data points.
arXiv Detail & Related papers (2020-07-09T12:06:34Z)
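The scaling claim above rests on the state-space view of GPs: with a Markovian prior, inference along the time axis reduces to Kalman filtering, which can be written as a `jax.lax.scan` and just-in-time compiled. The sketch below shows that reduction for a Matérn-3/2 prior with a Gaussian likelihood; the paper's conjugate-computation variational updates for non-conjugate likelihoods are omitted, and all hyperparameters are assumptions.

```python
import jax
import jax.numpy as jnp
from jax.scipy.linalg import expm

# Matérn-3/2 state-space model with state (f, f'); lengthscale ell,
# signal variance s2, and noise variance R are assumed values.
ell, s2, R = 0.5, 1.0, 0.1
lam = jnp.sqrt(3.0) / ell
F = jnp.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])
Pinf = jnp.diag(jnp.array([s2, s2 * lam**2]))   # stationary covariance
H = jnp.array([[1.0, 0.0]])

@jax.jit
def kalman_filter(ts, ys):
    def step(carry, inputs):
        m, P, t_prev = carry
        t, y = inputs
        # Exact discretisation of the LTI SDE over the step t - t_prev.
        A = expm(F * (t - t_prev))
        Q = Pinf - A @ Pinf @ A.T
        m, P = A @ m, A @ P @ A.T + Q
        # Standard Kalman measurement update.
        S = H @ P @ H.T + R
        K = P @ H.T / S
        m = m + K[:, 0] * (y - (H @ m)[0])
        P = P - K @ H @ P
        return (m, P, t), (m, P)

    init = (jnp.zeros(2), Pinf, ts[0])
    _, (ms, Ps) = jax.lax.scan(step, init, (ts, ys))
    return ms, Ps
```

Because the whole filter is a single compiled scan, the per-step Python overhead vanishes, which is what makes million-point time series tractable.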
- STEER: Simple Temporal Regularization For Neural ODEs [80.80350769936383]
We propose a new regularization technique: randomly sampling the end time of the ODE during training.
The proposed regularization is simple to implement, has negligible overhead and is effective across a wide variety of tasks.
We show through experiments on normalizing flows, time series models and image recognition that the proposed regularization can significantly decrease training time and even improve performance over baseline models.
arXiv Detail & Related papers (2020-06-18T17:44:50Z)
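The regularizer itself is a few lines. A minimal sketch assuming a neural-ODE regression loss follows; the tiny MLP vector field, the nominal end time T, and the interval half-width b are illustrative assumptions.

```python
import jax
import jax.numpy as jnp
from jax.experimental.ode import odeint

def dynamics(y, t, params):
    # A tiny one-hidden-layer MLP vector field (assumed for illustration).
    W1, b1, W2, b2 = params
    return W2 @ jnp.tanh(W1 @ y + b1) + b2

def steer_loss(params, key, y0, target, T=1.0, b=0.5):
    # STEER: sample the end time of the ODE uniformly around T at each
    # training step instead of always integrating exactly to T.
    t1 = jax.random.uniform(key, (), minval=T - b, maxval=T + b)
    yT = odeint(dynamics, y0, jnp.array([0.0, t1]), params)[-1]
    return jnp.sum((yT - target) ** 2)

# Usage: the gradient plugs into any gradient-based trainer.
params = (0.1 * jnp.ones((8, 2)), jnp.zeros(8),
          0.1 * jnp.ones((2, 8)), jnp.zeros(2))
grads = jax.grad(steer_loss)(params, jax.random.PRNGKey(0),
                             jnp.ones(2), jnp.zeros(2))
```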
- Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
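A minimal sketch of the Jacobi variant of this idea: stacking the per-layer equations h_i = f_i(h_{i-1}) and updating all of them from the previous iterate makes each sweep embarrassingly parallel, and after at most L sweeps the iterate coincides exactly with the sequential forward pass. The elementwise layer functions below are placeholders for real layers.

```python
import jax.numpy as jnp

def jacobi_feedforward(fs, x0, num_sweeps=None):
    """Solve h_i = fs[i](h_{i-1}) by Jacobi fixed-point iteration.

    Each sweep updates every layer from the previous iterate, so all
    len(fs) evaluations are independent and can run in parallel; after
    len(fs) sweeps the result equals sequential evaluation exactly.
    """
    L = len(fs)
    num_sweeps = L if num_sweeps is None else num_sweeps
    hs = [x0] * L                            # guesses for all layer outputs
    for _ in range(num_sweeps):
        prev = [x0] + hs[:-1]
        hs = [f(h) for f, h in zip(fs, prev)]  # parallelizable sweep
    return hs[-1]

# Example with three placeholder "layers".
fs = [jnp.tanh, lambda h: 2.0 * h, lambda h: h + 1.0]
out = jacobi_feedforward(fs, jnp.ones(4))
```

Early sweeps can already be accurate, which is where the speed-up over strictly sequential evaluation comes from when enough parallel hardware is available.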
- Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem [27.09339991866556]
Model-free methods search for the optimal control of an unknown dynamical system by directly searching over the corresponding space of controllers.
We take a step towards demystifying the performance and efficiency of such methods by focusing on the gradient-flow dynamics over the set of stabilizing feedback gains; a similar result holds for the forward discretization of this ODE.
arXiv Detail & Related papers (2019-12-26T16:56:59Z)
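To make that setting concrete, here is a sketch of direct policy search over static feedback gains on a toy double-integrator; the dynamics, horizon, and step size are assumptions, and exact autodiff gradients stand in for the model-free gradient estimates analysed in the paper.

```python
import jax
import jax.numpy as jnp

# Toy double-integrator dynamics (illustrative assumption).
A = jnp.array([[1.0, 0.1], [0.0, 1.0]])
B = jnp.array([[0.0], [0.1]])
Q, R = jnp.eye(2), jnp.eye(1)

def lqr_cost(K, x0, horizon=200):
    # Finite-horizon quadratic cost under the static feedback law u = -K x.
    def step(x, _):
        u = -K @ x
        return A @ x + B @ u, x @ Q @ x + u @ R @ u
    _, costs = jax.lax.scan(step, x0, None, length=horizon)
    return jnp.sum(costs)

# Gradient descent directly over the space of feedback gains; this is the
# forward Euler discretization of the gradient flow the paper analyses,
# with exact gradients in place of model-free estimates.
K, x0 = jnp.zeros((1, 2)), jnp.array([1.0, 0.0])
grad_fn = jax.jit(jax.grad(lqr_cost))
for _ in range(500):
    K = K - 1e-3 * grad_fn(K, x0)
```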