Time-vectorized numerical integration for systems of ODEs
- URL: http://arxiv.org/abs/2310.08649v1
- Date: Thu, 12 Oct 2023 18:21:02 GMT
- Title: Time-vectorized numerical integration for systems of ODEs
- Authors: Mark C. Messner and Tianchen Hu and Tianju Chen
- Abstract summary: Stiff systems of ordinary differential equations (ODEs) and sparse training data are common in scientific problems.
This paper describes efficient, implicit, vectorized methods for integrating stiff systems of ordinary differential equations through time.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stiff systems of ordinary differential equations (ODEs) and sparse training
data are common in scientific problems. This paper describes efficient,
implicit, vectorized methods for integrating stiff systems of ordinary
differential equations through time and calculating parameter gradients with
the adjoint method. The main innovation is to vectorize the problem both over
the number of independent time series and over a batch or "chunk" of
sequential time steps, effectively vectorizing the assembly of the implicit
system of ODEs. The block-bidiagonal structure of the linearized implicit
system for the backward Euler method allows for further vectorization using
parallel cyclic reduction (PCR). Vectorizing over both axes of the input data
provides a higher bandwidth of calculations to the computing device, allowing
even problems with comparatively sparse data to fully utilize modern GPUs and
achieving speedups of greater than 100x compared to standard, sequential time
integration. We demonstrate the advantages of implicit, vectorized time
integration with several example problems, drawn from both analytical stiff and
non-stiff ODE models as well as neural ODE models. We also describe and provide
a freely available open-source implementation of the methods developed here.
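As a rough, self-contained illustration of the core idea described above (not the authors' released implementation; all names and the NumPy setting are assumptions for exposition), the sketch below performs one Newton update of backward Euler over a chunk of sequential time steps: the normalized linearized system is block-bidiagonal, and parallel cyclic reduction replaces the sequential back-substitution with a logarithmic number of batched operations.

```python
import numpy as np

def pcr_block_bidiagonal(M, c):
    """Solve x_k + M_k x_{k-1} = c_k for k = 0..n-1 (with x_{-1} = 0)
    by parallel cyclic reduction.  M: (n, m, m), c: (n, m)."""
    n = M.shape[0]
    M, c = M.copy(), c.copy()
    s = 1
    while s < n:
        # Substitute row k-s into row k; all rows update in one batched step.
        c_new, M_new = c.copy(), np.zeros_like(M)
        c_new[s:] = c[s:] - np.einsum("kij,kj->ki", M[s:], c[:-s])
        M_new[s:] = -np.einsum("kij,kjl->kil", M[s:], M[:-s])
        M, c = M_new, c_new
        s *= 2
    return c  # remaining couplings reach before the chunk, where x = 0

def newton_chunk_update(y, y_prev, f, jac, dt):
    """One Newton update of backward Euler over a chunk of n steps.
    y: (n, m) current iterate; y_prev: (m,) state before the chunk;
    f, jac: batched RHS and Jacobian of dy/dt = f(y); dt: (n,) step sizes."""
    y_lag = np.vstack([y_prev[None, :], y[:-1]])            # y_{k-1}
    r = -(y - y_lag - dt[:, None] * f(y))                   # minus the residual
    D = np.eye(y.shape[1]) - dt[:, None, None] * jac(y)     # I - dt_k J_k
    Dinv = np.linalg.inv(D)                                  # illustration only
    # Normalized form: dy_k - D_k^{-1} dy_{k-1} = D_k^{-1} r_k
    M = -Dinv
    c = np.einsum("kij,kj->ki", Dinv, r)
    return y + pcr_block_bidiagonal(M, c)
```

In the paper the same assembly is additionally vectorized over many independent time series (a second batch axis), and the explicit inverse above would be replaced by factorizations; this sketch shows only the within-chunk axis.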
Related papers
- MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet).
In particular, we design a convolutional filter based on the structure of finite differences, with a small number of parameters to optimize.
A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale embeds the structure of the PDEs to guide the prediction.
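For reference, a plain classical fourth-order Runge-Kutta step of the kind such a physics block builds on looks like the following; the function name and calling convention are generic, not MultiPDENet's actual interface.

```python
def rk4_step(f, u, t, dt):
    """One classical 4th-order Runge-Kutta step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + 0.5 * dt, u + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, u + 0.5 * dt * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example: a few fine-scale steps of du/dt = -u starting from u = 1
u, t = 1.0, 0.0
for _ in range(10):
    u = rk4_step(lambda t, u: -u, u, t, 0.01)
    t += 0.01
```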
arXiv Detail & Related papers (2025-01-27T12:15:51Z)
- A Constant Velocity Latent Dynamics Approach for Accelerating Simulation of Stiff Nonlinear Systems [0.0]
Solving stiff ordinary differential equations (StODEs) requires sophisticated numerical solvers, which are often computationally expensive.
In this work, we take a different path, learning a latent dynamics for StODEs that avoids numerical integration entirely.
In other words, the solution of the original dynamics is encoded into a sequence of straight lines which can be decoded back to retrieve the actual solution as and when required.
arXiv Detail & Related papers (2025-01-14T20:32:31Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Parallel-in-Time Probabilistic Numerical ODE Solvers [35.716255949521305]
Probabilistic numerical solvers for ordinary differential equations (ODEs) treat the numerical simulation of dynamical systems as problems of Bayesian state estimation.
We build on the time-parallel formulation of iterated extended Kalman smoothers to formulate a parallel-in-time probabilistic numerical ODE solver.
arXiv Detail & Related papers (2023-10-02T12:32:21Z)
- Discovering ordinary differential equations that govern time-series [65.07437364102931]
We propose a transformer-based sequence-to-sequence model that recovers scalar autonomous ordinary differential equations (ODEs) in symbolic form from time-series data of a single observed solution of the ODE.
Our method is efficiently scalable: after one-time pretraining on a large set of ODEs, we can infer the governing laws of a new observed solution in a few forward passes of the model.
arXiv Detail & Related papers (2022-11-05T07:07:58Z)
- Constraining Gaussian Processes to Systems of Linear Ordinary Differential Equations [5.33024001730262]
LODE-GPs follow a system of linear homogeneous ODEs with constant coefficients.
We show the effectiveness of LODE-GPs in a number of experiments.
arXiv Detail & Related papers (2022-08-26T09:16:53Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
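The abstract does not spell out which diagonal estimation result is used; one common matrix-free recipe in this spirit (a Hutchinson/Bekas-style probe estimator for diag(A^{-1}), with each solve done by conjugate gradients so that A is only ever touched through matrix-vector products) is sketched below. Function names and defaults are illustrative assumptions, not the paper's API.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive definite A,
    given only the matrix-vector product v -> A v."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def estimate_inverse_diagonal(matvec, n, n_probes=64, seed=0):
    """Probe-vector estimate of diag(A^{-1}):
    diag(A^{-1}) ~ (sum_s t_s * A^{-1} t_s) / (sum_s t_s * t_s)
    with random +/-1 probes t_s, each solve done by CG."""
    rng = np.random.default_rng(seed)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(n_probes):
        t = rng.choice([-1.0, 1.0], size=n)
        num += t * conjugate_gradient(matvec, t)
        den += t * t
    return num / den
```

The matrix itself is never formed; only its action on vectors is needed.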
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- A Probabilistic State Space Model for Joint Inference from Differential Equations and Data [23.449725313605835]
We present a new class of solvers for ordinary differential equations (ODEs) that phrase the solution process directly in terms of Bayesian filtering.
It then becomes possible to perform approximate Bayesian inference on the latent force as well as the ODE solution in a single, linear complexity pass of an extended Kalman filter.
We demonstrate the expressiveness and performance of the algorithm by training a non-parametric SIRD model on data from the COVID-19 outbreak.
arXiv Detail & Related papers (2021-03-18T10:36:09Z)
- Multi-objective discovery of PDE systems using evolutionary approach [77.34726150561087]
In the paper, a multi-objective co-evolution algorithm is described.
The individual equations within the system and the system as a whole are evolved simultaneously.
In contrast to the single vector equation, a component-wise system is more suitable for expert interpretation and, therefore, for applications.
arXiv Detail & Related papers (2021-03-11T15:37:52Z)
- DiffPD: Differentiable Projective Dynamics with Contact [65.88720481593118]
We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
arXiv Detail & Related papers (2021-01-15T00:13:33Z)
- Non-intrusive surrogate modeling for parametrized time-dependent PDEs using convolutional autoencoders [0.0]
We present a non-intrusive surrogate modeling scheme based on machine learning for predictive modeling of complex systems described by parametrized time-dependent PDEs.
We use a convolutional autoencoder in conjunction with a feedforward neural network to establish a low-cost and accurate mapping from the problem's parametric space to its solution space.
arXiv Detail & Related papers (2021-01-14T11:34:58Z)
- Hierarchical Deep Learning of Multiscale Differential Equation Time-Steppers [5.6385744392820465]
We develop a hierarchy of deep neural network time-steppers to approximate the flow map of the dynamical system over a disparate range of time-scales.
The resulting model is purely data-driven and leverages features of the multiscale dynamics.
We benchmark our algorithm against state-of-the-art methods, such as LSTM, reservoir computing, and clockwork RNN.
arXiv Detail & Related papers (2020-08-22T07:16:53Z)
- Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
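As a toy version of the Jacobi variant described above (not the authors' implementation; equal-width layers are assumed so the iterate shapes stay fixed), every layer is applied in parallel to the previous sweep's states, and after at most L sweeps the result matches sequential evaluation exactly.

```python
import numpy as np

def jacobi_feedforward(layers, x, n_sweeps=None):
    """Evaluate h_L = f_L(...f_1(x)...) by Jacobi fixed-point iteration.
    Each sweep applies all layers in parallel to the previous sweep's
    states; after at most len(layers) sweeps this equals sequential
    evaluation.  Assumes every layer maps R^d -> R^d."""
    L = len(layers)
    n_sweeps = L if n_sweeps is None else n_sweeps
    h = [np.zeros_like(x) for _ in range(L)]        # guesses for h_1..h_L
    for _ in range(n_sweeps):
        inputs = [x] + h[:-1]                       # states from the last sweep
        h = [f(v) for f, v in zip(layers, inputs)]  # embarrassingly parallel
    return h[-1]

# Sanity check against plain sequential evaluation
layers = [lambda v: np.tanh(0.5 * v) for _ in range(4)]
x = np.ones(3)
seq = x
for f in layers:
    seq = f(seq)
assert np.allclose(jacobi_feedforward(layers, x), seq)
```

A Gauss-Seidel sweep would instead reuse states already updated within the current sweep, trading parallelism within a sweep for faster propagation of information.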
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.