ISALT: Inference-based schemes adaptive to large time-stepping for
locally Lipschitz ergodic systems
- URL: http://arxiv.org/abs/2102.12669v1
- Date: Thu, 25 Feb 2021 03:51:58 GMT
- Title: ISALT: Inference-based schemes adaptive to large time-stepping for
locally Lipschitz ergodic systems
- Authors: Xingjie Li, Fei Lu, Felix X.-F. Ye
- Abstract summary: We introduce a framework to construct inference-based schemes adaptive to large time-steps from data.
We show that ISALT can tolerate larger time-step magnitudes than plain numerical schemes.
It reaches optimal accuracy in reproducing the invariant measure when the time-step is medium-large.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient simulation of SDEs is essential in many applications,
particularly for ergodic systems, which demand faithful simulation of both
short-time dynamics and large-time statistics. However, locally Lipschitz SDEs
often require special treatment, such as implicit schemes with small
time-steps, to accurately simulate the ergodic measure. We introduce a
framework to construct inference-based schemes adaptive to large time-steps
(ISALT) from data, achieving a reduction in computation time by several orders
of magnitude. The key is the
statistical learning of an approximation to the infinite-dimensional
discrete-time flow map. We explore the use of numerical schemes (such as the
Euler-Maruyama, a hybrid RK4, and an implicit scheme) to derive informed basis
functions, leading to a parameter inference problem. We introduce a scalable
algorithm to estimate the parameters by least squares, and we prove the
convergence of the estimators as data size increases.
We test ISALT on three non-globally Lipschitz SDEs: the 1D double-well
potential, a 2D multi-scale gradient system, and the 3D stochastic Lorenz
equation with degenerate noise. Numerical results show that ISALT can tolerate
larger time-step magnitudes than plain numerical schemes, and it reaches
optimal accuracy in reproducing the invariant measure when the time-step is
medium-large.
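Once the scheme-informed basis is fixed, the inference step described in the abstract reduces to ordinary least squares. Below is a minimal 1D sketch of that idea, not the authors' implementation: the basis [x, f(x)*Delta] mimics the Euler-Maruyama terms, and the noise amplitude is estimated from the regression residuals (a simplification; all names are illustrative).

```python
import numpy as np

def fit_isalt(X, f, Delta):
    # X: 1D trajectory sampled at the large step Delta by an accurate
    # reference solver; f: drift. Basis = Euler-Maruyama terms [x, f(x)*Delta].
    x, x_next = X[:-1], X[1:]
    Phi = np.column_stack([x, f(x) * Delta])
    c, *_ = np.linalg.lstsq(Phi, x_next, rcond=None)       # least-squares fit
    sigma_eff = (x_next - Phi @ c).std() / np.sqrt(Delta)  # residual noise level
    return c, sigma_eff

def simulate_isalt(x0, c, sigma_eff, f, Delta, n, rng):
    # Evolve with the inferred scheme at the large time-step Delta.
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(c[0] * x + c[1] * f(x) * Delta
                  + sigma_eff * np.sqrt(Delta) * rng.standard_normal())
    return np.array(xs)

# e.g. the double-well drift from the paper's first test: f = lambda x: x - x**3
```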
Related papers
- Enhancing Computational Efficiency in Multiscale Systems Using Deep Learning of Coordinates and Flow Maps [0.0]
This paper showcases how deep learning techniques can be used to develop a precise time-stepping approach for multiscale systems.
The resulting framework achieves state-of-the-art predictive accuracy at a lower computational cost.
arXiv Detail & Related papers (2024-04-28T14:05:13Z)
- GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis [70.24111297192057]
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in real time.
The proposed method enables 2K-resolution rendering under a sparse-view camera setting.
arXiv Detail & Related papers (2023-12-04T18:59:55Z)
- High-dimensional scaling limits and fluctuations of online least-squares SGD with smooth covariance [16.652085114513273]
We derive high-dimensional scaling limits and fluctuations for the online least-squares stochastic gradient descent (SGD) algorithm.
Our results have several applications, including characterization of the limiting mean-square estimation or prediction errors and their fluctuations.
arXiv Detail & Related papers (2023-04-03T03:50:00Z)
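For concreteness, the iterate whose limits the entry above studies is one-pass online least-squares SGD; a minimal sketch under an assumed Gaussian toy data model (step size, scaling, and the ground-truth vector are illustrative):

```python
import numpy as np

def online_ls_sgd(stream, d, eta):
    # One pass over the data: each (x, y) is seen once, and the iterate
    # follows the gradient of the per-sample loss 0.5 * (x @ w - y)**2.
    w = np.zeros(d)
    for x, y in stream:
        w -= eta * (x @ w - y) * x
    return w

rng = np.random.default_rng(0)
d = 50
w_star = np.ones(d) / np.sqrt(d)                   # hypothetical ground truth
X = rng.standard_normal((10_000, d)) / np.sqrt(d)  # isotropic toy covariance
stream = ((x, x @ w_star + 0.1 * rng.standard_normal()) for x in X)
w_hat = online_ls_sgd(stream, d, eta=0.5)
```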
- Gaussian process regression and conditional Karhunen-Loève models for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z)
- Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points that satisfy these stability conditions.
arXiv Detail & Related papers (2022-10-14T15:20:17Z)
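A simple way to see the minimum-separation condition of the entry above in action is a greedy filter; the paper's cover-tree construction is more refined, so this is only an illustrative stand-in:

```python
import numpy as np

def min_sep_inducing_points(X, min_sep):
    # Keep a candidate only if it is at least min_sep from every point
    # already kept, so the inducing set has a guaranteed separation.
    Z = [X[0]]
    for x in X[1:]:
        if min(np.linalg.norm(x - z) for z in Z) >= min_sep:
            Z.append(x)
    return np.array(Z)

# e.g. 2D geospatial-style inputs:
Z = min_sep_inducing_points(np.random.default_rng(0).uniform(size=(500, 2)), 0.1)
```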
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point process inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields better estimates of pattern latency than the state of the art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
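The discretized setting FaDIn works in can be sketched in a few lines: with a finite-support kernel given on a dt-grid, the excitation term of the Hawkes intensity is a causal convolution of binned event counts. This shows the setting, not the paper's solver; all names and values are illustrative.

```python
import numpy as np

def discretized_intensity(events, mu, kernel, dt, T):
    # kernel[l] = excitation at lag l*dt; kernel[0] = 0 so an event does
    # not excite its own bin (keeps the convolution causal).
    n = int(round(T / dt))
    bins = np.minimum((np.asarray(events) / dt).astype(int), n - 1)
    counts = np.bincount(bins, minlength=n)
    return mu + np.convolve(counts, kernel)[:n]

# Baseline 0.5 and an exponential kernel truncated to 20 bins of width dt:
lam = discretized_intensity(
    [0.3, 0.7, 2.1], mu=0.5,
    kernel=np.r_[0.0, 0.8 * np.exp(-0.5 * np.arange(19))],
    dt=0.1, T=5.0)
```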
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated seamlessly with neural networks.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- Time Series Forecasting Using Manifold Learning [6.316185724124034]
We present a three-tier numerical framework based on manifold learning for the forecasting of high-dimensional time series.
In the first step, we embed the time series into a reduced low-dimensional space using a nonlinear manifold learning algorithm.
In the second step, we construct reduced-order regression models on the manifold to forecast the embedded dynamics.
In the final step, we lift the embedded time series back to the original high-dimensional space.
arXiv Detail & Related papers (2021-10-07T17:09:59Z)
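The three steps of the entry above map directly onto code. A minimal sketch with PCA standing in for the nonlinear embedding and a linear flow map as the reduced-order regressor (both simplifying assumptions, not the paper's choices):

```python
import numpy as np

def forecast_manifold(X, d, steps):
    # X: (T, D) high-dimensional series; d: reduced dimension (d <= min(T, D)).
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    Y = (X - mean) @ Vt[:d].T                            # (1) embed
    A, *_ = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)   # (2) one-step map y' ~ y @ A
    y, preds = Y[-1], []
    for _ in range(steps):
        y = y @ A
        preds.append(y @ Vt[:d] + mean)                  # (3) lift back
    return np.array(preds)
```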
- Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
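The extra-gradient template the entry above builds on is short enough to state in full; here is a generic sketch on a toy bilinear saddle point, not the paper's Wasserstein DRSL instantiation:

```python
import numpy as np

def extragradient(grad_x, grad_y, x, y, eta=0.1, iters=200):
    # For min_x max_y f(x, y): step to a midpoint first, then update the
    # original point using the midpoint's gradient (the "extra" step).
    for _ in range(iters):
        xm, ym = x - eta * grad_x(x, y), y + eta * grad_y(x, y)
        x, y = x - eta * grad_x(xm, ym), y + eta * grad_y(xm, ym)
    return x, y

# Toy bilinear saddle f(x, y) = x * y with equilibrium (0, 0):
x, y = extragradient(lambda x, y: y, lambda x, y: x, 1.0, 1.0)
```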
- The Seven-League Scheme: Deep learning for large time step Monte Carlo simulations of stochastic differential equations [0.0]
We propose an accurate data-driven numerical scheme to solve stochastic differential equations (SDEs).
The SDE discretization is built up by means of a chaos expansion method on the basis of accurately determined stochastic collocation (SC) points.
Using a compression-decompression and collocation technique, we can drastically reduce the number of neural network functions that have to be learned.
arXiv Detail & Related papers (2020-09-07T16:06:20Z)
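The sampling mechanics of the entry above can be illustrated with geometric Brownian motion, where the conditional inverse CDF is known in closed form and stands in for the trained network: evaluate it at a few stochastic-collocation points and interpolate. Node choice, linear interpolation, and the clipped tails are all simplifications.

```python
import numpy as np
from scipy.stats import norm

mu, sig = 0.05, 0.2                              # toy GBM parameters
sc_z = np.array([-np.sqrt(3), 0.0, np.sqrt(3)])  # 3-point Gauss-Hermite nodes
sc_p = norm.cdf(sc_z)                            # SC probabilities

def inv_cdf(x, dt, p):
    # Exact conditional inverse CDF of GBM; in the paper a trained network
    # would supply these values at the SC points instead.
    return x * np.exp((mu - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * norm.ppf(p))

def big_step(x, dt, rng):
    nodes = inv_cdf(x, dt, sc_p)      # values at the collocation points
    u = rng.uniform()
    return np.interp(u, sc_p, nodes)  # interpolate (tails clipped in this toy)
```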
- Hierarchical Deep Learning of Multiscale Differential Equation Time-Steppers [5.6385744392820465]
We develop a hierarchy of deep neural network time-steppers to approximate the flow map of the dynamical system over a disparate range of time-scales.
The resulting model is purely data-driven and leverages features of the multiscale dynamics.
We benchmark our algorithm against state-of-the-art methods, such as LSTM, reservoir computing, and clockwork RNN.
arXiv Detail & Related papers (2020-08-22T07:16:53Z)
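One way to read the "hierarchy of time-steppers" in the entry above is as a composition of flow maps at different scales; a toy sketch with exact flow maps standing in for the trained networks (the greedy scheduling rule is an assumption):

```python
import numpy as np

def hierarchical_advance(x, flow_maps, n_steps):
    # flow_maps[k] approximates the flow over k base steps; use the
    # coarsest map as often as possible, then finer ones for the remainder.
    for k in sorted(flow_maps, reverse=True):
        while n_steps >= k:
            x = flow_maps[k](x)
            n_steps -= k
    return x

# Exact flow maps of dx/dt = -x with base step dt = 0.01 stand in for
# trained networks at scales 1, 10, and 100 steps:
dt = 0.01
flows = {k: (lambda k: lambda x: x * np.exp(-k * dt))(k) for k in (1, 10, 100)}
x_final = hierarchical_advance(1.0, flows, 1234)   # ~= exp(-12.34)
```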
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.