A Machine Learning Framework for Computing the Most Probable Paths of
Stochastic Dynamical Systems
- URL: http://arxiv.org/abs/2010.04114v2
- Date: Fri, 25 Dec 2020 02:36:29 GMT
- Title: A Machine Learning Framework for Computing the Most Probable Paths of
Stochastic Dynamical Systems
- Authors: Yang Li, Jinqiao Duan and Xianbin Liu
- Abstract summary: We develop a machine learning framework to compute the most probable paths in the sense of Onsager-Machlup action functional theory.
Specifically, we reformulate the boundary value problem of the Hamiltonian system and design a neural network to remedy the shortcomings of the shooting method.
- Score: 5.028470487310566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of transition phenomena between metastable states induced by
noise plays a fundamental role in a broad range of nonlinear systems. The
computation of the most probable paths is a key issue for understanding the
mechanism of transition behaviors. The shooting method is a common technique for
this purpose, solving the Euler-Lagrange equation for the associated action
functional, but it loses its efficacy in high-dimensional systems. In the
present work, we develop a machine learning framework to compute the most
probable paths in the sense of Onsager-Machlup action functional theory.
Specifically, we reformulate the boundary value problem of the Hamiltonian system
and design a neural network to remedy the shortcomings of the shooting method. The
successful application of our algorithm to several prototypical examples
demonstrates its efficacy and accuracy for stochastic systems with both
(Gaussian) Brownian noise and (non-Gaussian) L\'evy noise. This novel approach
is effective in exploring the internal mechanisms of rare events triggered by
random fluctuations in various scientific fields.
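For readers who want a concrete starting point, the following is a minimal sketch and not the authors' implementation: it directly minimizes a discretized Onsager-Machlup-type action, roughly (1/2) ∫ |ẋ - f(x)|² dt, over paths pinned at the two metastable states of an assumed one-dimensional double-well drift f(x) = x - x³, standing in for the paper's neural-network parameterization and Hamiltonian boundary-value reformulation.

```python
# Minimal sketch (assumed toy setting, not the authors' method): compute a most
# probable transition path by minimizing a discretized Onsager-Machlup-type
# action over a path pinned at two metastable states of a double-well drift.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x - x**3          # assumed double-well drift, stable states at x = -1 and x = +1
T, N = 10.0, 200                # transition time horizon and number of grid intervals
dt = T / N
x_left, x_right = -1.0, 1.0     # boundary conditions: start and end metastable states

def action(interior):
    """Discretized action 0.5 * sum |dx/dt - f(x_mid)|^2 * dt with fixed endpoints."""
    path = np.concatenate(([x_left], interior, [x_right]))
    dxdt = np.diff(path) / dt
    x_mid = 0.5 * (path[:-1] + path[1:])
    return 0.5 * np.sum((dxdt - f(x_mid)) ** 2) * dt

# Initial guess: a straight line between the two metastable states.
guess = np.linspace(x_left, x_right, N + 1)[1:-1]
result = minimize(action, guess, method="L-BFGS-B")
most_probable_path = np.concatenate(([x_left], result.x, [x_right]))
print("action along the optimized path:", result.fun)
```

In the paper's framework the path is instead obtained from a neural network and the reformulated Hamiltonian boundary value problem, which is what allows the approach to scale to high-dimensional systems and to L\'evy-noise-driven dynamics.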
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to the coefficients' regularity.
Our method is available as an open-source Python library.
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - Sparse identification of quasipotentials via a combined data-driven method [4.599618895656792]
We leverage machine learning via the combination of two data-driven techniques, namely a neural network and a sparse regression algorithm, to obtain symbolic expressions of quasipotential functions.
We show that our approach discovers a parsimonious quasipotential equation for an archetypal model with a known exact quasipotential and for the dynamics of a nanomechanical resonator.
arXiv Detail & Related papers (2024-07-06T11:27:52Z) - Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
arXiv Detail & Related papers (2024-06-07T16:43:54Z) - Learning minimal representations of stochastic processes with
variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing stochastic processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z) - Computing large deviation prefactors of stochastic dynamical systems
based on machine learning [4.474127100870242]
We present large deviation theory, which characterizes the exponential estimate for rare events in stochastic dynamical systems in the limit of weak noise.
We design a neural network framework to compute the quasipotential, most probable paths and prefactors based on the decomposition of the vector field.
Numerical experiments demonstrate its power in exploring the internal mechanisms of rare events triggered by weak random fluctuations.
arXiv Detail & Related papers (2023-06-20T09:59:45Z) - Dynamic Bayesian Learning and Calibration of Spatiotemporal Mechanistic
System [0.0]
We develop an approach for fully Bayesian learning and calibration of mechanistic models based on noisy observations.
We demonstrate this flexibility by solving problems arising in the analysis of nonlinear ordinary and partial differential equations.
arXiv Detail & Related papers (2022-08-12T23:17:46Z) - Learning effective dynamics from data-driven stochastic systems [2.4578723416255754]
This work is devoted to investigating the effective dynamics of slow-fast stochastic dynamical systems.
We propose a novel algorithm, including a neural network called Auto-SDE, to learn the slow manifold.
arXiv Detail & Related papers (2022-05-09T09:56:58Z) - Decimation technique for open quantum systems: a case study with
driven-dissipative bosonic chains [62.997667081978825]
Unavoidable coupling of quantum systems to external degrees of freedom leads to dissipative (non-unitary) dynamics.
We introduce a method to deal with these systems based on the calculation of a (dissipative) lattice Green's function.
We illustrate the power of this method with several examples of driven-dissipative bosonic chains of increasing complexity.
arXiv Detail & Related papers (2022-02-15T19:00:09Z) - Structure-Preserving Learning Using Gaussian Processes and Variational
Integrators [62.31425348954686]
We propose the combination of a variational integrator for the nominal dynamics of a mechanical system and learning residual dynamics with Gaussian process regression.
We extend our approach to systems with known kinematic constraints and provide formal bounds on the prediction uncertainty.
arXiv Detail & Related papers (2021-12-10T11:09:29Z) - Quantum algorithms for quantum dynamics: A performance study on the
spin-boson model [68.8204255655161]
Quantum algorithms for quantum dynamics simulations are traditionally based on implementing a Trotter-approximation of the time-evolution operator.
Variational quantum algorithms have become an indispensable alternative, enabling small-scale simulations on present-day hardware.
We show that, despite providing a clear reduction of quantum gate cost, the variational method in its current implementation is unlikely to lead to a quantum advantage.
arXiv Detail & Related papers (2021-08-09T18:00:05Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success is still unclear.
We show that multiplicative noise, as it commonly arises due to variance in local rates of convergence, leads to heavy-tailed stationary behaviour in the parameters.
A detailed analysis describes the key factors, including step size and data, and the qualitative predictions match observed behaviour on state-of-the-art neural network models (see the sketch below).
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
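As a loose illustration of the last entry's theme (an assumption-laden sketch, not the paper's analysis), the following simulates a Kesten-type random recurrence in which light-tailed multiplicative factors nevertheless produce heavy-tailed stationary behaviour of the iterates.

```python
# Illustrative sketch only: multiplicative noise in a linear random recurrence
# x_{k+1} = a_k * x_k + b_k yields heavy-tailed stationary behaviour even though
# the factors a_k and b_k are Gaussian (light-tailed).
import numpy as np

rng = np.random.default_rng(0)
n_chains, n_steps = 10_000, 1_000
x = np.zeros(n_chains)
for _ in range(n_steps):
    a = rng.normal(0.9, 0.7, size=n_chains)   # multiplicative factor: stable on average, occasionally expanding
    b = rng.normal(0.0, 0.1, size=n_chains)   # additive noise
    x = a * x + b

# A Gaussian has excess kurtosis 0; the stationary iterates are far heavier-tailed.
excess_kurtosis = ((x - x.mean()) ** 4).mean() / x.var() ** 2 - 3
print("excess kurtosis of stationary samples:", excess_kurtosis)
```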
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.