Computing large deviation prefactors of stochastic dynamical systems
based on machine learning
- URL: http://arxiv.org/abs/2306.11418v1
- Date: Tue, 20 Jun 2023 09:59:45 GMT
- Title: Computing large deviation prefactors of stochastic dynamical systems
based on machine learning
- Authors: Yang Li, Shenglan Yuan, Linghongzhi Lu, Xianbin Liu
- Abstract summary: We present large deviation theory, which characterizes the exponential estimates for rare events of stochastic dynamical systems in the weak-noise limit.
We design a neural network framework to compute the quasipotential, most probable paths, and prefactors based on the orthogonal decomposition of the vector field.
Numerical experiments demonstrate its power in exploring the internal mechanisms of rare events triggered by weak random fluctuations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present large deviation theory, which characterizes
the exponential estimates for rare events of stochastic dynamical systems in the
limit of weak noise. We consider the next-to-leading-order approximation for a
more accurate calculation of the mean exit time by computing large deviation
prefactors with machine learning. More specifically, we design a neural network
framework to compute the quasipotential, most probable paths, and prefactors
based on the orthogonal decomposition of the vector field. We corroborate the
effectiveness and accuracy of our algorithm with a practical example. Numerical
experiments demonstrate its power in exploring the internal mechanisms of rare
events triggered by weak random fluctuations.
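The orthogonal decomposition referenced in the abstract writes the drift as f = -∇V + g with ∇V ⊥ g everywhere, where V plays the role of the quasipotential. The following numpy sketch uses a hypothetical toy 2-D system (not the paper's network or example) just to make that structure concrete:

```python
import numpy as np

# Toy 2-D system whose drift admits the orthogonal decomposition
# f(x) = -grad V(x) + g(x), with grad V . g = 0 pointwise.
# V is the quasipotential: in the weak-noise limit, exit probabilities
# scale like exp(-V/eps), and the prefactor gives the next-to-leading order.
# Illustrative toy example only, not the authors' system.

def grad_V(x):
    # V(x, y) = (x^2 + y^2) / 2  =>  grad V = (x, y)
    return x

def g(x):
    # Rotational (non-gradient) component, orthogonal to grad V everywhere.
    return np.array([x[1], -x[0]])

def f(x):
    # Full drift assembled from the decomposition.
    return -grad_V(x) + g(x)

# Verify the orthogonality condition grad V . g = 0 at random points.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 2))
residual = max(abs(np.dot(grad_V(p), g(p))) for p in pts)
print(residual)  # 0.0
```

A network-based method would instead parametrize V (and g) by neural networks and penalize this orthogonality residual as part of the loss.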
Related papers
- Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE [68.6018458996143]
We propose QuEE, a more general dynamic network that combines both quantization and early exiting.
Our algorithm can be seen as a form of soft early exiting or input-dependent compression.
The crucial factor of our approach is accurate prediction of the potential accuracy improvement achievable through further computation.
arXiv Detail & Related papers (2024-06-20T15:25:13Z)
- Estimating Koopman operators with sketching to provably learn large scale dynamical systems [37.18243295790146]
The theory of Koopman operators makes it possible to deploy non-parametric machine learning algorithms to predict and analyze complex dynamical systems.
We boost the efficiency of different kernel-based Koopman operator estimators using random projections.
We establish non-asymptotic error bounds giving a sharp characterization of the trade-offs between statistical learning rates and computational efficiency.
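As a rough illustration of the sketching idea (not the paper's estimator or its error analysis), one can compress the snapshot matrices with a Gaussian random projection before the least-squares Koopman fit. For noiseless linear dynamics with linear observables, the Koopman matrix is the system matrix itself, so recovery can be checked directly:

```python
import numpy as np

# EDMD-style Koopman estimate with a random-projection (sketching) step.
# Hypothetical minimal sketch; parameter choices are ours.
rng = np.random.default_rng(1)

# Linear toy dynamics x_{t+1} = A x_t: restricted to linear observables,
# the Koopman operator is A, so we can verify recovery exactly.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
X = rng.normal(size=(500, 2))   # snapshots x_t (rows)
Y = X @ A.T                     # snapshots x_{t+1}

# Sketching: compress the 500 snapshot rows down to 100 random
# combinations before the least-squares solve.
S = rng.normal(size=(100, 500)) / np.sqrt(100)

# Solve (S X) K^T ~= (S Y) in the least-squares sense.
K = np.linalg.lstsq(S @ X, S @ Y, rcond=None)[0].T
print(np.allclose(K, A, atol=1e-8))  # True
```

With noisy or nonlinear data the sketch trades a controlled amount of statistical accuracy for the smaller solve, which is the trade-off the paper quantifies.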
arXiv Detail & Related papers (2023-06-07T15:30:03Z)
- Inexact iterative numerical linear algebra for neural network-based spectral estimation and rare-event prediction [0.0]
Leading eigenfunctions of the transition operator are useful for visualization.
We develop inexact iterative linear algebra methods for computing these eigenfunctions.
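The simplest member of this iterative family is power iteration run for a fixed computational budget. A toy sketch on a hypothetical 3-state Markov transition matrix (not the paper's neural-network setting):

```python
import numpy as np

# Power iteration for the leading left eigenfunction of a transition
# operator. Toy row-stochastic 3-state chain (hypothetical example).
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

v = np.ones(3) / 3.0        # initial guess for the stationary density
for _ in range(500):        # "inexact": stop after a fixed iteration budget
    v = v @ P               # apply the adjoint operator (left eigenvector)
    v /= v.sum()            # renormalize to a probability vector

# v now approximates the stationary distribution (eigenvalue 1).
print(np.allclose(v @ P, v, atol=1e-10))  # True
```

Subsequent eigenfunctions, which carry the slow dynamics relevant to rare events, are obtained by deflation or subspace iteration on top of the same primitive.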
arXiv Detail & Related papers (2023-03-22T13:07:03Z) - Scalable computation of prediction intervals for neural networks via
matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the complexity of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Consistency of mechanistic causal discovery in continuous-time using Neural ODEs [85.7910042199734]
We consider causal discovery in continuous-time for the study of dynamical systems.
We propose a causal discovery algorithm based on penalized Neural ODEs.
arXiv Detail & Related papers (2021-05-06T08:48:02Z)
- A Machine Learning Framework for Computing the Most Probable Paths of Stochastic Dynamical Systems [5.028470487310566]
We develop a machine learning framework to compute the most probable paths in the sense of the Onsager-Machlup action functional.
Specifically, we reformulate the boundary value problem of the Hamiltonian system and design a prototypical neural network to remedy the shortcomings of the shooting method.
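To make the objective concrete, here is a minimal sketch that minimizes a discretized Freidlin-Wentzell-type action over a path with clamped endpoints, using plain gradient descent in place of a neural-network parametrization (toy drift and all parameter choices are ours, not the paper's):

```python
import numpy as np

# Most probable path as the minimizer of the discretized action
# S = (1/2) * integral |x' - f(x)|^2 dt  with both endpoints fixed.

def f(x):                 # toy double-well drift
    return x - x**3

def fprime(x):
    return 1.0 - 3.0 * x**2

N, T = 200, 10.0
dt = T / N
x = np.linspace(-1.0, 0.0, N + 1)   # initial guess: straight line from
                                     # the well at -1 to the saddle at 0

def action(x):
    r = np.diff(x) / dt - f(x[:-1])  # residual x' - f(x) per segment
    return 0.5 * dt * np.sum(r**2)

S0 = action(x)
lr = 1e-3
for _ in range(2000):
    r = np.diff(x) / dt - f(x[:-1])
    grad = np.zeros_like(x)
    # dS/dx_j for interior points; endpoints stay clamped.
    grad[1:-1] = r[:-1] - r[1:] * (1.0 + dt * fprime(x[1:-1]))
    x[1:-1] -= lr * grad[1:-1]

print(action(x) < S0)   # True: descent lowered the action
```

A neural-network variant would parametrize the path (or its time reparametrization) by a network and minimize the same discretized action, which sidesteps the sensitivity of shooting to the initial costate guess.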
arXiv Detail & Related papers (2020-10-01T20:01:37Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why deep neural networks perform poorly under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of stochasticity in its success is still unclear.
We show that multiplicative noise commonly arises in the parameters due to minibatch variance.
A detailed analysis of key factors, including step size and data, shows similar results across state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.