Using Machine Learning to predict extreme events in the Hénon map
- URL: http://arxiv.org/abs/2002.10268v1
- Date: Thu, 20 Feb 2020 15:56:20 GMT
- Title: Using Machine Learning to predict extreme events in the Hénon map
- Authors: Martin Lellep, Jonathan Prexl, Moritz Linkmann, and Bruno Eckhardt
- Abstract summary: We analyze the performance of one algorithm for the prediction of extreme events in the two-dimensional Hénon map at the classical parameters.
Similar relations between the intrinsic chaotic properties of the dynamics and ML parameters might be observable in other systems as well.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning (ML) inspired algorithms provide a flexible set of tools for
analyzing and forecasting chaotic dynamical systems. We here analyze the
performance of one algorithm for the prediction of extreme events in the
two-dimensional Hénon map at the classical parameters. The task is to
determine whether a trajectory will exceed a threshold after a set number of
time steps into the future. This task has a geometric interpretation within the
dynamics of the Hénon map, which we use to gauge the performance of the
neural networks that are used in this work. We analyze the dependence of the
success rate of the ML models on the prediction time $T$, the number of
training samples $N_T$ and the size of the network $N_p$. We observe that in
order to maintain a certain accuracy, $N_T \propto \exp(2 h T)$ and $N_p \propto
\exp(h T)$, where $h$ is the topological entropy. Similar relations between the
intrinsic chaotic properties of the dynamics and ML parameters might be
observable in other systems as well.
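To make the task concrete, here is a minimal illustrative sketch (not the authors' code): it iterates the Hénon map at the classical parameters a = 1.4, b = 0.3, labels each state by whether the orbit exceeds a threshold exactly T steps later, and fits a small feed-forward classifier. The threshold value, horizon, sample count, and network size are assumptions chosen only for illustration.

```python
# Illustrative sketch, not the paper's implementation.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

A, B = 1.4, 0.3        # classical Hénon parameters
T = 5                  # prediction horizon in map iterations (illustrative choice)
THRESHOLD = 1.0        # call it an "extreme event" when x exceeds this value (assumed)

def henon_orbit(n_steps, x0=0.1, y0=0.1, discard=1000):
    """Iterate x' = 1 - a*x^2 + y, y' = b*x, discarding the initial transient."""
    x, y = x0, y0
    orbit = np.empty((n_steps, 2))
    for i in range(discard + n_steps):
        x, y = 1.0 - A * x * x + y, B * x
        if i >= discard:
            orbit[i - discard] = (x, y)
    return orbit

orbit = henon_orbit(60_000)
X = orbit[:-T]                                  # current state (x_n, y_n)
y = (orbit[T:, 0] > THRESHOLD).astype(int)      # does x exceed the threshold T steps later?

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Small fully connected network; its parameter count loosely plays the role of N_p.
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"T = {T}, test accuracy = {clf.score(X_te, y_te):.3f}")
```

Under the scaling reported above, holding the accuracy of such a classifier fixed while extending the horizon by one iteration would require roughly a factor of $e^{2h}$ more training data; with the topological entropy of the Hénon map commonly quoted near $h \approx 0.46$, that is about 2.5 times more data per added step.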
Related papers
- The Optimization Landscape of SGD Across the Feature Learning Strength [102.1353410293931]
We study the effect of scaling $\gamma$ across a variety of models and datasets in the online training setting.
We find that optimal online performance is often found at large $gamma$.
Our findings indicate that analytical study of the large-$\gamma$ limit may yield useful insights into the dynamics of representation learning in performant models.
arXiv Detail & Related papers (2024-10-06T22:30:14Z) - Predicting Ground State Properties: Constant Sample Complexity and Deep Learning Algorithms [48.869199703062606]
A fundamental problem in quantum many-body physics is that of finding ground states of local Hamiltonians.
We introduce two approaches that achieve a constant sample complexity, independent of system size $n$, for learning ground state properties.
arXiv Detail & Related papers (2024-05-28T18:00:32Z) - Comparative Analysis of Predicting Subsequent Steps in Hénon Map [0.0]
This study evaluates the performance of different machine learning models in predicting the evolution of the Hénon map.
Results indicate that LSTM networks demonstrate superior predictive accuracy, particularly in extreme event prediction.
This research underscores the significance of machine learning in elucidating chaotic dynamics.
arXiv Detail & Related papers (2024-05-15T17:32:31Z) - A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z) - Convergence of Gradient Descent for Recurrent Neural Networks: A Nonasymptotic Analysis [16.893624100273108]
We analyze recurrent neural networks with diagonal hidden-to-hidden weight matrices trained with gradient descent in the supervised learning setting.
We prove that gradient descent can achieve optimality without massive overparameterization.
Our results are based on an explicit characterization of the class of dynamical systems that can be approximated and learned by recurrent neural networks.
arXiv Detail & Related papers (2024-02-19T15:56:43Z) - A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
arXiv Detail & Related papers (2024-02-02T01:41:38Z) - Online Algorithm for Node Feature Forecasting in Temporal Graphs [12.667148739430798]
In this paper, we propose mspace, an online algorithm for forecasting node features in temporal graphs.
We show that mspace performs at par with the state-of-the-art and even surpasses them on some datasets.
We also propose a technique to generate synthetic datasets to aid in evaluating node feature forecasting methods.
arXiv Detail & Related papers (2024-01-30T07:31:51Z) - Improved machine learning algorithm for predicting ground state properties [3.156207648146739]
We give a classical machine learning (ML) algorithm for predicting ground state properties with an inductive bias encoding geometric locality.
The proposed ML model can efficiently predict ground state properties of an $n$-qubit gapped local Hamiltonian after learning from only $\mathcal{O}(\log(n))$ data.
arXiv Detail & Related papers (2023-01-30T18:40:07Z) - Fast variable selection makes scalable Gaussian process BSS-ANOVA a speedy and accurate choice for tabular and time series regression [0.0]
Gaussian processes (GPs) are non-parametric regression engines with a long history.
One of a number of scalable GP approaches is the Karhunen-Loève (KL) decomposed kernel BSS-ANOVA, developed in 2009.
A new method of forward variable selection quickly and effectively limits the number of terms, yielding competitive accuracies.
arXiv Detail & Related papers (2022-05-26T23:41:43Z) - Meta-Learning for Koopman Spectral Analysis with Short Time-series [49.41640137945938]
Existing methods require long time-series for training neural networks.
We propose a meta-learning method for estimating embedding functions from unseen short time-series.
We experimentally demonstrate that the proposed method achieves better performance in terms of eigenvalue estimation and future prediction.
arXiv Detail & Related papers (2021-02-09T07:19:19Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.