Model-free prediction of emergence of extreme events in a parametrically
driven nonlinear dynamical system by Deep Learning
- URL: http://arxiv.org/abs/2107.08819v1
- Date: Wed, 14 Jul 2021 14:48:57 GMT
- Authors: J. Meiyazhagan, S. Sudharsan, and M. Senthilvelan
- Abstract summary: We predict the emergence of extreme events in a parametrically driven nonlinear dynamical system.
We use three Deep Learning models, namely Multi-Layer Perceptron, Convolutional Neural Network and Long Short-Term Memory.
We find that the Long Short-Term Memory model can serve as the best model to forecast the chaotic time series.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We predict the emergence of extreme events in a parametrically driven
nonlinear dynamical system using three Deep Learning models, namely Multi-Layer
Perceptron, Convolutional Neural Network and Long Short-Term Memory. The models
are trained on the training set and then used to predict the test set. The
actual and predicted time series are overlaid to visualize each model's
performance. Comparing the Root Mean Square Error between the predicted and
actual values for all three models, we find that the Long Short-Term Memory
model is the best of the three at forecasting the chaotic time series and at
predicting the emergence of extreme events for the considered system.
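As a rough illustration of the workflow the abstract describes (train on part of a chaotic series, forecast the held-out portion, score by Root Mean Square Error), the sketch below uses the logistic map as a hypothetical stand-in for the parametrically driven system and a naive persistence forecaster in place of the trained MLP/CNN/LSTM models; all names and parameters here are illustrative, not from the paper.

```python
import math

def logistic_map(x0, r, n):
    # Hypothetical stand-in chaotic series; the paper uses a
    # parametrically driven nonlinear oscillator instead.
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def make_windows(series, window):
    # Each sample: `window` past values -> the next value (one-step forecast).
    X = [series[i:i + window] for i in range(len(series) - window)]
    y = series[window:]
    return X, y

def rmse(pred, actual):
    # Root Mean Square Error, the metric used to rank the three models.
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

series = logistic_map(0.2, 3.9, 1000)
X, y = make_windows(series, window=4)
split = int(0.8 * len(X))  # assumed 80/20 train-test split
# Persistence baseline (repeat the last observed value) stands in for
# the trained MLP/CNN/LSTM predictors compared in the paper.
baseline_pred = [x[-1] for x in X[split:]]
print(round(rmse(baseline_pred, y[split:]), 3))
```

In the paper's setting, each trained network would replace the persistence baseline, and the model with the smallest test-set RMSE (the LSTM, per the abstract) would be selected.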
Related papers
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
- Probabilistic AutoRegressive Neural Networks for Accurate Long-range Forecasting [6.295157260756792]
We introduce Probabilistic AutoRegressive Neural Networks (PARNN).
PARNN is capable of handling complex time series data exhibiting non-stationarity, nonlinearity, non-seasonality, long-range dependence, and chaotic patterns.
We evaluate the performance of PARNN against standard statistical, machine learning, and deep learning models, including Transformers, NBeats, and DeepAR.
arXiv Detail & Related papers (2022-04-01T17:57:36Z)
- On Optimal Early Stopping: Over-informative versus Under-informative Parametrization [13.159777131162961]
We develop theoretical results to reveal the relationship between the optimal early stopping time and model dimension.
We demonstrate experimentally that our theoretical results on the optimal early stopping time correspond to the training process of deep neural networks.
arXiv Detail & Related papers (2022-02-20T18:20:06Z)
- Model-assisted deep learning of rare extreme events from partial observations [0.0]
To predict rare extreme events using deep neural networks, one encounters the so-called small data problem.
Here, we investigate a model-assisted framework where the training data is obtained from numerical simulations.
We find that long short-term memory networks are the most robust to noise and yield relatively accurate predictions.
arXiv Detail & Related papers (2021-11-04T23:24:22Z)
- Uncertainty-Aware Time-to-Event Prediction using Deep Kernel Accelerated Failure Time Models [11.171712535005357]
We propose Deep Kernel Accelerated Failure Time models for the time-to-event prediction task.
Our model shows better point estimate performance than recurrent neural network based baselines in experiments on two real-world datasets.
arXiv Detail & Related papers (2021-07-26T14:55:02Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting [98.57851612518758]
Probabilistic time series forecasting involves estimating the distribution of future values based on their history.
We propose a deep state space model for probabilistic time series forecasting whereby the non-linear emission model and transition model are parameterized by networks.
We show in experiments that our model produces accurate and sharp probabilistic forecasts.
arXiv Detail & Related papers (2021-01-31T06:49:33Z)
- Learning Accurate Long-term Dynamics for Model-based Reinforcement Learning [7.194382512848327]
We propose a new parametrization for supervised learning on state-action data that predicts stably at longer horizons.
Our results in simulated and experimental robotic tasks show that our trajectory-based models yield significantly more accurate long term predictions.
arXiv Detail & Related papers (2020-12-16T18:47:37Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- A Multi-Channel Neural Graphical Event Model with Negative Evidence [76.51278722190607]
Event datasets are sequences of events of various types occurring irregularly over the time-line.
We propose a non-parametric deep neural network approach in order to estimate the underlying intensity functions.
arXiv Detail & Related papers (2020-02-21T23:10:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.