Comparative Analysis of Predicting Subsequent Steps in Hénon Map
- URL: http://arxiv.org/abs/2405.10190v2
- Date: Thu, 23 May 2024 08:19:28 GMT
- Title: Comparative Analysis of Predicting Subsequent Steps in Hénon Map
- Authors: Vismaya V S, Alok Hareendran, Bharath V Nair, Sishu Shankar Muni, Martin Lellep,
- Abstract summary: This study evaluates the performance of different machine learning models in predicting the evolution of the Hénon map.
Results indicate that the LSTM network demonstrates superior predictive accuracy, particularly in extreme event prediction.
This research underscores the significance of machine learning in elucidating chaotic dynamics.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper explores the prediction of subsequent steps in the Hénon map using various machine learning techniques. The Hénon map, well known for its chaotic behaviour, finds applications in various fields including cryptography, image encryption, and pattern recognition. Machine learning methods, particularly deep learning, are increasingly essential for understanding and predicting chaotic phenomena. This study evaluates the performance of different machine learning models including Random Forest, Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM) networks, Support Vector Machines (SVM), and Feed Forward Neural Networks (FNN) in predicting the evolution of the Hénon map. Results indicate that the LSTM network demonstrates superior predictive accuracy, particularly in extreme event prediction. Furthermore, a comparison between LSTM and FNN models reveals the LSTM's advantage, especially for longer prediction horizons and larger datasets. This research underscores the significance of machine learning in elucidating chaotic dynamics and highlights the importance of model selection and dataset size in forecasting subsequent steps in chaotic systems.
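The dynamics the paper predicts are governed by the classical Hénon map, x_{n+1} = 1 - a·x_n² + y_n, y_{n+1} = b·x_n, with the standard chaotic parameters a = 1.4, b = 0.3. A minimal sketch of how one might generate a trajectory and frame it as (current state → next state) supervised pairs — function names and the initial condition are illustrative, not from the paper:

```python
def henon_step(x, y, a=1.4, b=0.3):
    """One iteration of the Hénon map with the classical chaotic parameters."""
    return 1.0 - a * x * x + y, b * x

def henon_trajectory(n, x0=0.0, y0=0.0):
    """Generate n successive (x, y) states of the Hénon map."""
    states = [(x0, y0)]
    for _ in range(n - 1):
        states.append(henon_step(*states[-1]))
    return states

# Supervised framing: each state is the input, its successor is the target.
# This is the kind of dataset a Random Forest, FNN, or LSTM could be fit on.
traj = henon_trajectory(1000)
X, Y = traj[:-1], traj[1:]
```

Starting from (0, 0), the first iterates are (1.0, 0.0) and (-0.4, 0.3), which is a quick sanity check before feeding the pairs to any of the models the paper compares.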
Related papers
- Hybridization of Persistent Homology with Neural Networks for Time-Series Prediction: A Case Study in Wave Height
We introduce a feature engineering method that enhances the predictive performance of neural network models.
Specifically, we leverage computational topology techniques to derive valuable topological features from input data.
For time-ahead predictions, the enhancements in $R^2$ score were significant for FNN, RNN, LSTM, and GRU models.
arXiv Detail & Related papers (2024-09-03T01:26:21Z) - Towards Scalable and Versatile Weight Space Learning
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z) - Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z) - How neural networks learn to classify chaotic time series
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset, the FashionMNIST vs MNIST dataset, FashionM
arXiv Detail & Related papers (2022-06-26T16:00:22Z) - Data-driven emergence of convolutional structure in neural networks
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Significant Wave Height Prediction based on Wavelet Graph Neural Network
"Soft computing" approaches, including machine learning and deep learning models, have achieved numerous successes in recent years.
A Wavelet Graph Neural Network (WGNN) approach is proposed to integrate the advantages of wavelet transform and graph neural network.
Experimental results show that the proposed WGNN approach outperforms other models, including the numerical models, the machine learning models, and several deep learning models.
arXiv Detail & Related papers (2021-07-20T13:34:48Z) - Spatiotemporal convolutional network for time-series prediction and causal inference
A neural network computing framework, i.N.N., was developed to efficiently and accurately render a multistep-ahead prediction of a time series.
The framework has great potential in practical applications in artificial intelligence (AI) or machine learning fields.
arXiv Detail & Related papers (2021-07-03T06:20:43Z) - Evaluation of deep learning models for multi-step ahead time series prediction
We present an evaluation study that compares the performance of deep learning models for multi-step ahead time series prediction.
Our deep learning methods comprise simple recurrent neural networks, long short-term memory (LSTM) networks, bidirectional LSTMs, encoder-decoder LSTM networks, and convolutional neural networks.
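All of the multi-step-ahead models compared in that study share the same supervised framing: a series is sliced into fixed-length input windows, each paired with the next several values as targets. A minimal windowing sketch, with function name and window sizes chosen for illustration only:

```python
def make_windows(series, n_in, n_out):
    """Slice a 1-D series into (input window, multi-step target) pairs.

    Each input window of length n_in is paired with the n_out values
    that immediately follow it, the framing used for multi-step-ahead
    prediction with RNN/LSTM/CNN models.
    """
    pairs = []
    for i in range(len(series) - n_in - n_out + 1):
        pairs.append((series[i:i + n_in],
                      series[i + n_in:i + n_in + n_out]))
    return pairs

# Toy series 0..9: the first pair is inputs [0, 1, 2, 3], targets [4, 5].
pairs = make_windows(list(range(10)), n_in=4, n_out=2)
```

Larger n_out corresponds to the longer prediction horizons on which the main paper reports the LSTM's advantage over the FNN.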
arXiv Detail & Related papers (2021-03-26T04:07:11Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.