Accuracy of neural networks for the simulation of chaotic dynamics:
precision of training data vs precision of the algorithm
- URL: http://arxiv.org/abs/2008.04222v2
- Date: Fri, 6 Nov 2020 16:03:25 GMT
- Title: Accuracy of neural networks for the simulation of chaotic dynamics:
precision of training data vs precision of the algorithm
- Authors: S. Bompas, B. Georgeot and D. Guéry-Odelin
- Abstract summary: We simulate the Lorenz system with different precisions using three different neural network techniques adapted to time series.
Our results show that the ESN network is better at accurately predicting the dynamics of the system.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the influence of the precision of the data and of the
algorithm on the simulation of chaotic dynamics by neural network techniques.
For this purpose, we simulate the Lorenz system at different precisions using
three neural network techniques adapted to time series, namely reservoir
computing (using an ESN), LSTM and TCN, for both short- and long-time
predictions, and assess their efficiency and accuracy. Our results show that
the ESN network is better at accurately predicting the dynamics of the system,
and that in all cases the precision of the algorithm matters more for the
accuracy of the predictions than the precision of the training data. This
result supports the idea that neural networks can perform time-series
predictions in many practical applications for which data are necessarily of
limited precision, in line with recent results. It also suggests that, for a
given set of data, the reliability of the predictions can be significantly
improved by using a network with higher precision than that of the data.
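The data-generation side of the paper's precision comparison is easy to reproduce. Below is a minimal sketch, not the authors' code: it integrates the Lorenz system with a fourth-order Runge-Kutta scheme in float32 and float64 (step size, horizon and initial condition are illustrative choices) and prints how fast the two trajectories separate. Training an ESN, LSTM or TCN on such trajectories is the paper's next step and is omitted here.

```python
# Minimal sketch (not the authors' code): integrate the Lorenz system with an
# RK4 step in float32 and float64 and watch the trajectories diverge.
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z],
                    dtype=v.dtype)

def rk4_step(v, dt):
    k1 = lorenz(v)
    k2 = lorenz(v + 0.5 * dt * k1)
    k3 = lorenz(v + 0.5 * dt * k2)
    k4 = lorenz(v + dt * k3)
    return v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 2000
lo = np.array([1.0, 1.0, 1.0], dtype=np.float32)   # low-precision trajectory
hi = np.array([1.0, 1.0, 1.0], dtype=np.float64)   # high-precision reference
for step in range(1, steps + 1):
    lo, hi = rk4_step(lo, np.float32(dt)), rk4_step(hi, dt)
    if step % 500 == 0:
        # In a chaotic system, rounding error grows roughly like exp(lambda*t)
        # until it saturates at the attractor's size.
        print(f"t={step * dt:5.1f}  |x32 - x64| = {abs(float(lo[0]) - hi[0]):.3e}")
```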
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation (2024-10-24)
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigated whether the data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained using the augmented data.
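For context, a hedged sketch of the augmentation loop that summary describes, on a synthetic 2-D dataset; the architecture, latent size and training budget are illustrative assumptions, not the configuration used in the cited study.

```python
# Hedged sketch of VAE-based data augmentation (illustrative, not the study's setup).
import torch
import torch.nn as nn

torch.manual_seed(0)
real = torch.randn(512, 2) * torch.tensor([1.0, 0.3]) + torch.tensor([2.0, -1.0])

class VAE(nn.Module):
    def __init__(self, d=2, latent=2, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.Tanh(),
                                 nn.Linear(hidden, d))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-2)
for _ in range(500):
    recon, mu, logvar = vae(real)
    kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
    loss = nn.functional.mse_loss(recon, real) + 1e-3 * kl
    opt.zero_grad(); loss.backward(); opt.step()

# Augmentation step: decode draws from the latent prior and train the
# downstream DNN on the union of real and synthetic samples.
with torch.no_grad():
    synthetic = vae.dec(torch.randn(512, 2))
augmented = torch.cat([real, synthetic])
print(real.mean(0), synthetic.mean(0))  # sanity check: sample means should be close
```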
- Deep Neural Networks Tend To Extrapolate Predictably (2023-10-02)
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
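A small probe makes the observation concrete. The harness below is an assumption-laden toy, not the paper's experiment: it trains a little classifier, then measures how close its average prediction on increasingly shifted inputs gets to the marginal label distribution, the constant solution the paper reports predictions drifting towards.

```python
# Toy OOD probe: measure the gap between the mean prediction on shifted inputs
# and the marginal label distribution (the "optimal constant" prediction).
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(512, 2)
labels = (x[:, 0] + 0.3 * x[:, 1] > 0).long()        # 2-class toy problem

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):
    loss = nn.functional.cross_entropy(net(x), labels)
    opt.zero_grad(); loss.backward(); opt.step()

marginal = torch.bincount(labels).float() / len(labels)  # constant solution
with torch.no_grad():
    for shift in [0.0, 3.0, 10.0, 30.0]:
        ood = torch.randn(512, 2) + shift                 # drift off-distribution
        mean_pred = net(ood).softmax(dim=1).mean(dim=0)
        print(f"shift={shift:5.1f}  |mean prediction - marginal| = "
              f"{(mean_pred - marginal).abs().sum():.3f}")
```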
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
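The paper's own mixed-precision scheme for neural operators is more involved; as a generic illustration of the precision/memory trade-off it exploits, one can store weights in float16 and upcast only at compute time. The sizes below are arbitrary.

```python
# Generic precision/memory trade-off demo (not the paper's method): half-precision
# storage, single-precision accumulation.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 1024)).astype(np.float32)
w = rng.standard_normal((1024, 1024)).astype(np.float32)

w16 = w.astype(np.float16)              # half the storage...
y_ref = x @ w
y_mixed = x @ w16.astype(np.float32)    # ...upcast only at compute time
rel_err = np.abs(y_mixed - y_ref).max() / np.abs(y_ref).max()
print(f"memory: {w16.nbytes / w.nbytes:.0%} of float32, max rel. error: {rel_err:.2e}")
```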
arXiv Detail & Related papers (2023-07-27T17:42:06Z) - Deep Learning for Day Forecasts from Sparse Observations [60.041805328514876]
Deep neural networks offer an alternative paradigm for modeling weather conditions.
MetNet-3 learns from both dense and sparse data sensors and makes predictions up to 24 hours ahead for precipitation, wind, temperature and dew point.
MetNet-3 has high temporal and spatial resolution, up to 2 minutes and 1 km respectively, as well as low operational latency.
- Confidence-Nets: A Step Towards Better Prediction Intervals for Regression Neural Networks on Small Datasets (2022-10-31)
We propose an ensemble method that attempts to estimate the uncertainty of predictions, increase their accuracy and provide an interval for the expected variation.
The proposed method is tested on various datasets and yields a significant improvement in the performance of the neural network model.
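As a rough sketch of the general ensemble idea (not Confidence-Nets itself), one can train several small networks on bootstrap resamples and read an interval off the spread of their predictions; the toy dataset, ensemble size and the ±2σ rule below are illustrative assumptions, and sklearn stands in for whatever model the paper used.

```python
# Bootstrap-ensemble prediction intervals (generic sketch, not Confidence-Nets).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x).ravel() + rng.normal(scale=0.2, size=200)

preds = []
for seed in range(10):                       # 10 ensemble members
    idx = rng.integers(0, len(x), len(x))    # bootstrap resample
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed)
    net.fit(x[idx], y[idx])
    preds.append(net.predict(x))
preds = np.stack(preds)

mean, spread = preds.mean(axis=0), preds.std(axis=0)
lower, upper = mean - 2 * spread, mean + 2 * spread   # ~95% if errors were Gaussian
print(f"average interval width: {(upper - lower).mean():.3f}")
```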
- Scalable computation of prediction intervals for neural networks via matrix sketching (2022-05-06)
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
- Predictive coding, precision and natural gradients (2021-11-12)
We show that hierarchical predictive coding networks with learnable precision are able to solve various supervised and unsupervised learning tasks.
When applied to unsupervised auto-encoding of image inputs, the deterministic network produces hierarchically organized and disentangled embeddings.
- A computationally efficient neural network for predicting weather forecast probabilities (2021-03-26)
We take the novel approach of using a neural network to predict probability density functions rather than a single output value.
This enables the calculation of both uncertainty and skill metrics for the neural network predictions.
This approach is purely data-driven and the neural network is trained on the WeatherBench dataset.
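A minimal sketch of the predict-a-PDF idea: discretize the target into bins and let the network's last layer output a categorical distribution, from which uncertainty and threshold probabilities fall out directly. The bin layout, the stand-in softmax output and the frost threshold below are illustrative, not WeatherBench's.

```python
# Predicting a PDF instead of a point value via a discretized target.
import numpy as np

bins = np.linspace(-10.0, 40.0, 51)             # temperature bins in deg C
centers = 0.5 * (bins[:-1] + bins[1:])

# Stand-in for the softmax output of the network's last layer.
logits = -0.5 * ((centers - 12.0) / 4.0) ** 2
pdf = np.exp(logits) / np.exp(logits).sum()

mean = (pdf * centers).sum()
std = np.sqrt((pdf * (centers - mean) ** 2).sum())
p_frost = pdf[centers < 0.0].sum()              # an uncertainty-aware query
print(f"mean={mean:.1f}C  std={std:.1f}C  P(T<0C)={p_frost:.3f}")
```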
- Improving Uncertainty Calibration via Prior Augmented Data (2021-02-22)
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
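The simplest version of the entropy-raising step, generic rather than the paper's exact procedure, blends an overconfident prediction with the label prior; because the prior has higher entropy, the blend's entropy rises with the mixing weight.

```python
# Raising prediction entropy towards the label prior (generic illustration).
import numpy as np

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum()

prior = np.array([0.5, 0.3, 0.2])          # marginal label distribution
overconfident = np.array([0.98, 0.01, 0.01])

for lam in [0.0, 0.3, 0.7, 1.0]:           # lam = how unjustified the confidence is
    mixed = (1 - lam) * overconfident + lam * prior
    print(f"lam={lam:.1f}  entropy={entropy(mixed):.3f}")
```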
- The training accuracy of two-layer neural networks: its estimation and understanding using random datasets (2020-10-26)
We propose a novel theory based on space partitioning to estimate the approximate training accuracy for two-layer neural networks on random datasets without training.
Our method estimates the training accuracy for two-layer fully-connected neural networks on two-class random datasets using only three arguments.
- Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training (2020-03-25)
We study two factors in neural network training: data parallelism and sparsity.
Despite their promising benefits, understanding of their effects on neural network training remains elusive.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.