Model-assisted deep learning of rare extreme events from partial
observations
- URL: http://arxiv.org/abs/2111.04857v1
- Date: Thu, 4 Nov 2021 23:24:22 GMT
- Title: Model-assisted deep learning of rare extreme events from partial
observations
- Authors: Anna Asch and Ethan Brady and Hugo Gallardo and John Hood and Bryan
Chu and Mohammad Farazmand
- Abstract summary: To predict rare extreme events using deep neural networks, one encounters the so-called small data problem.
Here, we investigate a model-assisted framework where the training data is obtained from numerical simulations.
We find that long short-term memory networks are the most robust to noise and yield relatively accurate predictions.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: To predict rare extreme events using deep neural networks, one encounters the
so-called small data problem because even long-term observations often contain
few extreme events. Here, we investigate a model-assisted framework where the
training data is obtained from numerical simulations, as opposed to
observations, with adequate samples from extreme events. However, to ensure the
trained networks are applicable in practice, the training is not performed on
the full simulation data; instead we only use a small subset of observable
quantities which can be measured in practice. We investigate the feasibility of
this model-assisted framework on three different dynamical systems (Rössler
attractor, FitzHugh--Nagumo model, and a turbulent fluid flow) and three
different deep neural network architectures (feedforward, long short-term
memory, and reservoir computing). In each case, we study the prediction
accuracy, robustness to noise, reproducibility under repeated training, and
sensitivity to the type of input data. In particular, we find long short-term
memory networks to be most robust to noise and to yield relatively accurate
predictions, while requiring minimal fine-tuning of the hyperparameters.
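The pipeline the abstract describes (simulate the full system, expose only a small set of observable quantities, and train a network to predict ahead) can be sketched with one of the three architectures studied, reservoir computing, using only NumPy. This is a minimal illustration, not the authors' implementation: the Rössler parameters are the standard ones, but the Euler integrator, reservoir size, spectral radius, ridge penalty, and 10-step prediction horizon are illustrative assumptions.

```python
import numpy as np

def rossler_trajectory(n_steps, dt=0.01, a=0.2, b=0.2, c=5.7):
    """Integrate the Rossler system with forward Euler (the 'simulation data')."""
    x = np.array([1.0, 1.0, 1.0])
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        dx = np.array([-x[1] - x[2],
                       x[0] + a * x[1],
                       b + x[2] * (x[0] - c)])
        x = x + dt * dx
        traj[i] = x
    return traj

rng = np.random.default_rng(0)
traj = rossler_trajectory(5000)
obs = traj[:, 0]  # train only on one observable, not the full state

# Reservoir computer (echo state network): fixed random reservoir,
# only the linear readout is trained.
n_res = 200
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

states = np.zeros((len(obs), n_res))
h = np.zeros(n_res)
for t in range(len(obs) - 1):
    h = np.tanh(W_in[:, 0] * obs[t] + W @ h)
    states[t + 1] = h

# Ridge-regress the readout to predict the observable `horizon` steps ahead,
# discarding the first 200 steps as reservoir warm-up.
horizon = 10
X, y = states[200:-horizon], obs[200 + horizon:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

Swapping the readout for an LSTM or feedforward network, as the paper does, changes only the regression step; the key point of the framework is that the training inputs are restricted to quantities measurable in practice.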
Related papers
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find solutions accessible via our training procedure, including the optimizer and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- A Neural-Network-Based Approach for Loose-Fitting Clothing [2.910739621411222]
We show how to approximate dynamic modes in loose-fitting clothing using a real-time numerical algorithm.
We also use skinning to reconstruct a rough approximation to a desirable mesh.
In contrast to recurrent neural networks that require a plethora of training data, QNNs perform well with significantly less training data.
arXiv Detail & Related papers (2024-04-25T05:52:20Z)
- Deep Learning for Day Forecasts from Sparse Observations [60.041805328514876]
Deep neural networks offer an alternative paradigm for modeling weather conditions.
MetNet-3 learns from both dense and sparse data sensors and makes predictions up to 24 hours ahead for precipitation, wind, temperature and dew point.
MetNet-3 has high temporal and spatial resolution, up to 2 minutes and 1 km respectively, as well as low operational latency.
arXiv Detail & Related papers (2023-06-06T07:07:54Z)
- Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- Reconstructing Training Data from Model Gradient, Provably [68.21082086264555]
We reconstruct the training samples from a single gradient query at a randomly chosen parameter value.
As a provable attack that reveals sensitive training data, our findings suggest potential severe threats to privacy.
arXiv Detail & Related papers (2022-12-07T15:32:22Z)
- DeepBayes -- an estimator for parameter estimation in stochastic
nonlinear dynamical models [11.917949887615567]
We propose DeepBayes estimators that leverage the power of deep recurrent neural networks in learning an estimator.
The deep recurrent neural network architectures can be trained offline and ensure significant time savings during inference.
We demonstrate the applicability of our proposed method on different example models and perform detailed comparisons with state-of-the-art approaches.
arXiv Detail & Related papers (2022-05-04T18:12:17Z)
- An advanced spatio-temporal convolutional recurrent neural network for
storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Multivariate Anomaly Detection based on Prediction Intervals Constructed
using Deep Learning [0.0]
We benchmark our approach against well-established statistical models that are often preferred in practice.
We focus on three deep learning architectures, namely, cascaded neural networks, reservoir computing and long short-term memory recurrent neural networks.
arXiv Detail & Related papers (2021-10-07T12:34:31Z)
- Model-free prediction of emergence of extreme events in a parametrically
driven nonlinear dynamical system by Deep Learning [0.0]
We predict the emergence of extreme events in a parametrically driven nonlinear dynamical system.
We use three Deep Learning models, namely Multi-Layer Perceptron, Convolutional Neural Network and Long Short-Term Memory.
We find that the Long Short-Term Memory model can serve as the best model to forecast the chaotic time series.
arXiv Detail & Related papers (2021-07-14T14:48:57Z)
- Improved Predictive Deep Temporal Neural Networks with Trend Filtering [22.352437268596674]
We propose a new prediction framework based on deep neural networks and trend filtering.
We show that the predictive performance of deep temporal neural networks improves when the training data is temporally processed by trend filtering.
arXiv Detail & Related papers (2020-10-16T08:29:36Z)
- A Multi-Channel Neural Graphical Event Model with Negative Evidence [76.51278722190607]
Event datasets are sequences of events of various types occurring irregularly over the time-line.
We propose a non-parametric deep neural network approach in order to estimate the underlying intensity functions.
arXiv Detail & Related papers (2020-02-21T23:10:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.