On the use of recurrent neural networks for predictions of turbulent
flows
- URL: http://arxiv.org/abs/2002.01222v1
- Date: Tue, 4 Feb 2020 11:01:43 GMT
- Title: On the use of recurrent neural networks for predictions of turbulent
flows
- Authors: Luca Guastoni, Prem A. Srinivasan, Hossein Azizpour, Philipp Schlatter
and Ricardo Vinuesa
- Abstract summary: It is possible to obtain excellent predictions of the turbulence statistics with properly trained long short-term memory networks.
More sophisticated loss functions, including not only the instantaneous predictions but also the averaged behavior of the flow, may lead to much faster neural network training.
- Score: 1.95992742032823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, the prediction capabilities of recurrent neural networks are
assessed in the low-order model of near-wall turbulence by Moehlis {\it et al.}
(New J. Phys. {\bf 6}, 56, 2004). Our results show that it is possible to
obtain excellent predictions of the turbulence statistics and the dynamic
behavior of the flow with properly trained long short-term memory (LSTM)
networks, leading to relative errors in the mean and the fluctuations below
$1\%$. We also observe that using a loss function based only on the
instantaneous predictions of the flow may not lead to the best predictions in
terms of turbulence statistics, and it is necessary to define a stopping
criterion based on the computed statistics. Furthermore, more sophisticated
loss functions, including not only the instantaneous predictions but also the
averaged behavior of the flow, may lead to much faster neural network training.
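The abstract's suggestion of a loss function that combines instantaneous predictions with the averaged behavior of the flow can be illustrated with a minimal sketch. The function below is an assumed formulation, not the paper's exact loss: it adds a penalty on the mismatch of the time-averaged mean and fluctuation level of each mode (e.g. the nine mode amplitudes of the Moehlis et al. model) to a standard instantaneous mean-squared error; the weight `alpha` is a hypothetical hyperparameter.

```python
import numpy as np

def combined_loss(pred, target, alpha=0.5):
    """Sketch of a statistics-aware loss: instantaneous MSE plus a
    penalty on mismatched turbulence statistics over the window.

    pred, target: arrays of shape (time, modes).
    alpha: assumed weight of the statistics term.
    """
    # Instantaneous term: mean-squared error on each predicted sample.
    inst = np.mean((pred - target) ** 2)
    # Statistics term: match the time-averaged mean and the
    # fluctuation level (standard deviation) of each mode.
    mean_err = np.mean((pred.mean(axis=0) - target.mean(axis=0)) ** 2)
    fluc_err = np.mean((pred.std(axis=0) - target.std(axis=0)) ** 2)
    return inst + alpha * (mean_err + fluc_err)

rng = np.random.default_rng(0)
target = rng.standard_normal((1000, 9))
print(combined_loss(target, target))           # perfect prediction
print(combined_loss(target + 0.1, target))     # biased prediction
```

In this formulation a prediction can score well on the instantaneous term while drifting in its statistics; the statistics term penalizes exactly that failure mode, which is the behavior the abstract attributes to purely instantaneous losses.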
Related papers
- Physics-guided Active Sample Reweighting for Urban Flow Prediction [75.24539704456791]
Urban flow prediction is a spatio-temporal modeling task that estimates the throughput of transportation services such as buses, taxis, and ride-hailing vehicles.
Some recent prediction solutions bring remedies with the notion of physics-guided machine learning (PGML).
We develop a physics-guided network (PN) and propose a data-aware framework, Physics-guided Active Sample Reweighting (P-GASR).
arXiv Detail & Related papers (2024-07-18T15:44:23Z)
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Towards Long-Term predictions of Turbulence using Neural Operators [68.8204255655161]
This work aims to develop reduced-order/surrogate models for turbulent flow simulations using machine learning.
Different model structures are analyzed, with U-NET structures performing better than the standard FNO in accuracy and stability.
arXiv Detail & Related papers (2023-07-25T14:09:53Z)
- Learning from Predictions: Fusing Training and Autoregressive Inference for Long-Term Spatiotemporal Forecasts [4.068387278512612]
We propose the Scheduled Autoregressive BPTT (BPTT-SA) algorithm for predicting complex systems.
Our results show that BPTT-SA effectively reduces iterative error propagation in Convolutional RNNs and Convolutional Autoencoder RNNs.
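The idea of fusing training with autoregressive inference can be sketched with a scheduled-sampling-style rollout: at each step the model is fed either the ground truth (teacher forcing) or its own previous prediction, so that training conditions resemble autoregressive deployment. The helper below is a toy illustration under that assumption, not the BPTT-SA algorithm itself; `step_fn` and the fixed probability `p_own` are hypothetical stand-ins for the trained model and its sampling schedule.

```python
import numpy as np

def scheduled_rollout(step_fn, x0, targets, p_own, rng):
    """Roll a one-step model forward over a target sequence.

    At each step, feed the model its own previous prediction with
    probability p_own; otherwise feed the ground-truth sample
    (teacher forcing). p_own = 0 recovers pure teacher forcing,
    p_own = 1 recovers fully autoregressive inference.
    """
    x = x0
    preds = []
    for truth in targets:
        pred = step_fn(x)
        preds.append(pred)
        # Scheduled exposure to the model's own outputs reduces the
        # train/inference mismatch that causes iterative error growth.
        x = pred if rng.random() < p_own else truth
    return np.array(preds)

rng = np.random.default_rng(1)
targets = np.arange(1.0, 6.0)
# Toy "model": a damped one-step map standing in for a trained RNN.
print(scheduled_rollout(lambda x: 0.9 * x, 1.0, targets, 0.5, rng))
```

In practice the probability of feeding back the model's own prediction is typically increased over the course of training rather than held fixed, which is what "scheduled" refers to.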
arXiv Detail & Related papers (2023-02-22T02:46:54Z)
- Probabilistic AutoRegressive Neural Networks for Accurate Long-range Forecasting [6.295157260756792]
We introduce the Probabilistic AutoRegressive Neural Networks (PARNN)
PARNN is capable of handling complex time series data exhibiting non-stationarity, nonlinearity, non-seasonality, long-range dependence, and chaotic patterns.
We evaluate the performance of PARNN against standard statistical, machine learning, and deep learning models, including Transformers, NBeats, and DeepAR.
arXiv Detail & Related papers (2022-04-01T17:57:36Z)
- Uncertainty-Aware Time-to-Event Prediction using Deep Kernel Accelerated Failure Time Models [11.171712535005357]
We propose Deep Kernel Accelerated Failure Time models for the time-to-event prediction task.
Our model shows better point estimate performance than recurrent neural network based baselines in experiments on two real-world datasets.
arXiv Detail & Related papers (2021-07-26T14:55:02Z)
- Interpretable Social Anchors for Human Trajectory Forecasting in Crowds [84.20437268671733]
We propose a neural network-based system to predict human trajectory in crowds.
We learn interpretable rule-based intents, and then utilise the expressibility of neural networks to model scene-specific residual.
Our architecture is tested on the interaction-centric benchmark TrajNet++.
arXiv Detail & Related papers (2021-05-07T09:22:34Z)
- Uncertainty Intervals for Graph-based Spatio-Temporal Traffic Prediction [0.0]
We propose a Spatio-Temporal neural network that is trained to estimate a density given the measurements of previous timesteps, conditioned on a quantile.
Our method of density estimation is fully parameterised by our neural network and does not use a likelihood approximation internally.
This approach produces uncertainty estimates without the need to sample during inference, such as in Monte Carlo Dropout.
arXiv Detail & Related papers (2020-12-09T18:02:26Z)
- Adversarial Refinement Network for Human Motion Prediction [61.50462663314644]
Two popular methods, recurrent neural networks and feed-forward deep networks, are able to predict rough motion trend.
We propose an Adversarial Refinement Network (ARNet) following a simple yet effective coarse-to-fine mechanism with novel adversarial error augmentation.
arXiv Detail & Related papers (2020-11-23T05:42:20Z)
- Recurrent neural networks and Koopman-based frameworks for temporal predictions in a low-order model of turbulence [1.95992742032823]
We show that it is possible to obtain excellent reproductions of the long-term statistics of a chaotic system with properly trained long short-term memory (LSTM) networks.
A Koopman-based framework, called Koopman with nonlinear forcing (KNF), leads to the same level of accuracy in the statistics at a significantly lower computational expense.
arXiv Detail & Related papers (2020-05-01T11:05:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.