Temporal Subsampling Diminishes Small Spatial Scales in Recurrent Neural
Network Emulators of Geophysical Turbulence
- URL: http://arxiv.org/abs/2305.00100v2
- Date: Thu, 21 Sep 2023 19:03:12 GMT
- Authors: Timothy A. Smith, Stephen G. Penny, Jason A. Platt, Tse-Chun Chen
- Abstract summary: We investigate how an often overlooked processing step affects the quality of an emulator's predictions.
We implement two ML architectures from a class of methods called reservoir computing: (1) a form of Nonlinear Vector Autoregression (NVAR), and (2) an Echo State Network (ESN).
In all cases, subsampling the training data consistently leads to an increased bias at small scales that resembles numerical diffusion.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The immense computational cost of traditional numerical weather and climate
models has sparked the development of machine learning (ML) based emulators.
Because ML methods benefit from long records of training data, it is common to
use datasets that are temporally subsampled relative to the time steps required
for the numerical integration of differential equations. Here, we investigate
how this often overlooked processing step affects the quality of an emulator's
predictions. We implement two ML architectures from a class of methods called
reservoir computing: (1) a form of Nonlinear Vector Autoregression (NVAR), and
(2) an Echo State Network (ESN). Despite their simplicity, it is well
documented that these architectures excel at predicting low dimensional chaotic
dynamics. We are therefore motivated to test these architectures in an
idealized setting of predicting high dimensional geophysical turbulence as
represented by Surface Quasi-Geostrophic dynamics. In all cases, subsampling
the training data consistently leads to an increased bias at small spatial
scales that resembles numerical diffusion. Interestingly, the NVAR architecture
becomes unstable when the temporal resolution is increased, indicating that the
polynomial based interactions are insufficient at capturing the detailed
nonlinearities of the turbulent flow. The ESN architecture is found to be more
robust, suggesting a benefit to the more expensive but more general structure.
Spectral errors are reduced by including a penalty on the kinetic energy
density spectrum during training, although the subsampling related errors
persist. Future work is warranted to understand how the temporal resolution of
training data affects other ML architectures.
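The reservoir-computing setup described in the abstract can be illustrated with a minimal Echo State Network sketch. This is a hypothetical NumPy toy on a one-dimensional signal, not the authors' code: a fixed random reservoir is driven by the input record, only the linear readout is trained (ridge regression), and the `data[::k]` line mimics the temporal subsampling step the paper studies. The reservoir size, spectral radius, and regularization values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_esn(n_in, n_res=300, rho=0.9, sigma=0.5):
    """Random input and reservoir matrices; W rescaled to spectral radius rho."""
    W_in = sigma * (rng.random((n_res, n_in)) * 2.0 - 1.0)
    W = rng.random((n_res, n_res)) * 2.0 - 1.0
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, u):
    """Drive the reservoir with inputs u[t]: r_{t+1} = tanh(W r_t + W_in u_t)."""
    r = np.zeros(W.shape[0])
    states = np.empty((len(u), W.shape[0]))
    for t, ut in enumerate(u):
        r = np.tanh(W @ r + W_in @ ut)
        states[t] = r
    return states

def train_readout(states, targets, beta=1e-6):
    """Ridge regression for the linear readout: targets ~ states @ W_out."""
    A = states.T @ states + beta * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ targets)

# Temporal subsampling of the training record, the step studied in the paper:
# keeping only every k-th snapshot of the finely resolved model output.
dt_model = 0.01
k = 5                                                # subsampling factor
data = np.sin(np.arange(0.0, 60.0, dt_model))[:, None]  # toy 1-D signal
train = data[::k]                                    # subsampled record

W_in, W = make_esn(n_in=1)
states = run_reservoir(W_in, W, train[:-1])
W_out = train_readout(states, train[1:])             # one-step-ahead targets
pred = states @ W_out
```

Increasing `k` widens the effective time step the readout must bridge; in the paper's setting this manifests as a small-scale bias resembling numerical diffusion.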
Related papers
- Data Scoping: Effectively Learning the Evolution of Generic Transport PDEs [0.0]
Transport phenomena are governed by time-dependent partial differential equations (PDEs) describing mass, momentum, and energy conservation.
Deep learning architectures are fundamentally incompatible with the simulation of these PDEs.
This paper proposes a distributed data scoping method with linear time complexity to limit the scope of information to predict the local properties.
arXiv Detail & Related papers (2024-05-02T14:24:56Z)
- Reduced-order modeling of unsteady fluid flow using neural network ensembles [0.0]
We propose using bagging, a commonly used ensemble learning technique, to develop a fully data-driven reduced-order model framework.
The framework uses CAEs for spatial reconstruction of the full-order model and LSTM ensembles for time-series prediction.
Results show that the presented framework effectively reduces error propagation and leads to more accurate time-series prediction of latent variables at unseen points.
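The bagging idea summarized above can be sketched in a few lines. This is a hypothetical toy in which ridge regressors stand in for that paper's CAE/LSTM components: each ensemble member is fit on a bootstrap resample of the training pairs, and the members' forecasts are averaged.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_ridge(X, y, beta=1e-3):
    """Closed-form ridge regression weights."""
    A = X.T @ X + beta * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

# Toy scalar series: predict x[t+3] from the previous 3 values.
series = np.cos(np.linspace(0.0, 20.0, 500)) + 0.02 * rng.standard_normal(500)
X = np.stack([series[i:i + 3] for i in range(len(series) - 3)])
y = series[3:]

n_members = 10
preds = []
for _ in range(n_members):
    idx = rng.integers(0, len(X), len(X))   # bootstrap resample with replacement
    w = fit_ridge(X[idx], y[idx])
    preds.append(X @ w)

ensemble = np.mean(preds, axis=0)           # bagged (averaged) prediction
mse = np.mean((ensemble - y) ** 2)
```

Averaging over bootstrap-trained members reduces the variance contributed by any single member's fit, which is the error-propagation benefit that summary reports.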
arXiv Detail & Related papers (2024-02-08T03:02:59Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade-off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
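The POD reduction mentioned in that summary is conventionally computed with a thin SVD of a snapshot matrix. The sketch below is a hypothetical NumPy example, not that paper's code: a toy space-time field is compressed to a few spatial modes plus their temporal coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshot matrix: each column is the field at one time instant.
n_space, n_time = 2000, 400
t = np.linspace(0.0, 2.0 * np.pi, n_time)
x = np.linspace(0.0, 1.0, n_space)[:, None]

# Toy field built from three coherent structures plus small noise.
snapshots = (np.sin(2 * np.pi * x) * np.cos(t)
             + 0.5 * np.sin(4 * np.pi * x) * np.cos(3 * t)
             + 0.1 * np.sin(6 * np.pi * x) * np.sin(5 * t)
             + 0.01 * rng.standard_normal((n_space, n_time)))

# Thin SVD: columns of U are the POD modes; s[:r, None] * Vt[:r] gives the
# temporal coefficients of the retained modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

r = 3                                     # retain a few dominant modes
coeffs = s[:r, None] * Vt[:r]             # r temporal coefficient series
reconstruction = U[:, :r] @ coeffs

rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
```

Here the thousands of spatial degrees of freedom collapse to `r` coefficient series, which is the reduction that summary describes; a learned model then advances only those coefficients.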
arXiv Detail & Related papers (2023-01-24T08:39:20Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Deep Surrogate for Direct Time Fluid Dynamics [44.62475518267084]
Graph Neural Networks (GNN) can address the specificity of the irregular meshes commonly used in CFD simulations.
We present our ongoing work to design a novel direct time GNN architecture for irregular meshes.
arXiv Detail & Related papers (2021-12-16T10:08:20Z)
- Emulating Spatio-Temporal Realizations of Three-Dimensional Isotropic Turbulence via Deep Sequence Learning Models [24.025975236316842]
We use a data-driven approach to model a three-dimensional turbulent flow using cutting-edge Deep Learning techniques.
The accuracy of the model is assessed using statistical and physics-based metrics.
arXiv Detail & Related papers (2021-12-07T03:33:39Z)
- Echo State Network for two-dimensional turbulent moist Rayleigh-Bénard convection [0.0]
We apply an echo state network to approximate the evolution of moist Rayleigh-Bénard convection.
We conclude that our model is capable of learning complex dynamics.
arXiv Detail & Related papers (2021-01-27T11:27:16Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while utilizing substantially less trainable parameters when compared to comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Spatio-Temporal Graph Scattering Transform [54.52797775999124]
Graph neural networks may be impractical in some real-world scenarios due to a lack of sufficient high-quality training data.
We put forth a novel mathematically designed framework to analyze spatio-temporal data.
arXiv Detail & Related papers (2020-12-06T19:49:55Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance across a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.