Learning Spatiotemporal Chaos Using Next-Generation Reservoir Computing
- URL: http://arxiv.org/abs/2203.13294v1
- Date: Thu, 24 Mar 2022 18:42:12 GMT
- Title: Learning Spatiotemporal Chaos Using Next-Generation Reservoir Computing
- Authors: Wendson A. S. Barbosa and Daniel J. Gauthier
- Abstract summary: We show that an ML architecture combined with a next-generation reservoir computer displays state-of-the-art performance on spatiotemporal chaos prediction, with a training time $10^3$-$10^4$ times faster than other ML algorithms.
We also take advantage of the translational symmetry of the model to further reduce the computational cost and training data, each by a factor of $\sim$10.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Forecasting the behavior of high-dimensional dynamical systems using machine
learning (ML) requires efficient methods to learn the underlying physical
model. We demonstrate spatiotemporal chaos prediction of a heuristic
atmospheric weather model using an ML architecture that, when combined with a
next-generation reservoir computer, displays state-of-the-art performance with
a training time $10^3-10^4$ times faster and training data set $\sim 10^2$
times smaller than other ML algorithms. We also take advantage of the
translational symmetry of the model to further reduce the computational cost
and training data, each by a factor of $\sim$10.
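For readers unfamiliar with the method, a "next-generation" reservoir computer replaces the usual large random recurrent network with a feature vector built from time-delayed states and their polynomial products, trained with a ridge-regressed linear readout. The sketch below illustrates only that idea; the toy Lorenz system, delay depth, quadratic library, and all parameter values are assumptions standing in for the paper's atmospheric weather model.

```python
# Minimal next-generation reservoir computing (NG-RC) sketch: quadratic
# features over k time-delayed states plus a ridge-regressed linear readout.
# The toy Lorenz system stands in for the paper's weather model.
import numpy as np

def lorenz_trajectory(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler to generate training data."""
    x = np.empty((n_steps, 3))
    x[0] = (1.0, 1.0, 1.0)
    for t in range(n_steps - 1):
        dx = np.array([sigma * (x[t, 1] - x[t, 0]),
                       x[t, 0] * (rho - x[t, 2]) - x[t, 1],
                       x[t, 0] * x[t, 1] - beta * x[t, 2]])
        x[t + 1] = x[t] + dt * dx
    return x

def ngrc_features(x, k=2):
    """Stack k delayed states, then append a constant and all quadratic monomials."""
    lin = np.hstack([x[i:len(x) - k + i + 1] for i in range(k)])
    iu = np.triu_indices(lin.shape[1])
    quad = np.einsum("ti,tj->tij", lin, lin)[:, iu[0], iu[1]]
    return np.hstack([np.ones((len(lin), 1)), lin, quad])

k, ridge = 2, 1e-6
data = lorenz_trajectory(5000)
feats = ngrc_features(data, k)[:-1]          # feature row t ends at step t+k-1
target = data[k:] - data[k - 1:-1]           # next one-step state increment
W = np.linalg.solve(feats.T @ feats + ridge * np.eye(feats.shape[1]),
                    feats.T @ target)        # ridge-regressed linear readout

# Forecast autonomously: feed each prediction back into the delay window.
window = data[-k:].copy()
for _ in range(500):
    phi = ngrc_features(window[-k:], k)[-1]
    window = np.vstack([window, window[-1] + phi @ W])
print("autonomous forecast steps:", len(window) - k)
```

Because training reduces to a single linear solve over a modest feature matrix, the orders-of-magnitude savings in training time and data quoted in the abstract are plausible relative to gradient-trained networks.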
Related papers
- Graph Convolutional Neural Networks as Surrogate Models for Climate Simulation [0.1884913108327873]
We leverage fully-connected neural networks (FCNNs) and graph convolutional neural networks (GCNNs) to enable rapid simulation and uncertainty quantification.
Our surrogate simulated 80 years in approximately 310 seconds on a single A100 GPU, compared to weeks for the ESM model; a toy sketch of the graph-convolution building block appears after this entry.
arXiv Detail & Related papers (2024-09-19T14:41:15Z)
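As a rough illustration of the graph-convolution building block behind a GCNN surrogate (not this paper's architecture), here is a single Kipf-Welling-style layer in NumPy; the toy ring graph, channel sizes, and ReLU activation are assumptions:

```python
# One graph-convolution layer: symmetrically normalized adjacency (with
# self-loops) times node features times a weight matrix, then ReLU.
import numpy as np

def gcn_layer(adj, features, weights):
    """H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))   # D^{-1/2}
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ features @ weights, 0.0)

# Toy mesh: 4 grid cells in a ring, 3 input channels, 8 hidden channels.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 3))
w = rng.normal(size=(3, 8))
print(gcn_layer(adj, x, w).shape)  # (4, 8)
```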
- Bridging the Sim-to-Real Gap with Bayesian Inference [53.61496586090384]
We present SIM-FSVGD for learning robot dynamics from data.
We use low-fidelity physical priors to regularize the training of neural network models.
We demonstrate the effectiveness of SIM-FSVGD in bridging the sim-to-real gap on a high-performance RC racecar system; a simplified sketch of the prior-regularization idea appears after this entry.
arXiv Detail & Related papers (2024-03-25T11:29:32Z)
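SIM-FSVGD proper uses functional-space Stein variational gradient descent, which is beyond a short sketch; the deliberately simplified stand-in below illustrates only the underlying idea of pulling a learned dynamics model toward a cheap low-fidelity physics prior via a plain penalty term. All names, the prior, and the linear model here are hypothetical:

```python
# Simplified stand-in for low-fidelity-prior regularization (not FSVGD):
# the prior enters only as a penalty on the model's predictions.
import numpy as np

def low_fidelity_sim(state, action):
    """Hypothetical cheap physics prior for one dynamics step."""
    return state + 0.1 * action

def regularized_loss(theta, states, actions, next_states, lam=0.5):
    """Data-fit term plus a penalty pulling predictions toward the prior."""
    pred = states @ theta[:2] + actions @ theta[2:]   # toy linear dynamics model
    prior = low_fidelity_sim(states, actions)
    data_term = np.mean((pred - next_states) ** 2)
    prior_term = np.mean((pred - prior) ** 2)
    return data_term + lam * prior_term

rng = np.random.default_rng(1)
s, a = rng.normal(size=(64, 2)), rng.normal(size=(64, 2))
s_next = low_fidelity_sim(s, a) + 0.01 * rng.normal(size=(64, 2))
theta = rng.normal(size=(4, 2))
print("regularized loss:", regularized_loss(theta, s, a, s_next))
```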
- A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data; a minimal random feature model is sketched after this entry.
arXiv Detail & Related papers (2024-02-02T01:41:38Z)
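Below is a minimal random feature model trained with full-batch gradient descent, in the spirit of such solvable models; the teacher, widths, and learning rate are illustrative assumptions, not the paper's analysis. Repeatedly reusing the same finite training set lets the train-test gap build up over steps:

```python
# Random feature model: fixed random ReLU features, linear readout trained
# by gradient descent on squared loss; only the readout weights w are learned.
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 50, 200, 100                      # input dim, features, samples
F = rng.normal(size=(p, d)) / np.sqrt(d)    # fixed random projection
teacher = rng.normal(size=d) / np.sqrt(d)

X = rng.normal(size=(n, d))
y = X @ teacher
X_test = rng.normal(size=(1000, d))
y_test = X_test @ teacher

phi = lambda Z: np.maximum(Z @ F.T, 0.0)    # fixed ReLU feature map
Phi, Phi_test = phi(X), phi(X_test)
w, lr = np.zeros(p), 0.01
for step in range(2001):
    resid = Phi @ w - y
    w -= lr * Phi.T @ resid / n             # gradient of 0.5 * mean squared error
    if step % 500 == 0:
        print(f"step {step}: train {np.mean(resid**2):.4f}, "
              f"test {np.mean((Phi_test @ w - y_test)**2):.4f}")
```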
- Differentiable Turbulence II [0.0]
We develop a framework for integrating deep learning models into a generic finite element numerical scheme for solving the Navier-Stokes equations.
We show that the learned closure can achieve accuracy comparable to traditional large eddy simulation run on a finer grid, amounting to an equivalent speedup of 10x.
arXiv Detail & Related papers (2023-07-25T14:27:49Z)
- In Situ Framework for Coupling Simulation and Machine Learning with Application to CFD [51.04126395480625]
Recent years have seen many successful applications of machine learning (ML) to facilitate fluid dynamic computations.
As simulations grow, generating new training datasets for traditional offline learning creates I/O and storage bottlenecks.
This work offers a solution by simplifying this coupling and enabling in situ training and inference on heterogeneous clusters.
arXiv Detail & Related papers (2023-06-22T14:07:54Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- $\beta$-Variational autoencoders and transformers for reduced-order modelling of fluid flows [0.3644907558168858]
Variational autoencoder (VAE) architectures have the potential to develop reduced-order models (ROMs) for chaotic fluid flows.
We propose a method for learning compact and near-orthogonal ROMs using a combination of a $\beta$-VAE and a transformer; a minimal $\beta$-VAE objective is sketched after this entry.
arXiv Detail & Related papers (2023-04-07T10:11:32Z)
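For concreteness, here is a minimal $\beta$-VAE objective with a Gaussian encoder and the reparameterization trick; the linear encoder/decoder and toy "snapshot" data are assumptions, and the transformer that models the latent dynamics in the paper is omitted:

```python
# Minimal beta-VAE objective: reconstruction error plus a beta-weighted
# KL divergence of the Gaussian encoder posterior from N(0, I).
import numpy as np

rng = np.random.default_rng(0)

def beta_vae_loss(x, enc_w, dec_w, beta=4.0):
    """Reconstruction + beta * KL(q(z|x) || N(0, I))."""
    mu = x @ enc_w["mu"]                     # encoder mean
    log_var = x @ enc_w["log_var"]           # encoder log-variance
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps     # reparameterization trick
    recon = z @ dec_w                        # linear decoder
    rec_err = np.mean(np.sum((x - recon) ** 2, axis=1))
    kl = 0.5 * np.mean(np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))
    return rec_err + beta * kl               # beta > 1 favors disentangled z

x = rng.normal(size=(32, 16))                # toy flow snapshots, 16 "sensors"
enc = {"mu": rng.normal(size=(16, 4)) * 0.1,
       "log_var": rng.normal(size=(16, 4)) * 0.1}
dec = rng.normal(size=(4, 16)) * 0.1
print("beta-VAE loss:", beta_vae_loss(x, enc, dec))
```

Setting $\beta > 1$ strengthens the KL term, which is what pushes the latent coordinates toward the compact, near-orthogonal ROM the entry describes.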
- Continual learning autoencoder training for a particle-in-cell simulation via streaming [52.77024349608834]
The upcoming exascale era will provide a new generation of high-resolution physics simulations. This resolution will impact the training of machine learning models, since storing such large volumes of simulation data on disk is nearly impossible.
This work presents an approach that trains a neural network concurrently with a running simulation, without writing data to disk; a minimal streaming-training sketch appears after this entry.
arXiv Detail & Related papers (2022-11-09T09:55:14Z)
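A minimal sketch of the streaming idea (not the paper's framework): batches are consumed from a generator as the "simulation" produces them and are never written to disk. The diffusion toy simulation and the per-sample SGD learner are assumptions standing in for the particle-in-cell code and autoencoder:

```python
# Train concurrently with a running simulation: each (state, next_state)
# pair is consumed as it is produced and immediately discarded.
import numpy as np

def running_simulation(n_steps, n_cells=64, seed=0):
    """Toy stand-in for a PIC code: yields pairs of consecutive states."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=n_cells)
    for _ in range(n_steps):
        u_next = u + 0.1 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))  # diffusion
        yield u.copy(), u_next.copy()
        u = u_next

# Online learner: a linear one-step propagator updated by SGD per sample.
n_cells, lr = 64, 1e-3
W = np.eye(n_cells)
for step, (u, u_next) in enumerate(running_simulation(2000)):
    err = W @ u - u_next
    W -= lr * np.outer(err, u)               # stream the update; nothing stored
    if step % 500 == 0:
        print(f"step {step}: sample MSE {np.mean(err**2):.2e}")
```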
- Learning unseen coexisting attractors [0.0]
Reservoir computing is a machine learning approach that can generate a surrogate model of a dynamical system.
Here, we study the challenging problem of learning a dynamical system that has both disparate time scales and multiple co-existing dynamical states (attractors).
We show that the next-generation reservoir computing approach uses $\sim 1.7\times$ less training data, requires $10^3\times$ shorter 'warm up' time, and has $\sim 100\times$ higher accuracy in predicting the co-existing attractor characteristics.
arXiv Detail & Related papers (2022-07-28T14:55:14Z)
- Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator [57.57321628587564]
We introduce Hybrid Graph Network Simulator (HGNS) for learning reservoir simulations of 3D subsurface fluid flows.
HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.
Using an industry-standard subsurface flow dataset (SPE-10) with 1.1 million cells, we demonstrate that HGNS is able to reduce the inference time up to 18 times compared to standard subsurface simulators.
arXiv Detail & Related papers (2022-06-15T17:29:57Z)
- A Taylor Based Sampling Scheme for Machine Learning in Computational Physics [0.0]
We take advantage of the ability to generate data using numerical simulation programs to better train machine learning models.
We elaborate a new data sampling scheme based on Taylor approximation to reduce the error of a deep neural network (DNN) when learning the solution of an ordinary differential equation (ODE) system; a hedged sketch of Taylor-weighted sampling appears after this entry.
arXiv Detail & Related papers (2021-01-20T12:56:09Z)
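As one plausible reading of the idea (not the paper's exact scheme), the sketch below draws training inputs with probability proportional to the magnitude of a second-order Taylor term of the ODE solution, concentrating data where the solution curves sharply; the logistic ODE and the weighting rule are assumptions:

```python
# Taylor-weighted sampling sketch: for x' = f(x), the second Taylor term is
# x'' = f'(x) f(x); sample training inputs where its magnitude is large.
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: x * (1.0 - x)                 # logistic right-hand side
fprime = lambda x: 1.0 - 2.0 * x
x_dd = lambda x: fprime(x) * f(x)           # x'' = f'(x) f(x)

candidates = rng.uniform(0.0, 1.0, size=10000)
weights = np.abs(x_dd(candidates)) + 1e-12  # avoid an all-zero distribution
weights /= weights.sum()

# Training inputs concentrate where the second-order term is large.
train_x = rng.choice(candidates, size=512, replace=False, p=weights)
print("mean |x''|, uniform:        ", np.abs(x_dd(candidates)).mean().round(4))
print("mean |x''|, Taylor-weighted:", np.abs(x_dd(train_x)).mean().round(4))
```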
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.