Neural Closure Models for Dynamical Systems
- URL: http://arxiv.org/abs/2012.13869v1
- Date: Sun, 27 Dec 2020 05:55:33 GMT
- Title: Neural Closure Models for Dynamical Systems
- Authors: Abhinav Gupta and Pierre F.J. Lermusiaux
- Abstract summary: We develop a novel methodology to learn non-Markovian closure parameterizations for low-fidelity models.
The new "neural closure models" augment low-fidelity models with neural delay differential equations (nDDEs).
We show that using non-Markovian over Markovian closures improves long-term accuracy and requires smaller networks.
- Score: 35.000303827255024
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Complex dynamical systems are used for predictions in many applications.
Because of computational costs, however, models are often truncated, coarsened,
or aggregated. As the neglected and unresolved terms, along with their
interactions with the resolved ones, become important, the usefulness of model
predictions diminishes. We develop a novel, versatile, and rigorous methodology
to learn non-Markovian closure parameterizations for low-fidelity models using
data from high-fidelity simulations. The new "neural closure models" augment
low-fidelity models with neural delay differential equations (nDDEs), motivated
by the Mori-Zwanzig formulation and the inherent delays in natural dynamical
systems. We demonstrate that neural closures efficiently account for truncated
modes in reduced-order-models, capture the effects of subgrid-scale processes
in coarse models, and augment the simplification of complex biochemical models.
We show that using non-Markovian over Markovian closures improves long-term
accuracy and requires smaller networks. We provide adjoint equation derivations
and network architectures needed to efficiently implement the new discrete and
distributed nDDEs. The performance of discrete over distributed delays in
closure models is explained using information theory, and we observe an optimal
amount of past information for a specified architecture. Finally, we analyze
computational complexity and explain the limited additional cost due to neural
closure models.
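A minimal sketch of the discrete-delay idea described above: a low-fidelity model is augmented by a small neural network that also sees the delayed state, giving the nDDE dx/dt = f_low(x(t)) + NN(x(t), x(t - tau)). Everything here is illustrative and hypothetical: the linear-decay f_low, the forward-Euler stepping, and the random (untrained) weights stand in for the paper's trained closures and adjoint-based optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-fidelity model: simple linear decay, standing in for a
# truncated reduced-order model.
def f_low(x):
    return -0.5 * x

# Tiny one-hidden-layer closure network mapping (x(t), x(t - tau)) to a
# correction term. Weights are random here; in the paper's framework they
# would be trained against high-fidelity data via the adjoint of the nDDE.
W1 = rng.normal(scale=0.1, size=(8, 2))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(1, 8))

def closure(x_now, x_delayed):
    h = np.tanh(W1 @ np.array([x_now, x_delayed]) + b1)
    return float(W2 @ h)

def integrate(x0, tau=0.2, dt=0.01, T=1.0):
    """Forward-Euler integration of the discrete-delay nDDE
    dx/dt = f_low(x(t)) + NN(x(t), x(t - tau)),
    with constant history x(t) = x0 for t <= 0."""
    n_delay = int(round(tau / dt))
    xs = [x0] * (n_delay + 1)            # history buffer for t in [-tau, 0]
    n_steps = int(round(T / dt))
    for _ in range(n_steps):
        x_now, x_delayed = xs[-1], xs[-1 - n_delay]
        xs.append(x_now + dt * (f_low(x_now) + closure(x_now, x_delayed)))
    return xs[n_delay:]                  # trajectory for t in [0, T]

traj = integrate(1.0)
```

A distributed-delay closure would instead feed the network an integral (in practice, a weighted sum) over the recent history buffer rather than a single delayed sample.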
Related papers
- Data-Driven Stochastic Closure Modeling via Conditional Diffusion Model and Neural Operator [0.0]
Closure models are widely used in simulating complex multiscale dynamical systems such as turbulence and the earth system.
For systems without a clear scale separation, deterministic and local closure models often lack sufficient generalization capability.
We propose a data-driven modeling framework for constructing neural-operator-based, non-local closure models.
arXiv Detail & Related papers (2024-08-06T05:21:31Z) - CGNSDE: Conditional Gaussian Neural Stochastic Differential Equation for Modeling Complex Systems and Data Assimilation [1.4322470793889193]
A new knowledge-based and machine learning hybrid modeling approach, called the conditional Gaussian neural stochastic differential equation (CGNSDE), is developed.
In contrast to the standard neural network predictive models, the CGNSDE is designed to effectively tackle both forward prediction tasks and inverse state estimation problems.
arXiv Detail & Related papers (2024-04-10T05:32:03Z) - Multi-fidelity reduced-order surrogate modeling [5.346062841242067]
We present a new data-driven strategy that combines dimensionality reduction with multi-fidelity neural network surrogates.
We show that the onset of instabilities and transients are well captured by this surrogate technique.
arXiv Detail & Related papers (2023-09-01T08:16:53Z) - Generalized Neural Closure Models with Interpretability [28.269731698116257]
We develop a novel and versatile methodology of unified neural partial delay differential equations.
We augment existing/low-fidelity dynamical models directly in their partial differential equation (PDE) forms with both Markovian and non-Markovian neural network (NN) closure parameterizations.
We demonstrate the new generalized neural closure models (gnCMs) framework using four sets of experiments based on advecting nonlinear waves, shocks, and ocean acidification models.
arXiv Detail & Related papers (2023-01-15T21:57:43Z) - On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z) - An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters compared to the original network, without loss of accuracy.
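The 98% figure above corresponds to extreme weight sparsity. As a generic illustration (not the paper's exact procedure), here is one-shot magnitude pruning of the parameters of a small neural-ODE vector field f_theta(x) = W2 tanh(W1 x); the shapes and sparsity level are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
# Dense parameters of a hypothetical small neural-ODE vector field.
W1 = rng.normal(size=(32, 4))
W2 = rng.normal(size=(4, 32))

def magnitude_prune(W, sparsity):
    """Zero out the fraction `sparsity` of entries with smallest magnitude."""
    flat = np.abs(W).ravel()
    k = int(sparsity * flat.size)        # number of entries to remove
    if k == 0:
        return W.copy()
    thresh = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(W) <= thresh, 0.0, W)

W1_sparse = magnitude_prune(W1, 0.98)    # keep ~2% of the weights
kept = np.count_nonzero(W1_sparse) / W1.size
```

In practice such pruning is done iteratively with retraining between rounds, so that the surviving weights can compensate for the removed ones.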
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions, as well as state-of-the-art numerical solvers such as spectral solvers.
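The PINN idea mentioned above can be sketched in a few lines: a network approximates the solution, and the loss penalizes the PDE residual at sampled collocation points plus the boundary/initial conditions. This toy version (not the GatedPINN architecture) uses a scalar ODE u' + u = 0, untrained random weights, and central finite differences in place of the automatic differentiation a real PINN would use.

```python
import numpy as np

rng = np.random.default_rng(1)
# Tiny mesh-free surrogate u_theta(t): one hidden tanh layer.
W1 = rng.normal(scale=0.5, size=(16, 1))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(1, 16))

def u(t):
    return float(W2 @ np.tanh(W1 @ np.array([t]) + b1))

def pinn_loss(ts, eps=1e-4):
    """Physics-informed loss for the toy ODE u' + u = 0 with u(0) = 1.
    The derivative is approximated by a central difference here; a real
    PINN would differentiate the network exactly via autodiff."""
    residuals = [(u(t + eps) - u(t - eps)) / (2 * eps) + u(t) for t in ts]
    return float(np.mean(np.square(residuals)) + (u(0.0) - 1.0) ** 2)

loss = pinn_loss(np.linspace(0.0, 1.0, 20))
```

Training would minimize this loss over the network weights, driving the surrogate toward the analytical solution u(t) = exp(-t) without ever discretizing the domain into a mesh.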
arXiv Detail & Related papers (2020-09-08T13:26:51Z) - Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine learning inspired models and physics-based models.
We are using such models for real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.