Learning Subgrid-scale Models with Neural Ordinary Differential Equations
- URL: http://arxiv.org/abs/2212.09967v3
- Date: Wed, 12 Apr 2023 19:25:43 GMT
- Title: Learning Subgrid-scale Models with Neural Ordinary Differential Equations
- Authors: Shinhoo Kang, Emil M. Constantinescu
- Abstract summary: We propose a new approach to learning the subgrid-scale model when simulating partial differential equations (PDEs).
In this approach, neural networks are used to learn the coarse- to fine-grid map, which can be viewed as a subgrid-scale parameterization.
Our method inherits the advantages of NODEs and can be used to parameterize subgrid scales, approximate coupling operators, and improve the efficiency of low-order solvers.
- Score: 0.39160947065896795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new approach to learning the subgrid-scale model when simulating
partial differential equations (PDEs) solved by the method of lines and their
representation in chaotic ordinary differential equations, based on neural
ordinary differential equations (NODEs). Solving systems with fine temporal and
spatial grid scales is an ongoing computational challenge, and closure models
are generally difficult to tune. Machine learning approaches have increased the
accuracy and efficiency of computational fluid dynamics solvers. In this
approach, neural networks are used to learn the coarse- to fine-grid map, which
can be viewed as a subgrid-scale parameterization. We propose a strategy that
uses the NODE and partial knowledge to learn the source dynamics at a
continuous level. Our method inherits the advantages of NODEs and can be used
to parameterize subgrid scales, approximate coupling operators, and improve the
efficiency of low-order solvers. Numerical results with the two-scale Lorenz 96
ODE, the convection-diffusion PDE, and the viscous Burgers' PDE are used to
illustrate this approach.
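As an illustration of the coupling structure the abstract describes, here is a minimal NumPy sketch of the two-scale Lorenz 96 system together with a coarse model whose missing subgrid tendency is supplied by a closure term. The polynomial `closure` below is a hypothetical stand-in for the trained neural network, not the authors' model.

```python
import numpy as np

def l96_two_scale_rhs(X, Y, F=10.0, h=1.0, b=10.0, c=10.0):
    """Right-hand side of the two-scale Lorenz 96 system.
    X: (K,) slow variables; Y: (K, J) fast variables coupled to each X_k."""
    K, J = Y.shape
    # Slow dynamics: advection, damping, forcing, and coupling to the fast sum.
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2))
          - X + F - (h * c / b) * Y.sum(axis=1))
    # Fast dynamics on the flattened, cyclically indexed fine grid.
    Yf = Y.ravel()
    dYf = (c * b * np.roll(Yf, -1) * (np.roll(Yf, 1) - np.roll(Yf, -2))
           - c * Yf + (h * c / b) * np.repeat(X, J))
    return dX, dYf.reshape(K, J)

def coarse_rhs(X, closure, F=10.0):
    """Coarse model: resolved dynamics plus a learned subgrid source term."""
    dX = np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F
    return dX + closure(X)

# Hypothetical polynomial closure standing in for the trained network.
closure = lambda X: -(0.3 + 0.2 * X - 0.01 * X**2)
```

In the NODE setting, the closure would be a neural network trained so that trajectories of `coarse_rhs` match filtered trajectories of the two-scale system at a continuous level.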
Related papers
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the advantages of NWoS in accuracy, speed, and computational cost.
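For context, the classical walk-on-spheres recursion that NWoS builds on can be sketched in a few lines: repeatedly jump to a uniform point on the largest sphere inscribed in the domain until the walker is within a tolerance of the boundary. The unit-disk setup and harmonic test function below are illustrative choices, not taken from the paper.

```python
import numpy as np

def walk_on_spheres(x0, g, dist, proj, eps=1e-3, n_walks=2000, seed=0):
    """Estimate u(x0) for Laplace's equation with Dirichlet data g by
    averaging g at the boundary points where random sphere walks stop."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while dist(x) >= eps:
            # Jump uniformly on the largest sphere inscribed in the domain.
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x = x + dist(x) * np.array([np.cos(theta), np.sin(theta)])
        total += g(proj(x))
    return total / n_walks

# Illustrative setup: the unit disk with boundary data g(x, y) = x,
# whose harmonic extension is simply u(x, y) = x.
dist = lambda x: 1.0 - np.linalg.norm(x)   # distance to the circle
proj = lambda x: x / np.linalg.norm(x)     # nearest boundary point
u_est = walk_on_spheres([0.5, 0.0], lambda p: p[0], dist, proj)
```

NWoS uses estimates of this kind to supervise a neural network, amortizing the Monte Carlo cost over many query points.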
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
- Enhancing Low-Order Discontinuous Galerkin Methods with Neural Ordinary Differential Equations for Compressible Navier--Stokes Equations [0.18648070031379424]
It is common to run a low-fidelity model with a subgrid-scale model to reduce the computational cost.
We propose a novel method for learning the subgrid-scale model effects when simulating partial differential equations, using solvers augmented by neural ordinary differential equations.
Our approach learns the missing scales of the low-order DG solver at a continuous level and hence improves the accuracy of the low-order DG approximations.
arXiv Detail & Related papers (2023-10-29T04:26:23Z)
- GNRK: Graph Neural Runge-Kutta method for solving partial differential equations [0.0]
This study introduces a novel approach called Graph Neural Runge-Kutta (GNRK).
GNRK integrates graph neural network modules with a recurrent structure inspired by classical solvers.
It demonstrates the capability to address general PDEs, irrespective of initial conditions or PDE coefficients.
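The recurrent structure the summary refers to is the classical Runge-Kutta update; in GNRK the right-hand-side function would be a graph neural network module rather than the analytic `f` used in this toy sketch.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step; in GNRK the right-hand
    side f would be a graph neural network instead of an analytic function."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy problem dy/dt = -y, whose exact solution is exp(-t).
f = lambda t, y: -y
y, t, dt = np.array([1.0]), 0.0, 0.1
for _ in range(10):
    y = rk4_step(f, t, y, dt)
    t += dt
```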
arXiv Detail & Related papers (2023-10-01T08:52:46Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs are prone to training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
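A minimal sketch of an implicit gradient step, solved here by fixed-point iteration on a toy quadratic loss. The inner solver and step size are illustrative assumptions, not the paper's training setup.

```python
import numpy as np

def implicit_gd_step(grad, theta, lr, n_inner=50):
    """One implicit gradient step: solve theta_new = theta - lr * grad(theta_new)
    by fixed-point iteration (this simple inner solver converges when
    lr times the Lipschitz constant of grad is below one)."""
    theta_new = np.array(theta, dtype=float)
    for _ in range(n_inner):
        theta_new = theta - lr * grad(theta_new)
    return theta_new

# Toy quadratic loss L(theta) = theta**2, so grad(theta) = 2 * theta.
# The implicit update contracts theta by 1 / (1 + 2 * lr) per step.
grad = lambda th: 2.0 * th
theta = np.array([1.0])
for _ in range(5):
    theta = implicit_gd_step(grad, theta, lr=0.4)
```

Unlike the explicit update, the implicit step remains a contraction for any positive step size on this loss, which is the stability property the paper exploits.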
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
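The reduced-order POD basis mentioned above is obtained from the SVD of a snapshot matrix; the synthetic sine-mode snapshots below are an illustrative assumption, not the paper's Euler data.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 64)
# Synthetic snapshot matrix: columns are solution samples whose modal
# energy decays, so a few POD modes capture most of the variance.
snapshots = np.stack([np.sin(np.pi * k * x) / k**2 for k in range(1, 6)], axis=1)
# The POD basis consists of the left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3
basis = U[:, :r]               # rank-r reduced-order basis
coeffs = basis.T @ snapshots   # reduced coordinates a network could regress onto
recon = basis @ coeffs         # rank-r reconstruction of the snapshots
```

The networks in the paper would then be trained to map PDE parameters to coordinates like `coeffs`, so that solutions are reconstructed from the fixed basis.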
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Cogradient Descent for Dependable Learning [64.02052988844301]
We propose a dependable learning based on Cogradient Descent (CoGD) algorithm to address the bilinear optimization problem.
CoGD is introduced to solve bilinear problems when one variable has a sparsity constraint.
It can also be used to decompose the association of features and weights, which further generalizes our method to better train convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-06-20T04:28:20Z)
- Learning stochastic dynamical systems with neural networks mimicking the Euler-Maruyama scheme [14.436723124352817]
We propose a data-driven approach in which the parameters of the SDE are represented by a neural network with a built-in SDE integration scheme.
The algorithm is applied to geometric Brownian motion and a version of the Lorenz-63 model.
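The built-in integration scheme referenced above is the Euler-Maruyama discretization. A sketch on geometric Brownian motion, with analytic drift and diffusion standing in for the neural networks the paper would train:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, n_paths, seed=0):
    """Simulate dX = drift(X) dt + diffusion(X) dW with the Euler-Maruyama
    scheme; in the paper, drift and diffusion would be neural networks."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + drift(x) * dt + diffusion(x) * dW
    return x

# Geometric Brownian motion dX = mu X dt + sigma X dW, with E[X_T] = x0 * exp(mu T).
mu, sigma = 0.05, 0.2
xT = euler_maruyama(lambda x: mu * x, lambda x: sigma * x,
                    x0=1.0, dt=1e-3, n_steps=1000, n_paths=5000)
```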
arXiv Detail & Related papers (2021-05-18T11:41:34Z)
- Learning optimal multigrid smoothers via neural networks [1.9336815376402723]
We propose an efficient framework for learning optimized smoothers from operator stencils in the form of convolutional neural networks (CNNs).
CNNs are trained on small-scale problems from a given type of PDEs based on a supervised loss function derived from multigrid convergence theories.
Numerical results on anisotropic rotated Laplacian problems demonstrate improved convergence rates and solution time compared with classical hand-crafted relaxation methods.
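For reference, a typical hand-crafted relaxation baseline of the kind mentioned above is weighted Jacobi. A sketch on the 1D Poisson stencil, where the paper would learn a CNN smoother instead of fixing the damping factor by hand:

```python
import numpy as np

def weighted_jacobi(A, b, x, omega=2/3, n_sweeps=50):
    """Classical weighted Jacobi relaxation x <- x + omega * D^{-1} (b - A x);
    the paper learns CNN smoothers rather than fixing omega by hand."""
    d = np.diag(A)
    for _ in range(n_sweeps):
        x = x + omega * (b - A @ x) / d
    return x

n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Poisson stencil
b = np.zeros(n)
x0 = np.random.default_rng(0).standard_normal(n)      # rough initial error
x = weighted_jacobi(A, b, x0)
```

With b = 0 the iterate is pure error; weighted Jacobi damps its high-frequency components quickly, which is exactly the property multigrid asks of a smoother.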
arXiv Detail & Related papers (2021-02-24T05:02:54Z) - Actor-Critic Algorithm for High-dimensional Partial Differential
Equations [1.5644600570264835]
We develop a deep learning model to solve high-dimensional nonlinear parabolic partial differential equations.
The Markovian property of the BSDE is utilized in designing our neural network architecture.
We demonstrate those improvements by solving a few well-known classes of PDEs.
arXiv Detail & Related papers (2020-10-07T20:53:24Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.