GNRK: Graph Neural Runge-Kutta method for solving partial differential
equations
- URL: http://arxiv.org/abs/2310.00618v1
- Date: Sun, 1 Oct 2023 08:52:46 GMT
- Title: GNRK: Graph Neural Runge-Kutta method for solving partial differential
equations
- Authors: Hoyun Choi, Sungyeop Lee, B. Kahng, Junghyo Jo
- Abstract summary: This study introduces a novel approach called Graph Neural Runge-Kutta (GNRK).
GNRK integrates graph neural network modules with a recurrent structure inspired by classical solvers.
It addresses general PDEs, irrespective of initial conditions or PDE coefficients.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks have proven to be efficient surrogate models for tackling
partial differential equations (PDEs). However, their applicability is often
confined to specific PDEs under certain constraints, in contrast to classical
PDE solvers that rely on numerical differentiation. Striking a balance between
efficiency and versatility, this study introduces a novel approach called Graph
Neural Runge-Kutta (GNRK), which integrates graph neural network modules with a
recurrent structure inspired by the classical solvers. The GNRK operates on
graph structures, ensuring its resilience to changes in spatial and temporal
resolutions during domain discretization. Moreover, it demonstrates the
capability to address general PDEs, irrespective of initial conditions or PDE
coefficients. To assess its performance, we benchmark the GNRK against existing
neural network based PDE solvers using the 2-dimensional Burgers' equation,
revealing the GNRK's superiority in terms of model size and accuracy.
Additionally, this graph-based methodology offers a straightforward extension
for solving coupled differential equations, typically necessitating more
intricate models.
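The recurrence described in the abstract can be illustrated with a minimal sketch: a classical RK4 step whose right-hand side is a small graph module that mixes each node's state with its neighbors' states over the discretized spatial graph. All names, weights, and the toy message-passing update below are illustrative stand-ins, not the authors' implementation.

```python
import math

def gnn_rhs(u, neighbors, w_self=0.5, w_neigh=0.5):
    """Toy graph-neural right-hand side: each node mixes its own state
    with the mean of its neighbors' states. A stand-in for the trained
    GNN module in GNRK (weights here are fixed, not learned)."""
    out = []
    for i, ui in enumerate(u):
        nbrs = neighbors[i]
        mean = sum(u[j] for j in nbrs) / len(nbrs) if nbrs else 0.0
        out.append(math.tanh(w_self * ui + w_neigh * mean))
    return out

def rk4_step(u, dt, rhs, neighbors):
    """Classical 4th-order Runge-Kutta step; GNRK applies the same
    learned module recurrently at every stage, as sketched here."""
    add = lambda a, b, s: [ai + s * bi for ai, bi in zip(a, b)]
    k1 = rhs(u, neighbors)
    k2 = rhs(add(u, k1, 0.5 * dt), neighbors)
    k3 = rhs(add(u, k2, 0.5 * dt), neighbors)
    k4 = rhs(add(u, k3, dt), neighbors)
    return [ui + (dt / 6.0) * (a + 2 * b + 2 * c + d)
            for ui, a, b, c, d in zip(u, k1, k2, k3, k4)]

# 4-node ring graph standing in for a discretized 1D periodic domain
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
u0 = [1.0, 0.0, -1.0, 0.0]   # initial node states
u1 = rk4_step(u0, 0.01, gnn_rhs, neighbors)
print(len(u1))               # one updated state per node
```

Because the graph module only reads node states and neighbor lists, the same step applies unchanged to finer or coarser discretizations, which is the resolution-invariance property the abstract highlights.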
Related papers
- First-order PDES for Graph Neural Networks: Advection And Burgers Equation Models [1.4174475093445238]
This paper presents new Graph Neural Network models that incorporate two first-order Partial Differential Equations (PDEs).
Our experimental findings highlight the capacity of our new PDE model to achieve comparable results with higher-order PDE models and fix the over-smoothing problem up to 64 layers.
Results underscore the adaptability and versatility of GNNs, indicating that unconventional approaches can yield outcomes on par with established techniques.
arXiv Detail & Related papers (2024-04-03T21:47:02Z) - Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z) - Autoregressive Renaissance in Neural PDE Solvers [0.0]
A paper published in ICLR 2022 revisits autoregressive models and designs a message passing graph neural network.
This blog post delves into the key contributions of this work, exploring the strategies used to address the common problem of instability in autoregressive models.
arXiv Detail & Related papers (2023-10-30T17:35:26Z) - Neural Delay Differential Equations: System Reconstruction and Image
Classification [14.59919398960571]
We propose a new class of continuous-depth neural networks with delay, named Neural Delay Differential Equations (NDDEs).
Compared to NODEs, NDDEs have a stronger capacity of nonlinear representations.
We achieve lower loss and higher accuracy not only for synthetically produced data but also for CIFAR-10, a well-known image dataset.
arXiv Detail & Related papers (2023-04-11T16:09:28Z) - Learning Subgrid-scale Models with Neural Ordinary Differential
Equations [0.39160947065896795]
We propose a new approach to learning the subgrid-scale model when simulating partial differential equations (PDEs).
In this approach neural networks are used to learn the coarse- to fine-grid map, which can be viewed as subgrid-scale parameterization.
Our method inherits the advantages of NODEs and can be used to parameterize subgrid scales, approximate coupling operators, and improve the efficiency of low-order solvers.
arXiv Detail & Related papers (2022-12-20T02:45:09Z) - Neural Operator with Regularity Structure for Modeling Dynamics Driven
by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS) which incorporates the feature vectors for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs including the dynamic Phi^4_1 model and the 2D Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z) - Score-based Generative Modeling of Graphs via the System of Stochastic
Differential Equations [57.15855198512551]
We propose a novel score-based generative model for graphs with a continuous-time framework.
We show that our method is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule.
arXiv Detail & Related papers (2022-02-05T08:21:04Z) - dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, that makes use of dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z) - Neural Delay Differential Equations [9.077775405204347]
We propose a new class of continuous-depth neural networks with delay, named Neural Delay Differential Equations (NDDEs).
For computing the corresponding gradients, we use the adjoint sensitivity method to obtain the delayed dynamics of the adjoint.
Our results reveal that appropriately articulating the elements of dynamical systems into the network design is truly beneficial to promoting the network performance.
arXiv Detail & Related papers (2021-02-22T06:53:51Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical regularization gains, making the performance gap between neural ODE and neural SDE models negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.