On the under-reaching phenomenon in message-passing neural PDE solvers: revisiting the CFL condition
- URL: http://arxiv.org/abs/2507.08861v1
- Date: Wed, 09 Jul 2025 11:50:10 GMT
- Title: On the under-reaching phenomenon in message-passing neural PDE solvers: revisiting the CFL condition
- Authors: Lucas Tesan, Mikel M. Iparraguirre, David Gonzalez, Pedro Martins, Elias Cueto
- Abstract summary: This paper proposes sharp lower bounds for the number of message passing iterations required in graph neural networks (GNNs) when solving partial differential equations (PDEs). We investigate the relationship between the physical constants of the equations governing the problem, the spatial and temporal discretisation, and the message passing mechanisms in GNNs. When the suggested lower bound is satisfied, the GNN parameterisation allows the model to accurately capture the underlying phenomenology, resulting in solvers of adequate accuracy.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper proposes sharp lower bounds for the number of message passing iterations required in graph neural networks (GNNs) when solving partial differential equations (PDEs). This significantly reduces the need for exhaustive hyperparameter tuning. Bounds are derived for the three fundamental classes of PDEs (hyperbolic, parabolic and elliptic) by relating the physical characteristics of the problem in question to the message-passing requirement of GNNs. In particular, we investigate the relationship between the physical constants of the equations governing the problem, the spatial and temporal discretisation, and the message passing mechanisms in GNNs. When the number of message passing iterations is below these proposed limits, information does not propagate efficiently through the network, resulting in poor solutions, even for deep GNN architectures. In contrast, when the suggested lower bound is satisfied, the GNN parameterisation allows the model to accurately capture the underlying phenomenology, resulting in solvers of adequate accuracy. Examples are provided for four different equations, demonstrating the sharpness of the proposed lower bounds.
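The CFL-style reasoning behind the hyperbolic-case bound can be sketched numerically: information in a wave-like PDE travels at finite speed c, covering a distance c·Δt per time step, while each message-passing iteration propagates information one graph hop (roughly one mesh spacing Δx). A lower bound on iterations then follows as ceil(c·Δt/Δx). The function below is an illustrative reconstruction of that argument, not the paper's exact formula; the names and the regular-mesh assumption are ours.

```python
import math

def min_message_passing_steps(wave_speed: float, dt: float, dx: float) -> int:
    """Illustrative CFL-style lower bound for a hyperbolic PDE on a
    regular mesh: information travels wave_speed * dt per time step,
    and each message-passing iteration advances one graph hop (~dx),
    so the GNN needs at least ceil(wave_speed * dt / dx) iterations
    for the receptive field to cover the physical domain of dependence.
    """
    if dt <= 0 or dx <= 0:
        raise ValueError("dt and dx must be positive")
    return math.ceil(wave_speed * dt / dx)

# Example: a wave moving at c = 1 crosses 4 cells of width 0.05
# within a time step of 0.2, so at least 4 hops are required.
print(min_message_passing_steps(1.0, 0.2, 0.05))
```

Fewer iterations than this bound leave the network's receptive field smaller than the physical domain of dependence, which matches the paper's observation that under-reaching models produce poor solutions regardless of depth.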
Related papers
- General-Kindred Physics-Informed Neural Network to the Solutions of Singularly Perturbed Differential Equations [11.121415128908566]
We propose the General-Kindred Physics-Informed Neural Network (GKPINN) for solving Singular Perturbation Differential Equations (SPDEs).
This approach utilizes prior knowledge of the boundary layer from the equation and establishes a novel network to assist the PINN in approximating the boundary layer.
The research findings underscore the exceptional performance of our novel approach, GKPINN, which delivers a remarkable enhancement in reducing the $L$ error by two to four orders of magnitude compared to the established PINN methodology.
arXiv Detail & Related papers (2024-08-27T02:03:22Z) - Stacked tensorial neural networks for reduced-order modeling of a parametric partial differential equation [0.0]
I describe a deep neural network architecture that fuses multiple TNNs into a larger network.
I evaluate this architecture on a parametric PDE with three independent variables and three parameters.
arXiv Detail & Related papers (2023-12-21T21:44:50Z) - Grad-Shafranov equilibria via data-free physics informed neural networks [0.0]
We show that PINNs can accurately and effectively solve the Grad-Shafranov equation with several different boundary conditions.
We introduce a parameterized PINN framework, expanding the input space to include variables such as pressure, aspect ratio, elongation, and triangularity.
arXiv Detail & Related papers (2023-11-22T16:08:38Z) - Number Theoretic Accelerated Learning of Physics-Informed Neural Networks [16.57441317977376]
We introduce lattice training and periodization tricks, which ensure the conditions required by the theory. Experiments demonstrate that GLT requires 2-7 times fewer collocation points, resulting in lower computational cost.
arXiv Detail & Related papers (2023-07-26T00:01:21Z) - A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We develop an ODE-based IVP solver which prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
arXiv Detail & Related papers (2023-04-28T17:28:18Z) - Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z) - LordNet: An Efficient Neural Network for Learning to Solve Parametric Partial Differential Equations without Simulated Data [47.49194807524502]
We propose LordNet, a tunable and efficient neural network for modeling entanglements.
The experiments on solving Poisson's equation and (2D and 3D) Navier-Stokes equation demonstrate that the long-range entanglements can be well modeled by the LordNet.
arXiv Detail & Related papers (2022-06-19T14:41:08Z) - Learning to Solve PDE-constrained Inverse Problems with Graph Networks [51.89325993156204]
In many application domains across science and engineering, we are interested in solving inverse problems with constraints defined by a partial differential equation (PDE).
Here we explore GNNs to solve such PDE-constrained inverse problems.
We demonstrate computational speedups of up to 90x using GNNs compared to principled solvers.
arXiv Detail & Related papers (2022-06-01T18:48:01Z) - Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method, which can partially alleviate this problem, by improving neural PDE solver sample complexity.
In the context of PDEs, we can quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
arXiv Detail & Related papers (2022-02-15T18:43:17Z) - Robust Learning of Physics Informed Neural Networks [2.86989372262348]
Physics-informed Neural Networks (PINNs) have been shown to be effective in solving partial differential equations.
This paper shows that a PINN can be sensitive to errors in training data and overfit itself in dynamically propagating these errors over the domain of the solution of the PDE.
arXiv Detail & Related papers (2021-10-26T00:10:57Z) - PhyCRNet: Physics-informed Convolutional-Recurrent Network for Solving Spatiotemporal PDEs [8.220908558735884]
Partial differential equations (PDEs) play a fundamental role in modeling and simulating problems across a wide range of disciplines.
Recent advances in deep learning have shown the great potential of physics-informed neural networks (NNs) to solve PDEs as a basis for data-driven inverse analysis.
We propose the novel physics-informed convolutional-recurrent learning architectures (PhyCRNet and PhyCRNet-s) for solving PDEs without any labeled data.
arXiv Detail & Related papers (2021-06-26T22:22:19Z) - dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z) - Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.