Graph Neural PDE Solvers with Conservation and Similarity-Equivariance
- URL: http://arxiv.org/abs/2405.16183v1
- Date: Sat, 25 May 2024 11:18:27 GMT
- Title: Graph Neural PDE Solvers with Conservation and Similarity-Equivariance
- Authors: Masanobu Horie, Naoto Mitsume
- Abstract summary: This study introduces a novel machine-learning architecture that is highly generalizable and adheres to conservation laws and physical symmetries.
The foundation of this architecture is graph neural networks (GNNs), which are adept at accommodating a variety of shapes and forms.
- Score: 6.077284832583712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Utilizing machine learning to address partial differential equations (PDEs) presents significant challenges due to the diversity of spatial domains and their corresponding state configurations, which makes it difficult to cover all potential scenarios through data-driven methods alone. Moreover, there are legitimate concerns regarding the generalization and reliability of such approaches, as they often overlook inherent physical constraints. In response to these challenges, this study introduces a novel machine-learning architecture that is highly generalizable and adheres to conservation laws and physical symmetries, thereby ensuring greater reliability. The foundation of this architecture is graph neural networks (GNNs), which are adept at accommodating a variety of shapes and forms. Additionally, we explore the parallels between GNNs and traditional numerical solvers, facilitating a seamless integration of conservation principles and symmetries into machine-learning models. Our experiments demonstrate that incorporating physical laws significantly enhances the model's generalizability: accuracy does not degrade significantly on unseen spatial domains, whereas other models degrade. The code is available at https://github.com/yellowshippo/fluxgnn-icml2024.
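The abstract's link between GNNs and conservative numerical schemes can be sketched with a toy example (an illustration of the general flux idea, not the authors' FluxGNN implementation): if every directed edge carries a single flux that is added to one endpoint and subtracted from the other, the nodewise total is conserved by construction, no matter what function produces the flux.

```python
import numpy as np

rng = np.random.default_rng(0)

def flux_step(u, edges, flux_fn, dt=0.1):
    """One conservative update: each directed edge (i, j) carries a flux
    from i to j; applying +flux to j and -flux to i keeps sum(u) fixed."""
    du = np.zeros_like(u)
    for i, j in edges:
        f = flux_fn(u[i], u[j])  # scalar flux across the edge
        du[i] -= f
        du[j] += f
    return u + dt * du

# Stand-in for a "learned" flux: any function of the endpoint states works;
# here a simple diffusive flux (u_i - u_j) drives the exchange.
def toy_flux(ui, uj):
    return ui - uj

u0 = rng.random(5)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
u1 = flux_step(u0, edges, toy_flux)
print(np.isclose(u0.sum(), u1.sum()))  # total is conserved
```

The conservation guarantee comes from the antisymmetric bookkeeping alone, which is why it survives replacing `toy_flux` with an arbitrary trained network.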
Related papers
- Adversarial Learning for Neural PDE Solvers with Sparse Data [4.226449585713182]
This study introduces a universal learning strategy for neural network PDE solvers, named Systematic Model Augmentation for Robust Training (SMART).
By focusing on challenging and improving the model's weaknesses, SMART reduces generalization error during training under data-scarce conditions.
arXiv Detail & Related papers (2024-09-04T04:18:25Z)
- Understanding the differences in Foundation Models: Attention, State Space Models, and Recurrent Neural Networks [50.29356570858905]
We introduce the Dynamical Systems Framework (DSF), which allows a principled investigation of all these architectures in a common representation.
We provide principled comparisons between softmax attention and other model classes, discussing the theoretical conditions under which softmax attention can be approximated.
This shows the DSF's potential to guide the systematic development of future more efficient and scalable foundation models.
arXiv Detail & Related papers (2024-05-24T17:19:57Z)
- Physics-Informed Boundary Integral Networks (PIBI-Nets): A Data-Driven Approach for Solving Partial Differential Equations [1.6435014180036467]
Partial differential equations (PDEs) are widely used to describe relevant phenomena in dynamical systems.
In high-dimensional settings, PINNs often suffer from computational problems because they require dense collocation points over the entire computational domain.
We present Physics-Informed Boundary Integral Networks (PIBI-Nets) as a data-driven approach for solving PDEs in one dimension less than the original problem space.
arXiv Detail & Related papers (2023-08-18T14:03:34Z)
- Deep Learning-based surrogate models for parametrized PDEs: handling geometric variability through graph neural networks [0.0]
This work explores the potential usage of graph neural networks (GNNs) for the simulation of time-dependent PDEs.
We propose a systematic strategy to build surrogate models based on a data-driven time-stepping scheme.
We show that GNNs can provide a valid alternative to traditional surrogate models in terms of computational efficiency and generalization to new scenarios.
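A data-driven time-stepping scheme of the kind described can be sketched as an autoregressive rollout (a minimal illustration, with a hand-written diffusion step standing in for the trained GNN surrogate):

```python
import numpy as np

def rollout(u0, step_fn, n_steps):
    """Autoregressive rollout: the surrogate's one-step map is applied
    repeatedly, u_{n+1} = step_fn(u_n), mimicking a time integrator."""
    traj = [u0]
    for _ in range(n_steps):
        traj.append(step_fn(traj[-1]))
    return np.stack(traj)

# Stand-in for a trained one-step model: plain diffusion on a periodic 1-D grid.
def surrogate_step(u, dt=0.1):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)  # periodic Laplacian stencil
    return u + dt * lap

u0 = np.sin(np.linspace(0, 2 * np.pi, 32, endpoint=False))
traj = rollout(u0, surrogate_step, n_steps=50)
print(traj.shape)  # (51, 32)
```

In a learned surrogate, `surrogate_step` would be a GNN evaluated on the mesh graph; the rollout structure is unchanged.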
arXiv Detail & Related papers (2023-08-03T08:14:28Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
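The separation NCLaw argues for can be illustrated with a toy, assumption-laden sketch (not the paper's implementation): the conservation-form update of the governing PDE is hard-coded, and only the constitutive law relating flux to gradient is a swappable, potentially learnable component.

```python
import numpy as np

def step(u, constitutive, dt=0.1, dx=1.0):
    """Fixed PDE structure: compute face gradients, apply the constitutive
    law to get face fluxes, then update cells by the flux divergence.
    Only `constitutive` is a replaceable (learnable) component."""
    grad = (np.roll(u, -1) - u) / dx   # forward difference at cell faces
    q = constitutive(grad)             # face fluxes from the constitutive law
    div = (q - np.roll(q, 1)) / dx     # flux divergence at cell centers
    return u + dt * div

# Stand-in constitutive law: linear (Fickian) flux, q = k * grad(u).
def fickian(grad, k=0.5):
    return k * grad

u0 = np.zeros(16)
u0[8] = 1.0
u1 = step(u0, fickian)
print(np.isclose(u0.sum(), u1.sum()))  # flux form conserves the total
```

Swapping `fickian` for a small neural network is the NCLaw-style move: the PDE prior stays exact while only the material response is fit to data.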
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Physics-aware deep learning framework for linear elasticity [0.0]
The paper presents an efficient and robust data-driven deep learning (DL) computational framework for linear continuum elasticity problems.
For an accurate representation of the field variables, a multi-objective loss function is proposed.
Several benchmark problems including the Airy solution to elasticity and the Kirchhoff-Love plate problem are solved.
arXiv Detail & Related papers (2023-02-19T20:33:32Z)
- Generalized Neural Closure Models with Interpretability [28.269731698116257]
We develop a novel and versatile methodology of unified neural partial delay differential equations.
We augment existing/low-fidelity dynamical models directly in their partial differential equation (PDE) forms with both Markovian and non-Markovian neural network (NN) closure parameterizations.
We demonstrate the new generalized neural closure models (gnCMs) framework using four sets of experiments based on advecting nonlinear waves, shocks, and ocean acidification models.
arXiv Detail & Related papers (2023-01-15T21:57:43Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
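The residual loss that PINNs minimize on a coupled ODE system can be sketched as follows (a closed-form trial ansatz stands in for the neural network; the benchmark's actual setup differs):

```python
import numpy as np

def pinn_style_loss(params, t):
    """Residual loss for the coupled system x' = y, y' = -x (a harmonic
    oscillator), with a toy ansatz x = a*sin(w t), y = b*cos(w t).
    A real PINN uses a neural ansatz and automatic differentiation."""
    a, b, w = params
    x, y = a * np.sin(w * t), b * np.cos(w * t)
    dx, dy = a * w * np.cos(w * t), -b * w * np.sin(w * t)
    r1 = dx - y  # residual of the first equation, x' = y
    r2 = dy + x  # residual of the second equation, y' = -x
    return np.mean(r1**2) + np.mean(r2**2)

t = np.linspace(0, 2 * np.pi, 64)
print(pinn_style_loss((1.0, 1.0, 1.0), t))        # exact solution: loss ~ 0
print(pinn_style_loss((1.0, 1.0, 2.0), t) > 0.1)  # wrong frequency: large residual
```

The paper's diagnosis concerns the landscape of exactly this kind of loss: as the system's complexity grows, its conditioning and local curvature worsen, and gradient descent stalls far from the true solution.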
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws obey symmetries, which provide a vital inductive bias for model generalization.
Our model achieves on average over 3% enhancement in contact prediction accuracy across 8 scenarios on Physion and 2X lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z)
- Equivariant Graph Mechanics Networks with Constraints [83.38709956935095]
We propose Graph Mechanics Network (GMN) which is efficient, equivariant and constraint-aware.
GMN represents, by generalized coordinates, the forward kinematics information (positions and velocities) of a structural object.
Extensive experiments support the advantages of GMN compared to the state-of-the-art GNNs in terms of prediction accuracy, constraint satisfaction and data efficiency.
arXiv Detail & Related papers (2022-03-12T14:22:14Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
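The claim that message passing solvers representationally contain classical methods can be made concrete with a small, hypothetical example: with hand-chosen messages on a periodic chain graph, one message-passing round reproduces the three-point finite-difference Laplacian exactly.

```python
import numpy as np

def message_passing(u, edges, message, update):
    """Generic MP layer: aggregate edge messages at each node, then update."""
    agg = np.zeros_like(u)
    for i, j in edges:
        agg[j] += message(u[i], u[j])
    return update(u, agg)

# Periodic chain with edges in both directions; for node j the two incoming
# messages (u_{j-1} - u_j) and (u_{j+1} - u_j) sum to the second-difference
# stencil u_{j-1} - 2*u_j + u_{j+1}.
n = 8
edges = [(i, (i + 1) % n) for i in range(n)] + [((i + 1) % n, i) for i in range(n)]
u = np.arange(n, dtype=float) ** 2

mp_lap = message_passing(u, edges, lambda ui, uj: ui - uj,
                         lambda state, agg: agg)
fd_lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
print(np.allclose(mp_lap, fd_lap))  # True: the MP layer equals the FD stencil
```

Replacing the hand-chosen `message` and `update` with trained networks recovers the paper's setting, where the learned scheme can match or generalize beyond fixed stencils.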
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.