E($3$) Equivariant Graph Neural Networks for Particle-Based Fluid
Mechanics
- URL: http://arxiv.org/abs/2304.00150v1
- Date: Fri, 31 Mar 2023 21:56:35 GMT
- Title: E($3$) Equivariant Graph Neural Networks for Particle-Based Fluid
Mechanics
- Authors: Artur P. Toshev, Gianluca Galletti, Johannes Brandstetter, Stefan
Adami and Nikolaus A. Adams
- Abstract summary: We demonstrate that equivariant graph neural networks have the potential to learn more accurate dynamic-interaction models.
We benchmark two well-studied fluid flow systems, namely the 3D decaying Taylor-Green vortex and the 3D reverse Poiseuille flow.
- Score: 2.1401663582288144
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We contribute to the vastly growing field of machine learning for engineering
systems by demonstrating that equivariant graph neural networks have the
potential to learn more accurate dynamic-interaction models than their
non-equivariant counterparts. We benchmark two well-studied fluid flow systems,
namely the 3D decaying Taylor-Green vortex and the 3D reverse Poiseuille flow,
and compare equivariant graph neural networks to their non-equivariant
counterparts on different performance measures, such as kinetic energy or
Sinkhorn distance. Such measures are typically used in engineering to validate
numerical solvers. Our main findings are that while being rather slow to train
and evaluate, equivariant models learn more physically accurate interactions.
This indicates opportunities for future work towards coarse-grained models for
turbulent flows, and generalization across system dynamics and parameters.
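The two validation measures named in the abstract can be stated concretely: the total kinetic energy of the particle system, and an entropy-regularized optimal-transport (Sinkhorn) distance between predicted and reference particle clouds. Below is a minimal NumPy sketch of both quantities, not the authors' code; the function names, the uniform-mass and uniform-weight assumptions, and the regularization default are illustrative only.

```python
import numpy as np

def kinetic_energy(velocities, mass=1.0):
    """Total kinetic energy 0.5 * m * sum_i |v_i|^2 of a particle system.

    velocities: (N, 3) array of particle velocities; uniform particle mass is assumed here.
    """
    return 0.5 * mass * np.sum(velocities ** 2)

def sinkhorn_distance(x_pred, x_ref, reg=1.0, n_iters=200):
    """Entropy-regularized transport cost between two uniformly weighted particle clouds.

    x_pred: (N, 3) predicted positions, x_ref: (M, 3) reference positions.
    reg: entropic regularization; for small values a log-domain solver is advisable.
    """
    n, m = len(x_pred), len(x_ref)
    a = np.full(n, 1.0 / n)                      # uniform weight per predicted particle
    b = np.full(m, 1.0 / m)                      # uniform weight per reference particle
    C = np.sum((x_pred[:, None, :] - x_ref[None, :, :]) ** 2, axis=-1)  # squared distances
    K = np.exp(-C / reg)                         # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):                     # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]              # entropic transport plan
    return float(np.sum(P * C))                  # transport cost under that plan
```

In practice such metrics are tracked over rollout time and compared against the reference solver's trajectory, which is the kind of validation the abstract alludes to.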
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs)
Our approach builds on a recently introduced framework for learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - Equivariant Graph Neural Operator for Modeling 3D Dynamics [148.98826858078556]
We propose the Equivariant Graph Neural Operator (EGNO) to directly model dynamics as trajectories rather than as next-step predictions.
EGNO explicitly learns the temporal evolution of 3D dynamics where we formulate the dynamics as a function over time and learn neural operators to approximate it.
Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods.
arXiv Detail & Related papers (2024-01-19T21:50:32Z) - SEGNO: Generalizing Equivariant Graph Neural Networks with Physical
Inductive Biases [66.61789780666727]
We show how the second-order continuity can be incorporated into GNNs while maintaining the equivariant property.
We also offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states.
Our model yields a significant improvement over the state-of-the-art baselines.
arXiv Detail & Related papers (2023-08-25T07:15:58Z) - Learning Lagrangian Fluid Mechanics with E($3$)-Equivariant Graph Neural
Networks [2.1401663582288144]
Equivariant graph neural networks have the potential to learn more accurate dynamic-interaction models.
We benchmark two well-studied fluid-flow systems, namely the 3D decaying Taylor-Green vortex and the 3D reverse Poiseuille flow.
We find that while currently being rather slow to train and evaluate, equivariant models with our proposed history embeddings learn more accurate physical interactions.
arXiv Detail & Related papers (2023-05-24T22:26:38Z) - Unravelling the Performance of Physics-informed Graph Neural Networks
for Dynamical Systems [5.787429262238507]
We evaluate the performance of graph neural networks (GNNs) and their variants with explicit constraints and different architectures.
Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance.
All the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulate large-scale realistic systems.
arXiv Detail & Related papers (2022-11-10T12:29:30Z) - Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws obey symmetries, which provide a vital inductive bias for model generalization.
Our model achieves, on average, over a 3% improvement in contact prediction accuracy across 8 scenarios on Physion and a 2x lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z) - Equivariant vector field network for many-body system modeling [65.22203086172019]
The Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z) - E(n) Equivariant Graph Neural Networks [86.75170631724548]
This paper introduces E(n)-Equivariant Graph Neural Networks (EGNNs), a new class of graph neural networks equivariant to rotations, translations, reflections, and permutations.
In contrast with existing methods, our work does not require computationally expensive higher-order representations in intermediate layers while it still achieves competitive or better performance.
arXiv Detail & Related papers (2021-02-19T10:25:33Z)
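For context on the last entry: the EGNN layer of Satorras et al. updates invariant node features h_i and coordinates x_i via m_ij = φ_e(h_i, h_j, ‖x_i − x_j‖²), x_i ← x_i + C Σ_j (x_i − x_j) φ_x(m_ij), and h_i ← φ_h(h_i, Σ_j m_ij), so coordinates are only ever moved along relative position vectors scaled by invariant quantities. The following is a minimal NumPy sketch of one such layer with simple linear-plus-tanh stand-ins for the MLPs φ_e, φ_x, φ_h; the weight shapes and the equivariance check are hypothetical illustrations, not the reference implementation.

```python
import numpy as np

def egnn_layer(h, x, edges, W_e, W_x, W_h):
    """One E(n)-equivariant message-passing step in the style of Satorras et al. (2021).

    h: (N, F) invariant node features; x: (N, 3) coordinates; edges: directed (i, j) pairs.
    W_e (2F+1, M), W_x (M,), W_h (F+M, F): linear stand-ins for the MLPs phi_e, phi_x, phi_h.
    """
    N = h.shape[0]
    m_agg = np.zeros((N, W_e.shape[1]))        # aggregated messages per receiving node
    x_new = x.copy()
    C = 1.0 / max(len(edges), 1)               # normalization of the coordinate update
    for i, j in edges:
        d2 = np.sum((x[i] - x[j]) ** 2)        # rotation/translation-invariant distance
        m_ij = np.tanh(np.concatenate([h[i], h[j], [d2]]) @ W_e)   # phi_e
        x_new[i] += C * (x[i] - x[j]) * (m_ij @ W_x)               # phi_x: scalar gate
        m_agg[i] += m_ij
    h_new = np.tanh(np.concatenate([h, m_agg], axis=1) @ W_h)      # phi_h
    return h_new, x_new

if __name__ == "__main__":
    # Sanity check: rotating/reflecting the input coordinates rotates the output
    # coordinates identically and leaves the invariant features unchanged.
    rng = np.random.default_rng(0)
    h = rng.normal(size=(5, 4)); x = rng.normal(size=(5, 3))
    edges = [(i, j) for i in range(5) for j in range(5) if i != j]
    W_e = rng.normal(size=(9, 8)); W_x = rng.normal(size=8); W_h = rng.normal(size=(12, 4))
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal transform
    h_rot, x_rot = egnn_layer(h, x @ Q.T, edges, W_e, W_x, W_h)
    h_ref, x_ref = egnn_layer(h, x, edges, W_e, W_x, W_h)
    assert np.allclose(x_rot, x_ref @ Q.T) and np.allclose(h_rot, h_ref)
```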