Scientific Computing Algorithms to Learn Enhanced Scalable Surrogates
for Mesh Physics
- URL: http://arxiv.org/abs/2304.00338v1
- Date: Sat, 1 Apr 2023 15:42:18 GMT
- Title: Scientific Computing Algorithms to Learn Enhanced Scalable Surrogates
for Mesh Physics
- Authors: Brian R. Bartoldson, Yeping Hu, Amar Saini, Jose Cadena, Yucheng Fu,
Jie Bao, Zhijie Xu, Brenda Ng, Phan Nguyen
- Abstract summary: MeshGraphNets (MGN) is a subclass of GNNs for mesh-based physics modeling.
We train MGN on meshes with millions of nodes to generate computational fluid dynamics simulations.
This work presents a practical path to scaling MGN for real-world applications.
- Score: 6.360914973656273
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data-driven modeling approaches can produce fast surrogates to study
large-scale physics problems. Among them, graph neural networks (GNNs) that
operate on mesh-based data are desirable because they possess inductive biases
that promote physical faithfulness, but hardware limitations have precluded
their application to large computational domains. We show that it is
\textit{possible} to train a class of GNN surrogates on 3D meshes. We scale
MeshGraphNets (MGN), a subclass of GNNs for mesh-based physics modeling, via
our domain decomposition approach to facilitate training that is mathematically
equivalent to training on the whole domain under certain conditions. With this,
we were able to train MGN on meshes with \textit{millions} of nodes to generate
computational fluid dynamics (CFD) simulations. Furthermore, we show how to
enhance MGN via higher-order numerical integration, which can reduce MGN's
error and training time. We validated our methods on an accompanying dataset of
3D $\text{CO}_2$-capture CFD simulations on a 3.1M-node mesh. This work
presents a practical path to scaling MGN for real-world applications.
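To make the integration enhancement concrete, here is a minimal Python sketch, assuming a trained surrogate `f` that maps a state to its predicted time derivative (a stand-in for MGN, not the paper's exact interface), contrasting a first-order forward Euler rollout with a second-order Heun (RK2) rollout:

```python
import numpy as np

def euler_step(f, u, dt):
    """First-order (forward Euler) update from a learned time derivative f."""
    return u + dt * f(u)

def heun_step(f, u, dt):
    """Second-order (Heun/RK2) update built from the same learned f."""
    k1 = f(u)            # derivative at the current state
    k2 = f(u + dt * k1)  # derivative at the provisional Euler state
    return u + 0.5 * dt * (k1 + k2)

f = lambda u: -u  # toy stand-in "model": du/dt = -u
u_e = u_h = np.ones(4)
for _ in range(10):
    u_e = euler_step(f, u_e, 0.1)
    u_h = heun_step(f, u_h, 0.1)
# After t = 1, u_h tracks the exact solution exp(-1) more closely than u_e.
```

Each Heun step costs two model evaluations instead of one, but the abstract's claim that higher-order integration can reduce both error and training time suggests the extra forward pass can pay for itself.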
Related papers
- Scalable and Consistent Graph Neural Networks for Distributed Mesh-based Data-driven Modeling [0.0]
This work develops a distributed graph neural network (GNN) methodology for mesh-based modeling applications.
Consistency refers to the fact that a GNN trained and evaluated on one rank (one large graph) is arithmetically equivalent to evaluations on multiple ranks (a partitioned graph).
It is shown how the NekRS mesh partitioning can be linked to the distributed GNN training and inference routines, resulting in a scalable mesh-based data-driven modeling workflow.
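The arithmetic-equivalence property above can be illustrated with a toy halo-exchange example: if a rank stores its owned nodes plus a one-hop halo of ghost nodes, one round of message passing yields exactly the same outputs on the owned nodes as the full-graph computation. The mean-aggregation layer below is a hypothetical stand-in for the paper's GNN layers:

```python
import numpy as np

def mp_layer(feats, edges):
    """One mean-aggregation message-passing layer over a directed edge list."""
    out = np.zeros_like(feats)
    deg = np.zeros(len(feats))
    for s, d in edges:
        out[d] += feats[s]
        deg[d] += 1
    return out / np.maximum(deg, 1.0)[:, None]

# Full graph: a path 0-1-2-3 with bidirectional edges.
feats = np.arange(4, dtype=float)[:, None]
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
full = mp_layer(feats, edges)

# One rank owns nodes {0, 1} and keeps node 2 as a one-hop halo (ghost) node.
sub_nodes = [0, 1, 2]
loc = {g: i for i, g in enumerate(sub_nodes)}
sub_edges = [(loc[s], loc[d]) for s, d in edges if s in loc and d in loc]
sub = mp_layer(feats[sub_nodes], sub_edges)

assert np.allclose(sub[:2], full[:2])  # owned-node outputs match the full graph
```

With k message-passing layers, the halo must extend k hops for the equivalence to hold, which is one reading of the "under certain conditions" caveat in the abstract above.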
arXiv Detail & Related papers (2024-10-02T15:22:27Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens a path toward the practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Physics-informed MeshGraphNets (PI-MGNs): Neural finite element solvers for non-stationary and nonlinear simulations on arbitrary meshes [13.41003911618347]
This work introduces PI-MGNs, a hybrid approach that combines PINNs and MGNs to solve non-stationary and nonlinear partial differential equations (PDEs) on arbitrary meshes.
Results show that the model scales well to large and complex meshes, although it is trained on small generic meshes only.
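As a rough sketch of the hybrid idea, a physics-informed objective adds a PDE-residual penalty to the usual data-fit term; the `residual` argument below is a hypothetical placeholder for the discretized residual of the governing PDE evaluated at the prediction, since the exact formulation PI-MGNs use depends on the equation being solved:

```python
import numpy as np

def physics_informed_loss(u_pred, u_data, residual, lam=1.0):
    """Generic physics-informed loss: data-fit term plus weighted PDE residual."""
    data_term = np.mean((u_pred - u_data) ** 2)
    physics_term = np.mean(residual ** 2)
    return data_term + lam * physics_term
```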
arXiv Detail & Related papers (2024-02-16T13:34:51Z)
- A foundation for exact binarized morphological neural networks [2.8925699537310137]
Training and running deep neural networks (NNs) often demand substantial computation and energy-intensive specialized hardware.
One way to reduce the computation and power cost is to use binary-weight NNs, but these are hard to train because the sign function has a non-smooth gradient (a common workaround is sketched below).
We present a model based on Mathematical Morphology (MM), which can binarize ConvNets without losing performance under certain conditions.
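The non-smooth-gradient obstacle mentioned above is commonly sidestepped with a straight-through estimator (STE), which binarizes on the forward pass and lets gradients flow on the backward pass; the PyTorch sketch below shows that standard trick, not this paper's Mathematical Morphology construction:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Pass gradients through where |w| <= 1; zero them elsewhere.
        return grad_out * (w.abs() <= 1).to(grad_out.dtype)

w = torch.randn(8, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()  # w.grad is 1 where |w| <= 1, else 0
```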
arXiv Detail & Related papers (2024-01-08T11:37:44Z)
- Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z)
- MLGCN: An Ultra Efficient Graph Convolution Neural Model For 3D Point Cloud Analysis [4.947552172739438]
We introduce a novel Multi-level Graph Convolution Neural (MLGCN) model, which uses Graph Neural Networks (GNN) blocks to extract features from 3D point clouds at specific locality levels.
Our approach produces comparable results to those of state-of-the-art models while requiring up to a thousand times fewer floating-point operations (FLOPs) and having significantly reduced storage requirements.
arXiv Detail & Related papers (2023-03-31T00:15:22Z)
- Efficient Learning of Mesh-Based Physical Simulation with BSMS-GNN [36.73790892258642]
Bi-stride pooling selects nodes on every other frontier of a breadth-first search (sketched below).
A single message-passing scheme per level and non-parametrized pooling, resembling U-Nets, significantly reduce computational costs.
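A minimal plain-Python reading of "every other frontier", assuming an adjacency-list graph and a single BFS seed (both simplifications; BSMS-GNN itself handles multiple components and pairs the pooling with learned message passing):

```python
from collections import deque

def bi_stride_nodes(adj, seed=0):
    """Return the nodes on even BFS frontiers, i.e. every other frontier."""
    level = {seed: 0}
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in level:
                level[v] = level[u] + 1
                queue.append(v)
    return {u for u, lvl in level.items() if lvl % 2 == 0}

# Path graph 0-1-2-3-4: BFS frontiers from node 0 are {0},{1},{2},{3},{4}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert bi_stride_nodes(adj) == {0, 2, 4}
```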
arXiv Detail & Related papers (2022-10-05T21:45:16Z)
- Convolutional Neural Networks on Manifolds: From Graphs and Back [122.06927400759021]
We propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and point-wise nonlinearities.
In sum, we treat the manifold model as the limit of large graphs and construct MNNs, and we can recover graph neural networks by discretizing the MNNs.
arXiv Detail & Related papers (2022-10-01T21:17:39Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike lottery-ticket-hypothesis (LTH) based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
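For context on what gradual pruning typically looks like, the widely used cubic sparsity schedule of Zhu & Gupta (2017) ramps sparsity from an initial to a final value over training; CGP's actual schedule may differ, so treat this as a generic sketch:

```python
def sparsity_at(step, s_init=0.0, s_final=0.9, start=0, duration=1000):
    """Cubic sparsity ramp commonly paired with magnitude pruning."""
    t = min(max(step - start, 0), duration)
    return s_final + (s_init - s_final) * (1.0 - t / duration) ** 3

assert sparsity_at(0) == 0.0                  # no weights pruned at the start
assert abs(sparsity_at(1000) - 0.9) < 1e-12   # 90% sparsity by the end
```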
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- ForceNet: A Graph Neural Network for Large-Scale Quantum Calculations [86.41674945012369]
We develop a scalable and expressive Graph Neural Network model, ForceNet, to approximate atomic forces.
Our proposed ForceNet is able to predict atomic forces more accurately than state-of-the-art physics-based GNNs.
arXiv Detail & Related papers (2021-03-02T03:09:06Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.