Efficient Learning of Mesh-Based Physical Simulation with BSMS-GNN
- URL: http://arxiv.org/abs/2210.02573v4
- Date: Mon, 19 Jun 2023 02:09:41 GMT
- Title: Efficient Learning of Mesh-Based Physical Simulation with BSMS-GNN
- Authors: Yadi Cao, Menglei Chai, Minchen Li, Chenfanfu Jiang
- Abstract summary: Bi-stride pools nodes on every other frontier of a breadth-first search.
A one-MP scheme per level and non-parametrized pooling, resembling U-Nets, significantly reduce computational costs.
- Score: 36.73790892258642
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning physical simulation on large-scale meshes with flat Graph Neural Networks (GNNs) and stacked Message Passings (MPs) is challenging due to the scaling complexity w.r.t. the number of nodes and due to over-smoothing. There has been growing interest in the community in introducing multi-scale structures to GNNs for physical simulation. However, current state-of-the-art methods are limited by their reliance on labor-intensive drawing of coarser meshes or on building coarser levels from spatial proximity, which can introduce wrong edges across geometry boundaries. Inspired by bipartite graph determination, we propose a novel pooling strategy, bi-stride, to tackle these limitations. Bi-stride pools nodes on every other frontier of a breadth-first search (BFS), without manual drawing of coarser meshes and without the wrong edges that spatial proximity introduces. Additionally, it enables a one-MP scheme per level and non-parametrized pooling and unpooling by interpolation, resembling U-Nets, which significantly reduces computational costs. Experiments show that the proposed framework, BSMS-GNN, significantly outperforms existing methods in both accuracy and computational efficiency in representative physical simulations.
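To make the pooling rule concrete, the sketch below selects every other BFS frontier as the retained coarse node set. It is a minimal illustration of the idea in the abstract, not the authors' implementation; the adjacency-list input, the single seed, and the name `bi_stride_pool` are assumptions.

```python
from collections import deque

def bi_stride_pool(adj, seed=0):
    """Return the node set kept at the next-coarser level by pooling
    every other BFS frontier (a sketch of the bi-stride idea).

    adj: dict mapping node -> iterable of neighbor nodes.
    """
    depth = {seed: 0}
    queue = deque([seed])
    while queue:                      # plain BFS over the mesh graph
        u = queue.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    # keep nodes on even-depth frontiers, i.e. every other frontier
    return {u for u, d in depth.items() if d % 2 == 0}

# toy mesh: a path graph 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(sorted(bi_stride_pool(adj)))   # [0, 2, 4] survive to the coarser level
```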
Related papers
- DeltaGNN: Graph Neural Network with Information Flow Control [5.563171090433323]
Graph Neural Networks (GNNs) are designed to process graph-structured data through neighborhood aggregations in the message passing process.
Message-passing enables GNNs to understand short-range spatial interactions, but also causes them to suffer from over-smoothing and over-squashing.
We propose a mechanism called information flow control to address over-smoothing and over-squashing with linear computational overhead.
We benchmark our model across 10 real-world datasets, including graphs with varying sizes, topologies, densities, and homophilic ratios, showing superior performance.
arXiv Detail & Related papers (2025-01-10T14:34:20Z)
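Since both DeltaGNN above and the main paper revolve around message passing and its over-smoothing failure mode, here is a minimal sketch of one mean-aggregation round; iterating it makes all node features converge, which is over-smoothing in miniature. This is a generic illustration, not DeltaGNN's flow-control mechanism.

```python
import numpy as np

def mean_aggregate(x, adj):
    """One generic message-passing round: each node's feature becomes
    the mean of its own and its neighbors' features."""
    out = np.empty_like(x)
    for u, nbrs in adj.items():
        out[u] = x[[u, *nbrs]].mean(axis=0)
    return out

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = np.eye(4)                        # one-hot node features
for _ in range(10):                  # many rounds -> features converge
    x = mean_aggregate(x, adj)       # (over-smoothing in miniature)
print(x.round(2))
```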
- X-MeshGraphNet: Scalable Multi-Scale Graph Neural Networks for Physics Simulation [3.8363709845608365]
We introduce X-MeshGraphNet, a scalable, multi-scale extension of MeshGraphNet.
X-MeshGraphNet overcomes the scalability bottleneck by partitioning large graphs and incorporating halo regions.
Our experiments demonstrate that X-MeshGraphNet maintains the predictive accuracy of full-graph GNNs.
arXiv Detail & Related papers (2024-11-26T07:10:05Z)
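The halo regions mentioned for X-MeshGraphNet can be illustrated with a small sketch: the k-hop neighborhood of a partition that lies outside it, which is what lets message passing inside a partition see correct neighborhoods at the boundary. The adjacency-list format and the function `halo` are assumptions, not the paper's actual partitioner.

```python
def halo(adj, part, hops=1):
    """Nodes outside `part` within `hops` hops of it; including them as a
    read-only halo gives boundary nodes their full neighborhoods."""
    inside = set(part)
    frontier = set(part)
    halo_nodes = set()
    for _ in range(hops):
        frontier = {v for u in frontier for v in adj[u]} - inside - halo_nodes
        halo_nodes |= frontier
    return halo_nodes

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(halo(adj, {0, 1}))   # {2}: the one-hop halo of partition {0, 1}
```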
- Mesh-based Super-Resolution of Fluid Flows with Multiscale Graph Neural Networks [0.0]
A graph neural network (GNN) approach is introduced in this work which enables mesh-based three-dimensional super-resolution of fluid flows.
In this framework, the GNN is designed to operate not on the full mesh-based field at once, but on localized meshes of elements (or cells) directly.
arXiv Detail & Related papers (2024-09-12T05:52:19Z)
- LightDiC: A Simple yet Effective Approach for Large-scale Digraph Representation Learning [42.72417353512392]
We propose LightDiC, a scalable variant of the digraph convolution based on the magnetic Laplacian.
LightDiC is the first DiGNN to provide satisfactory results in the most representative large-scale database.
arXiv Detail & Related papers (2024-01-22T09:09:10Z)
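For reference, the magnetic Laplacian that LightDiC builds on is commonly defined as follows (this is the standard form from the literature, with charge parameter q; LightDiC's exact notation may differ):

```latex
% standard (unnormalized) magnetic Laplacian of a digraph with adjacency A
A_s = \tfrac{1}{2}\left(A + A^{\top}\right), \qquad
\Theta^{(q)}_{uv} = 2\pi q \left(A_{uv} - A_{vu}\right),
H^{(q)} = A_s \odot \exp\!\left(i\,\Theta^{(q)}\right), \qquad
L^{(q)} = D_s - H^{(q)}
```

where D_s is the diagonal degree matrix of A_s and \odot is the elementwise product. Because L^{(q)} is Hermitian, its spectrum is real and non-negative, which is what makes spectral convolution on digraphs well defined.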
- Multicoated and Folded Graph Neural Networks with Strong Lottery Tickets [3.0894823679470087]
This paper introduces the Multi-Stage Folding and Unshared Masks methods to expand the search space in terms of both architecture and parameters.
By achieving high sparsity, competitive performance, and high memory efficiency with up to 98.7% reduction, it demonstrates suitability for energy-efficient graph processing.
arXiv Detail & Related papers (2023-12-06T02:16:44Z)
- Scientific Computing Algorithms to Learn Enhanced Scalable Surrogates for Mesh Physics [6.360914973656273]
MeshGraphNets (MGN) is a subclass of GNNs for mesh-based physics modeling.
We train MGN on meshes with millions of nodes to generate computational fluid dynamics simulations.
This work presents a practical path to scaling MGN for real-world applications.
arXiv Detail & Related papers (2023-04-01T15:42:18Z)
- MultiScale MeshGraphNets [65.26373813797409]
We propose two complementary approaches to improve the framework from MeshGraphNets.
First, we demonstrate that it is possible to learn accurate surrogate dynamics of a high-resolution system on a much coarser mesh.
Second, we introduce a hierarchical approach (MultiScale MeshGraphNets) which passes messages on two different resolutions.
arXiv Detail & Related papers (2022-10-02T20:16:20Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch [75.69506249886622]
Sparsity in Deep Neural Networks (DNNs) has been widely studied to compress and accelerate the models on resource-constrained environments.
In this paper, we are the first to study training from scratch an N:M fine-grained structured sparse network.
arXiv Detail & Related papers (2021-02-08T05:55:47Z)
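N:M fine-grained structured sparsity keeps at most N nonzero weights in every group of M consecutive weights (e.g. the 2:4 pattern accelerated by recent NVIDIA GPUs). The sketch below builds such a mask by magnitude; it is a generic illustration, not the paper's from-scratch training method.

```python
import numpy as np

def nm_mask(w, n=2, m=4):
    """Keep the n largest-magnitude weights in each group of m
    consecutive weights (N:M fine-grained structured sparsity)."""
    flat = w.reshape(-1, m)                          # groups of m weights
    idx = np.argsort(np.abs(flat), axis=1)[:, -n:]   # top-n per group
    mask = np.zeros_like(flat)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    return mask.reshape(w.shape)

w = np.random.randn(4, 8)
mask = nm_mask(w)         # 2:4 pattern: exactly 2 of every 4 survive
print((mask.reshape(-1, 4).sum(axis=1) == 2).all())  # True
```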
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
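Binarizing a network layer usually means replacing its real-valued weights with sign(w) plus a scaling factor (XNOR-Net style); the paper compares several such strategies, and the sketch below shows only the simplest per-tensor variant, as an assumed illustration.

```python
import numpy as np

def binarize(w):
    """Sign binarization with a mean-magnitude scale: w ~ alpha * sign(w)."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

w = np.random.randn(3, 3)
print(np.abs(w - binarize(w)).mean())   # quantization error of 1-bit weights
```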
- Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
arXiv Detail & Related papers (2020-11-06T12:17:01Z)
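Read as global spatial pooling, the "squeeze" step above collapses an input feature map into a channel-wise vector; the following is a minimal sketch under that reading (the subsequent reasoning steps of the paper are omitted).

```python
import numpy as np

def squeeze(feature_map):
    """Squeeze a (C, H, W) feature map into a channel-wise global
    vector by averaging over the spatial dimensions."""
    return feature_map.mean(axis=(1, 2))             # shape: (C,)

x = np.random.randn(64, 16, 16)
print(squeeze(x).shape)                              # (64,)
```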