MultiScale MeshGraphNets
- URL: http://arxiv.org/abs/2210.00612v1
- Date: Sun, 2 Oct 2022 20:16:20 GMT
- Title: MultiScale MeshGraphNets
- Authors: Meire Fortunato, Tobias Pfaff, Peter Wirnsberger, Alexander Pritzel,
Peter Battaglia
- Abstract summary: We propose two complementary approaches to improve the framework from MeshGraphNets.
First, we demonstrate that it is possible to learn accurate surrogate dynamics of a high-resolution system on a much coarser mesh.
Second, we introduce a hierarchical approach (MultiScale MeshGraphNets) which passes messages on two different resolutions.
- Score: 65.26373813797409
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there has been a growing interest in using machine learning
to overcome the high cost of numerical simulation, with some learned models
achieving impressive speed-ups over classical solvers whilst maintaining
accuracy. However, these methods are usually tested at low-resolution settings,
and it remains to be seen whether they can scale to the costly high-resolution
simulations that we ultimately want to tackle.
In this work, we propose two complementary approaches to improve the
framework from MeshGraphNets, which demonstrated accurate predictions in a
broad range of physical systems. MeshGraphNets relies on a message passing
graph neural network to propagate information, and this structure becomes a
limiting factor for high-resolution simulations, as equally distant points in
space become further apart in graph space. First, we demonstrate that it is
possible to learn accurate surrogate dynamics of a high-resolution system on a
much coarser mesh, both removing the message passing bottleneck and improving
performance; and second, we introduce a hierarchical approach (MultiScale
MeshGraphNets) which passes messages on two different resolutions (fine and
coarse), significantly improving the accuracy of MeshGraphNets while requiring
less computational resources.
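The two-resolution scheme described in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch on a toy 1-D mesh with unlearned mean-aggregation updates; the node counts, the coarsening rule, and the `message_pass` helper are assumptions for illustration, not the authors' learned architecture (which uses learned MLP message functions and dedicated fine-coarse edges).

```python
# Hypothetical two-resolution (fine + coarse) message-passing sketch.
# All names and shapes here are illustrative assumptions.
import numpy as np

def message_pass(h, edges):
    """One round of mean-aggregation message passing over directed edges."""
    agg = np.zeros_like(h)
    count = np.zeros((h.shape[0], 1))
    for src, dst in edges:
        agg[dst] += h[src]
        count[dst] += 1
    return h + agg / np.maximum(count, 1)  # residual update

# Fine mesh: 8 nodes in a chain; coarse mesh: every other node.
n_fine = 8
fine_edges = [(i, i + 1) for i in range(n_fine - 1)]
fine_edges += [(j, i) for i, j in fine_edges]   # make edges bidirectional
coarse_ids = list(range(0, n_fine, 2))          # fine -> coarse node map
coarse_edges = [(i, i + 1) for i in range(len(coarse_ids) - 1)]
coarse_edges += [(j, i) for i, j in coarse_edges]

h_fine = np.random.default_rng(0).normal(size=(n_fine, 4))

# 1) Fine-level pass: captures local detail.
h_fine = message_pass(h_fine, fine_edges)
# 2) Downsample: keep the features of fine nodes that survive coarsening.
h_coarse = h_fine[coarse_ids]
# 3) Coarse-level pass: each step covers twice the spatial distance,
#    easing the "equally distant points become further apart" bottleneck.
h_coarse = message_pass(h_coarse, coarse_edges)
# 4) Upsample: add coarse features back onto their fine counterparts.
h_fine[coarse_ids] += h_coarse
```

The point of the sketch is the routing, not the update rule: a message needs seven fine-level hops to cross this chain, but only three coarse-level hops, which is why adding the coarse level shortens the effective graph distance between spatially distant points.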
Related papers
- X-MeshGraphNet: Scalable Multi-Scale Graph Neural Networks for Physics Simulation [0.0]
We introduce X-MeshGraphNet, a scalable, multi-scale extension of MeshGraphNet.
X-MeshGraphNet overcomes the scalability bottleneck by partitioning large graphs into subgraphs with halo regions.
Our experiments demonstrate that X-MeshGraphNet maintains the predictive accuracy of full-graph GNNs.
arXiv Detail & Related papers (2024-11-26T07:10:05Z)
- A Graph Neural Network Approach for Temporal Mesh Blending and Correspondence [18.466814193413487]
Red-Blue MPNN is a novel graph neural network that processes an augmented graph to estimate the correspondence.
We create a large-scale synthetic dataset consisting of temporal sequences of human meshes in motion.
arXiv Detail & Related papers (2023-06-23T11:47:30Z)
- MAgNET: A Graph U-Net Architecture for Mesh-Based Simulations [0.5185522256407782]
We present MAgNET, which extends the well-known convolutional neural networks to accommodate arbitrary graph-structured data.
We demonstrate the predictive capabilities of MAgNET in surrogate modeling for non-linear finite element simulations in the mechanics of solids.
arXiv Detail & Related papers (2022-11-01T19:23:45Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings [51.82434518719011]
GNNAutoScale (GAS) is a framework for scaling arbitrary message-passing GNNs to large graphs.
GAS prunes entire sub-trees of the computation graph by utilizing historical embeddings from prior training iterations.
GAS reaches state-of-the-art performance on large-scale graphs.
arXiv Detail & Related papers (2021-06-10T09:26:56Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Not Half Bad: Exploring Half-Precision in Graph Convolutional Neural Networks [8.460826851547294]
Efficient graph analysis using modern machine learning is receiving a growing level of attention.
Deep learning approaches often operate over the entire adjacency matrix.
It is desirable to identify efficient measures to reduce both run-time and memory requirements.
arXiv Detail & Related papers (2020-10-23T19:47:42Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into native solutions that directly conduct binarization, and optimized ones that use techniques such as minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
- Towards an Efficient and General Framework of Robust Training for Graph Neural Networks [96.93500886136532]
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations of graph structures lead them to make wrong predictions.
We propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs.
arXiv Detail & Related papers (2020-02-25T15:17:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.