Discovering Message Passing Hierarchies for Mesh-Based Physics Simulation
- URL: http://arxiv.org/abs/2410.03779v1
- Date: Thu, 3 Oct 2024 15:18:00 GMT
- Title: Discovering Message Passing Hierarchies for Mesh-Based Physics Simulation
- Authors: Huayu Deng, Xiangming Zhu, Yunbo Wang, Xiaokang Yang
- Abstract summary: We introduce DHMP, which learns Dynamic Hierarchies for Message Passing networks through a differentiable node selection method.
Our experiments demonstrate the effectiveness of DHMP, achieving a 22.7% average improvement over recent fixed-hierarchy message passing networks.
- Score: 61.89682310797067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks have emerged as a powerful tool for large-scale mesh-based physics simulation. Existing approaches primarily employ hierarchical, multi-scale message passing to capture long-range dependencies within the graph. However, these graph hierarchies are typically fixed and manually designed, so they do not adapt to the evolving dynamics of complex physical systems. In this paper, we introduce a novel neural network named DHMP, which learns Dynamic Hierarchies for Message Passing networks through a differentiable node selection method. Its key component is an anisotropic message passing mechanism that operates on both intra-level and inter-level interactions. Unlike existing methods, it first supports directionally non-uniform aggregation of dynamic features between adjacent nodes within each graph hierarchy. Second, it determines node selection probabilities for the next hierarchy according to the physical context, creating more flexible message shortcuts for learning remote node relations. Our experiments demonstrate the effectiveness of DHMP, achieving a 22.7% average improvement over recent fixed-hierarchy message passing networks across five classic physics simulation datasets.
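The paper's code is not reproduced here, but the differentiable node selection the abstract describes can be illustrated with a Gumbel-softmax relaxation, a standard device for backpropagating through discrete choices. The sketch below is an assumption about how such a selector might look; the class name, dimensions, and the use of Gumbel-softmax itself are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeSelector(nn.Module):
    """Hypothetical sketch: score each node from its features, then make a
    differentiable keep/drop decision per node via straight-through
    Gumbel-softmax (discrete forward pass, soft backward pass)."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 2)  # logits for [drop, keep]

    def forward(self, node_feats: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        logits = self.scorer(node_feats)                   # (N, 2)
        sample = F.gumbel_softmax(logits, tau=tau, hard=True)
        return sample[:, 1]                                # (N,) keep mask in {0, 1}

# Usage: pick the nodes that survive into the next (coarser) hierarchy level.
feats = torch.randn(100, 32)
keep = NodeSelector(32)(feats)
coarse_ids = keep.nonzero(as_tuple=True)[0]
```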
Related papers
- Online Learning Of Expanding Graphs [14.952056744888916]
This paper addresses the problem of online network inference for expanding graphs from a stream of signals.
We introduce a strategy that enables different types of updates for nodes that have just joined the network and for previously existing nodes.
arXiv Detail & Related papers (2024-09-13T09:20:42Z)
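The abstract above only states that newly joined and pre-existing nodes receive different update types; a minimal, purely illustrative reading (the function name, neighbourhood-mean target, and step sizes are all assumptions) might look like:

```python
import torch

def update_on_node_arrival(emb, new_node_feat, new_edges,
                           lr_new=0.5, lr_old=0.05):
    """Illustrative only: the newcomer gets a large-step update toward its
    neighbourhood mean, while existing neighbours get a small nudge."""
    emb = torch.cat([emb, new_node_feat.unsqueeze(0)], dim=0)
    new_id = emb.shape[0] - 1
    neigh = torch.tensor(new_edges)              # neighbours of the new node
    emb[new_id] += lr_new * (emb[neigh].mean(dim=0) - emb[new_id])
    emb[neigh] += lr_old * (emb[new_id] - emb[neigh])
    return emb

emb = update_on_node_arrival(torch.randn(5, 8), torch.randn(8), [0, 2])
```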
- Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer [19.042551780033765]
We present Hierarchical Contact Mesh Transformer (HCMT), which uses hierarchical mesh structures and can learn long-range dependencies.
HCMT enables long-range interactions, and the hierarchical mesh structure quickly propagates collision effects to faraway positions.
Our results show that HCMT provides significant performance improvements over existing methods.
arXiv Detail & Related papers (2023-12-19T05:30:08Z)
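As a rough illustration of how a hierarchical mesh structure such as HCMT's can propagate effects to faraway positions quickly, the V-cycle-style sketch below restricts features to coarser meshes, mixes them there, and prolongates back. The pooling matrices and per-level operators are hypothetical stand-ins, not HCMT's actual blocks.

```python
import torch

def hierarchical_propagate(x, pools, coarse_ops):
    """Restrict node features to coarser meshes, mix them there (cheaply
    covering long ranges), then prolongate back with skip connections.
    `pools` are hypothetical (coarse x fine) assignment matrices and
    `coarse_ops` are arbitrary feature-mixing blocks, one per level."""
    skips = []
    for P, op in zip(pools, coarse_ops):
        skips.append(x)
        x = op(P @ x)                # restrict, then mix at the coarse level
    for P, r in zip(reversed(pools), reversed(skips)):
        x = P.t() @ x + r            # prolongate back and add the skip
    return x

x = torch.randn(64, 16)              # 64 fine-mesh nodes, 16 features each
out = hierarchical_propagate(x, [torch.rand(16, 64)], [torch.nn.Linear(16, 16)])
```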
- Accelerating Dynamic Network Embedding with Billions of Parameter Updates to Milliseconds [27.98359191399847]
We propose a novel dynamic network embedding paradigm that rotates and scales the axes of the embedding space instead of updating embeddings node by node.
Specifically, we propose the Dynamic Adjacency Matrix Factorization (DAMF) algorithm, which achieves an efficient and accurate dynamic network embedding.
Experiments on node classification, link prediction, and graph reconstruction on different-sized dynamic graphs suggest that DAMF advances dynamic network embedding.
arXiv Detail & Related papers (2023-06-15T09:02:17Z)
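DAMF's "rotate and scale the axes" idea can be pictured by keeping a single global transform next to the embedding table, so one update touches a d x d matrix instead of all n rows. This sketch is a simplified reading of the abstract, not the published algorithm.

```python
import torch

class LazyEmbedding:
    """Store raw embeddings E plus a global transform M. An axis-level
    update touches only the (d, d) matrix M, not the (n, d) table, so it
    costs O(d^2) rather than O(n d)."""
    def __init__(self, emb: torch.Tensor):
        self.E = emb                          # (n, d) raw embeddings
        self.M = torch.eye(emb.shape[1])      # (d, d) accumulated transform

    def global_update(self, transform: torch.Tensor):
        self.M = self.M @ transform           # compose transforms, O(d^2)

    def embedding(self, i: int) -> torch.Tensor:
        return self.E[i] @ self.M             # materialise on demand
```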
- WLD-Reg: A Data-dependent Within-layer Diversity Regularizer [98.78384185493624]
Neural networks are composed of multiple layers arranged in a hierarchical structure and trained jointly with gradient-based optimization.
We propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage the diversity of the activations within the same layer.
We present an extensive empirical study confirming that the proposed approach enhances the performance of several state-of-the-art neural network models in multiple tasks.
arXiv Detail & Related papers (2023-01-03T20:57:22Z)
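One common way to encourage within-layer diversity is to penalise pairwise similarity between the activation patterns of units in the same layer. The sketch below shows that generic recipe; WLD-Reg's exact data-dependent form may differ.

```python
import torch
import torch.nn.functional as F

def within_layer_diversity_penalty(acts: torch.Tensor) -> torch.Tensor:
    """Penalise cosine similarity between the activation patterns of
    different units in one layer. `acts` has shape (batch, units)."""
    z = F.normalize(acts, dim=0)              # unit-norm activations per column
    gram = z.t() @ z                          # (units, units) similarities
    off_diag = gram - torch.diag(torch.diag(gram))
    return off_diag.pow(2).sum() / (gram.numel() - gram.shape[0])

# Added to the task loss with a small weight, e.g.:
# loss = task_loss + 1e-3 * within_layer_diversity_penalty(hidden_acts)
```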
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
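The complexity reduction can be illustrated by letting each node aggregate from a small sampled set of sources rather than all N nodes. In the sketch below the sampling is uniform purely for illustration; the paper samples and weights nodes adaptively.

```python
import torch

def sampled_message_passing(x: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Each node aggregates from k sampled source nodes (O(N k) messages)
    instead of all N nodes (O(N^2)); content-based weights then combine
    the sampled messages."""
    n, d = x.shape
    idx = torch.randint(0, n, (n, k))         # k sampled sources per node
    msgs = x[idx]                             # (n, k, d)
    scores = (msgs * x.unsqueeze(1)).sum(-1) / d ** 0.5
    weights = torch.softmax(scores, dim=1)    # (n, k) per-message weights
    return (weights.unsqueeze(-1) * msgs).sum(dim=1)
```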
- Equivariant Graph Hierarchy-Based Neural Networks [53.60804845045526]
We propose Equivariant Hierarchy-based Graph Networks (EGHNs).
EGHNs consist of three key components: generalized Equivariant Matrix Message Passing (EMMP), E-Pool, and E-UpPool.
Considerable experimental evaluations verify the effectiveness of our EGHN on several applications including multi-object dynamics simulation, motion capture, and protein dynamics modeling.
arXiv Detail & Related papers (2022-02-22T03:11:47Z)
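The summary does not spell out EMMP, so the sketch below shows a minimal EGNN-style E(n)-equivariant message passing step as a stand-in for the general idea; it is not EGHN's actual component.

```python
import torch
import torch.nn as nn

class EquivariantMP(nn.Module):
    """Messages depend only on invariants (features, squared distances),
    and positions move along relative direction vectors, which keeps the
    layer E(n)-equivariant."""
    def __init__(self, d: int):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * d + 1, d), nn.SiLU())
        self.coord = nn.Linear(d, 1, bias=False)

    def forward(self, h, pos, edges):
        src, dst = edges                               # index tensors, shape (E,)
        rel = pos[src] - pos[dst]                      # (E, 3) relative vectors
        dist2 = (rel ** 2).sum(-1, keepdim=True)       # invariant edge input
        m = self.msg(torch.cat([h[src], h[dst], dist2], dim=-1))
        h = h + torch.zeros_like(h).index_add_(0, dst, m)
        pos = pos + torch.zeros_like(pos).index_add_(0, dst, rel * self.coord(m))
        return h, pos
```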
- Hierarchical Graph Neural Networks [0.0]
This paper aims to connect the dots between traditional Neural Network and Graph Neural Network architectures.
A Hierarchical Graph Neural Network architecture is proposed, supplementing the original input network layer with the hierarchy of auxiliary network layers.
It enables simultaneous learning of individual node features and aggregated network features at variable resolutions, using the latter to improve the convergence and stability of node feature learning.
arXiv Detail & Related papers (2021-05-07T16:47:18Z)
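A minimal sketch of that idea: node features are learned alongside aggregated features at a coarser resolution, with the coarse context broadcast back to each node. The cluster assignment and MLP here are hypothetical.

```python
import torch
import torch.nn as nn

class HierarchicalLayer(nn.Module):
    """Node features are refined together with an auxiliary coarse layer
    (one mean vector per cluster); the coarse context is broadcast back
    to each node to stabilise node-level learning."""
    def __init__(self, d: int):
        super().__init__()
        self.node_mlp = nn.Linear(2 * d, d)

    def forward(self, x: torch.Tensor, cluster: torch.Tensor) -> torch.Tensor:
        k = int(cluster.max()) + 1
        sums = torch.zeros(k, x.shape[1]).index_add_(0, cluster, x)
        counts = torch.zeros(k).index_add_(0, cluster, torch.ones(len(x)))
        coarse = sums / counts.clamp(min=1).unsqueeze(1)   # cluster means
        return torch.relu(self.node_mlp(torch.cat([x, coarse[cluster]], dim=1)))
```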
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using a fixed network path for every input, DG-Net aggregates features dynamically at each node, giving the network greater representational capacity.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
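Instance-aware aggregation of the kind DG-Net describes can be sketched with input-dependent gates over a node's incoming paths, so each sample effectively selects its own path. The gating layer below is an assumed, simplified form.

```python
import torch
import torch.nn as nn

class DynamicAggregate(nn.Module):
    """A DAG node weights its incoming paths with gates predicted from the
    inputs themselves, so each sample effectively uses its own path."""
    def __init__(self, d: int, n_in: int):
        super().__init__()
        self.gate = nn.Linear(n_in * d, n_in)

    def forward(self, inputs):               # list of (B, d) predecessor outputs
        stacked = torch.stack(inputs, dim=1)                      # (B, n_in, d)
        g = torch.softmax(self.gate(stacked.flatten(1)), dim=1)   # (B, n_in)
        return (g.unsqueeze(-1) * stacked).sum(dim=1)             # gated sum
```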
- Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
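The side-branch pattern described above is closely related to deep supervision: auxiliary heads are forked from intermediate layers and trained alongside the main head, optionally with a term pulling their predictions into agreement. The sketch below assumes a KL-based mimicking term, which may differ from the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SideBranchNet(nn.Module):
    """Auxiliary head forked from an intermediate layer, trained alongside
    the main head (deep-supervision style)."""
    def __init__(self, d: int = 32, classes: int = 10):
        super().__init__()
        self.stage1, self.stage2 = nn.Linear(d, d), nn.Linear(d, d)
        self.head = nn.Linear(d, classes)
        self.side_head = nn.Linear(d, classes)   # branch forked after stage1

    def forward(self, x):
        h1 = torch.relu(self.stage1(x))
        h2 = torch.relu(self.stage2(h1))
        return self.head(h2), self.side_head(h1)

def mimicking_loss(main_logits, side_logits, target):
    ce = F.cross_entropy(main_logits, target) + F.cross_entropy(side_logits, target)
    kl = F.kl_div(F.log_softmax(side_logits, dim=1),
                  F.softmax(main_logits.detach(), dim=1), reduction="batchmean")
    return ce + kl
```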