MeshGraphNet-Transformer: Scalable Mesh-based Learned Simulation for Solid Mechanics
- URL: http://arxiv.org/abs/2601.23177v3
- Date: Thu, 05 Feb 2026 10:55:13 GMT
- Title: MeshGraphNet-Transformer: Scalable Mesh-based Learned Simulation for Solid Mechanics
- Authors: Mikel M. Iparraguirre, Iciar Alfaro, David Gonzalez, Elias Cueto
- Abstract summary: We present MeshGraphNet-Transformer (MGN-T), a novel architecture that combines the global modeling capabilities of Transformers with the geometric inductive bias of MeshGraphNets. MGN-T overcomes a key limitation of standard MGN: the inefficient long-range information propagation caused by iterative message passing on large, high-resolution meshes. We demonstrate that MGN-T successfully handles industrial-scale meshes for impact dynamics, a setting in which standard MGN fails due to message-passing under-reaching.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present MeshGraphNet-Transformer (MGN-T), a novel architecture that combines the global modeling capabilities of Transformers with the geometric inductive bias of MeshGraphNets, while preserving a mesh-based graph representation. MGN-T overcomes a key limitation of standard MGN: the inefficient long-range information propagation caused by iterative message passing on large, high-resolution meshes. A physics-attention Transformer serves as a global processor, updating all nodal states simultaneously while explicitly retaining node and edge attributes. By directly capturing long-range physical interactions, MGN-T eliminates the need for deep message-passing stacks or hierarchical, coarsened meshes, enabling efficient learning on high-resolution meshes with varying geometries, topologies, and boundary conditions at an industrial scale. We demonstrate that MGN-T successfully handles industrial-scale meshes for impact dynamics, a setting in which standard MGN fails due to message-passing under-reaching. The method accurately models self-contact, plasticity, and multivariate outputs, including internal, phenomenological plastic variables. Moreover, MGN-T outperforms state-of-the-art approaches on classical benchmarks, achieving higher accuracy while maintaining practical efficiency, using only a fraction of the parameters required by competing baselines.
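The abstract describes a two-stage processor: local, edge-aware message passing that preserves node and edge attributes, followed by a Transformer that attends over all mesh nodes at once. Below is a minimal PyTorch sketch of that idea; the module names, feature dimensions, and wiring are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the MGN-T idea described in the abstract (not the authors'
# code): a local, edge-aware message-passing step followed by a global
# attention processor that updates all nodal states simultaneously.
import torch
import torch.nn as nn

class LocalMessagePassing(nn.Module):
    """One MGN-style step: update edges from endpoint nodes, then nodes from aggregated edges."""
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, e, senders, receivers):
        # h: (N, dim) node features; e: (E, dim) edge features; senders/receivers: (E,) indices
        e = e + self.edge_mlp(torch.cat([h[senders], h[receivers], e], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, receivers, e)  # sum incoming edge messages
        h = h + self.node_mlp(torch.cat([h, agg], dim=-1))
        return h, e

class GlobalAttentionProcessor(nn.Module):
    """Transformer block over all mesh nodes: captures long-range interactions in one update."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, h):
        x = self.norm1(h.unsqueeze(0))            # treat the node set as one sequence
        h = h + self.attn(x, x, x)[0].squeeze(0)  # all-pairs interaction in a single step
        return h + self.ffn(self.norm2(h))
```

In a full model, a few local steps would encode mesh-local geometry before the global block propagates information across the entire mesh in one update, which is what removes the need for deep message-passing stacks or coarsened mesh hierarchies.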
Related papers
- Plain Transformers are Surprisingly Powerful Link Predictors [57.01966734467712]
Link prediction is a core challenge in graph machine learning, demanding models that capture rich and complex topological dependencies. While Graph Neural Networks (GNNs) are the standard solution, state-of-the-art pipelines often rely on explicit structural features or memory-intensive node embeddings. We present PENCIL, an encoder-only plain Transformer that replaces hand-crafted priors with attention over sampled local subgraphs.
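A hedged sketch of the summarized mechanism: score a candidate link by running an encoder-only Transformer over a subgraph sampled around the two endpoints. The sampling strategy and scoring head here are assumptions.

```python
# Sketch of attention over a sampled local subgraph for link prediction
# (in the spirit of the PENCIL summary above; details are assumptions).
import torch
import torch.nn as nn

class SubgraphLinkScorer(nn.Module):
    def __init__(self, dim, layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, node_feats, u_idx, v_idx):
        # node_feats: (num_sampled_nodes, dim) features of a subgraph sampled
        # around the candidate pair (u, v); u_idx/v_idx index into that sample.
        z = self.encoder(node_feats.unsqueeze(0)).squeeze(0)
        return self.score(torch.cat([z[u_idx], z[v_idx]], dim=-1))  # link logit
```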
arXiv Detail & Related papers (2026-02-02T02:45:52Z)
- GEM+: Scalable State-of-the-Art Private Synthetic Data with Generator Networks [9.432150710329607]
We introduce GEM+, which integrates AIM's adaptive measurement framework with GEM's scalable generator network. Our experiments show that GEM+ outperforms AIM in both utility and scalability, delivering state-of-the-art results.
arXiv Detail & Related papers (2025-11-12T19:18:43Z)
- GILT: An LLM-Free, Tuning-Free Graph Foundational Model for In-Context Learning [50.40400074353263]
Graph Neural Networks (GNNs) are powerful tools for processing relational data but often struggle to generalize to unseen graphs. We introduce the Graph In-context Learning Transformer (GILT), a framework built on an LLM-free and tuning-free architecture.
arXiv Detail & Related papers (2025-10-06T08:09:15Z)
- Physics-Informed Graph Neural Networks for Transverse Momentum Estimation in CMS Trigger Systems [0.0]
Real-time particle transverse momentum ($p_T$) estimation in high-energy physics demands efficient algorithms under strict hardware constraints. We propose a physics-informed graph neural network (GNN) framework that systematically encodes detector geometry and physical observables. Our co-design methodology yields superior accuracy-efficiency trade-offs compared to existing baselines.
arXiv Detail & Related papers (2025-07-25T12:19:57Z)
- AMBER: Adaptive Mesh Generation by Iterative Mesh Resolution Prediction [48.72179728638418]
We propose Adaptive Meshing By Expert Reconstruction (AMBER), a supervised learning approach to mesh adaptation. AMBER iteratively predicts the sizing field and uses this prediction to produce a new intermediate mesh with an out-of-the-box mesh generator. We evaluate AMBER on 2D and 3D geometries, on datasets including classical physics problems, mechanical components, and real-world industrial designs with human-expert meshes.
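The iterative predict-then-remesh loop summarized above can be sketched as follows; `sizing_model` and `mesh_generator` are placeholders standing in for the learned predictor and an off-the-shelf generator, not AMBER's actual interfaces.

```python
# Sketch of an AMBER-style loop: alternate between predicting a sizing field
# and regenerating the mesh with an external generator (placeholders throughout).
def adaptive_meshing(geometry, sizing_model, mesh_generator, n_iters=5):
    """Iteratively refine a mesh: predict target element sizes, then remesh."""
    mesh = mesh_generator(geometry, sizing_field=None)      # uniform initial mesh
    for _ in range(n_iters):
        sizing_field = sizing_model(mesh)                   # target element size per node
        mesh = mesh_generator(geometry, sizing_field=sizing_field)
    return mesh
```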
arXiv Detail & Related papers (2025-05-29T17:10:44Z)
- Scalable Graph Generative Modeling via Substructure Sequences [50.32639806800683]
We introduce Generative Graph Pattern Machine (G$^2$PM), a generative Transformer pre-training framework for graphs. G$^2$PM represents graph instances (nodes, edges, or entire graphs) as sequences of substructures. It employs generative pre-training over the sequences to learn generalizable and transferable representations.
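One way to read "sequences of substructures" is sketched below: short random walks around a node are hashed into structure-based tokens that a generative Transformer could be pre-trained on. The walk-based tokenization is an assumption, not necessarily the paper's scheme.

```python
# Sketch: encode a node's neighborhood as a sequence of substructure tokens.
import random

def substructure_sequence(adj, start, walk_len=4, n_walks=8, vocab=4096):
    """Tokenize a node by hashing the degree patterns of short random walks."""
    tokens = []
    for _ in range(n_walks):
        walk, node = [start], start
        for _ in range(walk_len):
            node = random.choice(adj[node]) if adj[node] else node
            walk.append(node)
        # hash the degree pattern so tokens reflect structure, not node identity
        pattern = tuple(len(adj[n]) for n in walk)
        tokens.append(hash(pattern) % vocab)
    return tokens
```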
arXiv Detail & Related papers (2025-05-22T02:16:34Z)
- Extended Short- and Long-Range Mesh Learning for Fast and Generalized Garment Simulation [15.769706073808031]
3D garment simulation is a critical component for producing cloth-based graphics. Recent advancements in graph neural networks (GNNs) offer a promising approach for efficient garment simulation. We devise a novel GNN-based mesh learning framework with two key components to extend the message-passing range with minimal overhead.
arXiv Detail & Related papers (2025-04-16T04:56:01Z)
- Instruction-Guided Autoregressive Neural Network Parameter Generation [49.800239140036496]
We propose IGPG, an autoregressive framework that unifies parameter synthesis across diverse tasks and architectures. By autoregressively generating tokens of neural network weights, IGPG ensures inter-layer coherence and enables efficient adaptation across models and datasets. Experiments on multiple datasets demonstrate that IGPG consolidates diverse pretrained models into a single, flexible generative framework.
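A minimal sketch of instruction-conditioned, autoregressive generation of weight tokens in this spirit; the discrete weight vocabulary, model sizes, and conditioning scheme are assumptions.

```python
# Sketch: autoregressively predict the next weight token, conditioned on an
# instruction embedding prepended to the sequence (assumed design, not IGPG's).
import torch
import torch.nn as nn

class WeightTokenGenerator(nn.Module):
    def __init__(self, vocab=1024, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, vocab)

    def forward(self, instruction_emb, weight_tokens):
        # instruction_emb: (1, dim); weight_tokens: (1, T) previously generated tokens
        x = torch.cat([instruction_emb.unsqueeze(1), self.embed(weight_tokens)], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))  # causal mask
        return self.head(self.decoder(x, mask=mask))  # next-token logits per position
```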
arXiv Detail & Related papers (2025-04-02T05:50:19Z)
- Scalable Message Passing Neural Networks: No Need for Attention in Large Graph Representation Learning [15.317501970096743]
We show that by integrating standard convolutional message passing into a Pre-Layer Normalization Transformer-style block instead of attention, we can produce high-performing deep message-passing-based Graph Neural Networks (GNNs).
Results are competitive with the state-of-the-art in large graph transductive learning, without requiring the otherwise computationally and memory-expensive attention mechanism.
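The stated recipe is concrete enough to sketch: a Pre-LN Transformer-style block whose token mixer is a normalized-adjacency multiply (standard convolutional message passing) rather than attention. Everything beyond that recipe is an assumption.

```python
# Sketch of a Pre-LN block with message passing in place of attention.
import torch
import torch.nn as nn

class MessagePassingBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.lin = nn.Linear(dim, dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, h, adj_norm):
        # adj_norm: sparse normalized adjacency (N, N); replaces self-attention
        h = h + self.lin(torch.sparse.mm(adj_norm, self.norm1(h)))
        return h + self.ffn(self.norm2(h))
```

The appeal of this design is that the sparse adjacency multiply costs O(edges) rather than the O(N^2) of dense attention, which is what makes it viable on large graphs.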
arXiv Detail & Related papers (2024-10-29T17:18:43Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on a Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
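A hedged sketch of distance-based attention weighting in this spirit, where attention logits are penalized by pairwise spatial distance between patches; the exact weighting used in the paper may differ.

```python
# Sketch: attention whose logits are biased toward spatially nearby patches.
import torch

def distance_weighted_attention(q, k, v, coords, tau=1.0):
    # q, k, v: (N, d) patch projections; coords: (N, 2) patch positions
    logits = q @ k.t() / q.size(-1) ** 0.5
    dist = torch.cdist(coords, coords)                     # pairwise distances
    weights = torch.softmax(logits - dist / tau, dim=-1)   # nearer patches weigh more
    return weights @ v
```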
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)