CORGI: GNNs with Convolutional Residual Global Interactions for Lagrangian Simulation
- URL: http://arxiv.org/abs/2511.22938v1
- Date: Fri, 28 Nov 2025 07:26:35 GMT
- Title: CORGI: GNNs with Convolutional Residual Global Interactions for Lagrangian Simulation
- Authors: Ethan Ji, Yuanzhou Chen, Arush Ramteke, Fang Sun, Tianrun Yu, Jai Parera, Wei Wang, Yizhou Sun
- Abstract summary: We introduce Convolutional Residual Global Interactions (CORGI), a hybrid architecture that augments any GNN-based solver with a lightweight Eulerian component for global context aggregation. When applied to a GNS backbone, CORGI achieves a 57% improvement in rollout accuracy with only 13% more inference time and 31% more training time.
- Score: 32.83908739007555
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Partial differential equations (PDEs) are central to dynamical systems modeling, particularly in hydrodynamics, where traditional solvers often struggle with nonlinearity and computational cost. Lagrangian neural surrogates such as GNS and SEGNN have emerged as strong alternatives by learning from particle-based simulations. However, these models typically operate with limited receptive fields, which limits their ability to capture the inherently global interactions in fluid flows. Motivated by this observation, we introduce Convolutional Residual Global Interactions (CORGI), a hybrid architecture that augments any GNN-based solver with a lightweight Eulerian component for global context aggregation. By projecting particle features onto a grid, applying convolutional updates, and mapping them back to the particle domain, CORGI captures long-range dependencies without significant overhead. When applied to a GNS backbone, CORGI achieves a 57% improvement in rollout accuracy with only 13% more inference time and 31% more training time. Compared to SEGNN, CORGI improves accuracy by 49% while reducing inference time by 48% and training time by 30%. Even under identical runtime constraints, CORGI outperforms GNS by 47% on average, highlighting its versatility and performance across varied compute budgets.
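The particle-to-grid-to-particle pipeline the abstract describes is concrete enough to sketch. The PyTorch block below is a minimal illustration, assuming a uniform 2D grid, nearest-cell scatter/gather, and a two-layer CNN; none of these choices are confirmed details of CORGI.

```python
import torch
import torch.nn as nn

class GlobalInteractionBlock(nn.Module):
    """Sketch of a CORGI-style global-interaction block (assumptions:
    uniform 2D grid, nearest-cell scatter/gather, two conv layers)."""

    def __init__(self, dim: int, grid_res: int = 32):
        super().__init__()
        self.grid_res = grid_res
        self.conv = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
        )

    def forward(self, pos: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        # pos: (N, 2) particle positions scaled to [0, 1); feat: (N, D)
        n, d = feat.shape
        r = self.grid_res
        cell = (pos.clamp(0.0, 1.0 - 1e-6) * r).long()  # per-particle cell
        idx = cell[:, 0] * r + cell[:, 1]               # flattened cell index
        grid = feat.new_zeros(r * r, d)
        count = feat.new_zeros(r * r, 1)
        grid.index_add_(0, idx, feat)                   # Lagrangian -> Eulerian
        count.index_add_(0, idx, feat.new_ones(n, 1))
        grid = grid / count.clamp(min=1.0)              # mean feature per cell
        g = grid.t().reshape(1, d, r, r)
        g = g + self.conv(g)                            # residual global update
        g = g.reshape(d, r * r).t()
        return feat + g[idx]                            # Eulerian -> Lagrangian
```

Because the grid resolution is fixed, the convolutional stage costs O(r^2) regardless of particle count, which is consistent with the abstract's claim that global context comes at little extra overhead.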
Related papers
- Resonant Sparse Geometry Networks [0.0]
We introduce Resonant Sparse Geometry Networks (RSGN), a brain-inspired architecture with self-organizing, sparse, hierarchical, input-dependent connectivity. RSGN embeds computational nodes in a learned hyperbolic space where connection strength decays with geodesic distance, achieving dynamic sparsity that adapts to each input.
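The geodesic-decay mechanism can be illustrated with a short sketch: Poincaré-ball distance between two node embeddings, with an assumed exponential decay gating the connection (the summary does not specify the actual decay law).

```python
import torch

def connection_strength(x: torch.Tensor, y: torch.Tensor, alpha: float = 1.0):
    """Sketch of RSGN-style geodesic gating: embeddings x, y live in the
    Poincare ball (norm < 1); connection strength decays with hyperbolic
    distance. The exp(-alpha * d) decay form is an assumption."""
    sq = lambda v: (v * v).sum(dim=-1)
    d = torch.acosh(1 + 2 * sq(x - y) / ((1 - sq(x)) * (1 - sq(y))))
    return torch.exp(-alpha * d)
```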
arXiv Detail & Related papers (2026-01-26T01:45:51Z)
- Improving Long-Range Interactions in Graph Neural Simulators via Hamiltonian Dynamics [71.53370807809296]
Recent Graph Neural Simulators (GNSs) accelerate simulations by learning dynamics on graph-structured data. We propose Information-preserving Graph Neural Simulators (IGNS), a graph-based neural simulator built on the principles of Hamiltonian dynamics. IGNS consistently outperforms state-of-the-art GNSs, achieving higher accuracy and stability on challenging and complex dynamical systems.
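To make the Hamiltonian-dynamics principle concrete, here is a minimal symplectic (leapfrog) step, where a learned model would play the role of the potential gradient. This illustrates the general principle only, not IGNS's actual update rule.

```python
def leapfrog_step(q, p, grad_potential, dt=0.01, mass=1.0):
    """One symplectic step for H(q, p) = |p|^2/(2m) + V(q); structure-
    preserving integration like this is what Hamiltonian neural
    simulators build on. grad_potential(q) -> dV/dq could be a GNN."""
    p = p - 0.5 * dt * grad_potential(q)   # half kick
    q = q + dt * p / mass                  # drift
    p = p - 0.5 * dt * grad_potential(q)   # half kick
    return q, p
```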
arXiv Detail & Related papers (2025-11-11T12:53:56Z)
- ReLACE: A Resource-Efficient Low-Latency Cortical Acceleration Engine [0.0]
We present a Cortical Neural Pool architecture featuring a CORDIC-based Hodgkin-Huxley (RCHH) neuron model. The FPGA implementation of the RCHH neuron shows a 24.5% LUT reduction and 35.2% higher speed. The design enables biologically accurate, low-resource spiking neural network implementations for resource-constrained edge AI applications.
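For context, CORDIC is a multiplier-free, shift-and-add scheme for evaluating nonlinear functions; the sketch below shows the classic rotation-mode iteration in plain Python (floats standing in for FPGA fixed-point arithmetic). The Hodgkin-Huxley integration built on top of it is not shown.

```python
import math

def cordic_rotate(x, y, angle, n_iter=16):
    """Rotation-mode CORDIC: rotates (x, y) by `angle` using only
    shift-and-add style updates, the primitive a CORDIC-based neuron
    model relies on for its nonlinearities."""
    gain = 1.0
    for i in range(n_iter):
        gain /= math.sqrt(1 + 2.0 ** (-2 * i))   # accumulated CORDIC gain
    z = angle
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x * gain, y * gain
```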
arXiv Detail & Related papers (2025-10-20T10:33:50Z)
- JEDI-linear: Fast and Efficient Graph Neural Networks for Jet Tagging on FPGAs [36.158374493924455]
Graph Neural Networks (GNNs) have shown exceptional performance for jet tagging at the CERN High-Luminosity Large Hadron Collider (HL-LHC). We propose JEDI-linear, a novel GNN architecture with linear computational complexity that eliminates explicit pairwise interactions. This is the first interaction-based GNN to achieve less than 60 ns latency, and it currently meets the requirements for use in the HL-LHC CMS Level-1 trigger system.
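A sketch of the general idea behind linear-complexity interaction models follows; the additive decomposition here is an assumption for illustration, not JEDI-linear's exact formulation.

```python
import torch
import torch.nn as nn

class LinearGlobalInteraction(nn.Module):
    """If a pairwise interaction sum_j g(x_i, x_j) decomposes as
    f(x_i) + h(x_j), the second term is a single shared sum, dropping
    the cost from O(N^2) to O(N). Assumed decomposition for
    illustration only."""

    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Linear(dim, dim)
        self.h = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, D) jet constituents; one O(N) pass replaces N^2 pairs
        return self.f(x) + self.h(x).sum(dim=0)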
arXiv Detail & Related papers (2025-08-21T11:40:49Z)
- Research on Low-Latency Inference and Training Efficiency Optimization for Graph Neural Network and Large Language Model-Based Recommendation Systems [4.633338944734091]
This study examines computational bottlenecks in hybrid Graph Neural Network (GNN) and Large Language Model (LLM)-based recommender systems. It recommends the use of FPGAs and LoRA for real-time deployment.
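LoRA itself is a standard technique; the block below shows the usual frozen-base-plus-low-rank-update pattern, not this paper's specific design.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Standard LoRA pattern: a frozen base projection plus a trainable
    low-rank update B @ A, keeping trainable parameters small enough
    for low-latency fine-tuning."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # freeze base weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())
```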
arXiv Detail & Related papers (2025-06-21T03:10:50Z)
- Efficient Mixed Precision Quantization in Graph Neural Networks [7.161966906570077]
Graph Neural Networks (GNNs) have become essential for handling large-scale graph applications. Mixed precision quantization emerges as a promising solution to enhance the efficiency of GNN architectures.
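The building block of mixed-precision schemes is per-tensor quantization at different bit widths; a generic uniform fake-quantization sketch follows (the paper's GNN-specific bit allocation is not reproduced).

```python
import torch

def fake_quantize(x: torch.Tensor, bits: int) -> torch.Tensor:
    """Uniform symmetric fake-quantisation: sensitive tensors might keep
    bits=8 while others drop to bits=4 in a mixed-precision scheme."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale
```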
arXiv Detail & Related papers (2025-05-14T13:11:39Z)
- RelGNN: Composite Message Passing for Relational Deep Learning [56.48834369525997]
We introduce RelGNN, a novel GNN framework specifically designed to leverage the unique structural characteristics of graphs built from relational databases. RelGNN is evaluated on 30 diverse real-world tasks from RelBench (Fey et al., 2024) and achieves state-of-the-art performance on the vast majority of tasks, with improvements of up to 25%.
arXiv Detail & Related papers (2025-02-10T18:58:40Z)
- Leveraging Frequency Domain Learning in 3D Vessel Segmentation [50.54833091336862]
In this study, we leverage Fourier domain learning as a substitute for multi-scale convolutional kernels in 3D hierarchical segmentation models.
We show that our novel network achieves remarkable Dice scores (84.37% on ASACA500 and 80.32% on ImageCAS) in tubular vessel segmentation tasks.
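The core substitution, convolving in the Fourier domain instead of with large spatial kernels, can be sketched as below; the shape of `weight` and the low-mode truncation are assumptions, not the paper's exact layer.

```python
import torch

def fourier_filter3d(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """Fourier-domain filtering as a stand-in for large 3D convolution
    kernels: transform, weight a block of low-frequency modes, transform
    back. `weight` is an assumed complex filter of shape (C, kd, kh, kw)."""
    xf = torch.fft.rfftn(x, dim=(-3, -2, -1))      # x: (B, C, D, H, W)
    kd, kh, kw = weight.shape[-3:]
    out = torch.zeros_like(xf)
    out[..., :kd, :kh, :kw] = xf[..., :kd, :kh, :kw] * weight
    return torch.fft.irfftn(out, s=x.shape[-3:], dim=(-3, -2, -1))
```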
arXiv Detail & Related papers (2024-01-11T19:07:58Z)
- Three-dimensional granular flow simulation using graph neural network-based learned simulator [2.153852088624324]
We use a graph neural network (GNN) to develop a simulator for granular flows.
The simulator reproduces the overall behaviors of column collapses with various aspect ratios.
GNS outperforms high-fidelity numerical simulators in speed by a factor of 300.
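The GNS family of simulators uses an autoregressive rollout: a GNN predicts per-particle acceleration, and positions are integrated step by step. The sketch below illustrates that loop; `model` and the single-step velocity state are simplifications, not the cited paper's exact setup.

```python
import torch

def rollout(model, pos0, n_steps, dt=0.0025):
    """Autoregressive GNS-style rollout with semi-implicit Euler
    integration of predicted accelerations."""
    traj, vel = [pos0], torch.zeros_like(pos0)
    for _ in range(n_steps):
        acc = model(traj[-1], vel)     # GNN: (positions, velocities) -> accelerations
        vel = vel + dt * acc
        traj.append(traj[-1] + dt * vel)
    return torch.stack(traj)           # (n_steps + 1, N, dim)
```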
arXiv Detail & Related papers (2023-11-13T15:54:09Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
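The random-projection ingredient is a Johnson-Lindenstrauss-style compression of pre-computed neighbor aggregations into a fixed width; a minimal sketch follows (the paper's full propagation scheme may differ).

```python
import torch

def random_project(agg_feats: torch.Tensor, out_dim: int, seed: int = 0):
    """Compress aggregated neighbor features to a fixed size with a
    random Gaussian matrix, so results stay regular-shaped regardless
    of how many relation types were aggregated."""
    gen = torch.Generator().manual_seed(seed)
    proj = torch.randn(agg_feats.shape[-1], out_dim, generator=gen)
    return agg_feats @ (proj / out_dim ** 0.5)   # (N, d) -> (N, out_dim)
```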
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Transformer with Implicit Edges for Particle-based Physics Simulation [135.77656965678196]
Transformer with Implicit Edges (TIE) captures the rich semantics of particle interactions in an edge-free manner.
We evaluate our model on diverse domains spanning varying complexity and materials.
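As a generic illustration of edge-free interaction modeling, plain attention over all particles stands in for per-edge message passing below; TIE's actual decomposition of edge semantics is more involved.

```python
import torch

def edge_free_interactions(x, wq, wk, wv):
    """Scaled dot-product attention over particles: pairwise influence
    without materialised edge features."""
    q, k, v = x @ wq, x @ wk, x @ wv           # x: (N, D) particle states
    attn = torch.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v                            # (N, D) interaction-aware update
```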
arXiv Detail & Related papers (2022-07-22T03:45:29Z)
- Learning to Solve Combinatorial Graph Partitioning Problems via Efficient Exploration [72.15369769265398]
Experimentally, ECORD achieves a new SOTA for RL algorithms on the Maximum Cut problem.
Compared to the nearest competitor, ECORD reduces the optimality gap by up to 73%.
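For reference, the Max-Cut objective such an RL solver maximises is the total weight of edges crossing the partition; the dense-matrix encoding below is illustrative, not ECORD's internals.

```python
import torch

def cut_value(adj: torch.Tensor, assignment: torch.Tensor) -> torch.Tensor:
    """Cut weight for a +/-1 node assignment over a symmetric weight
    matrix with zero diagonal: sum_{i<j} w_ij * (1 - s_i s_j) / 2."""
    s = assignment.float()
    return 0.25 * (adj.sum() - s @ adj @ s)
```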
arXiv Detail & Related papers (2022-05-27T17:13:10Z)
- Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update [49.948082497688404]
Training large-scale deep neural networks (DNNs) currently requires a significant amount of energy, leading to serious environmental impacts.
One promising approach to reduce the energy costs is representing DNNs with low-precision numbers.
We jointly design a low-precision training framework involving a logarithmic number system (LNS) and a multiplicative weight update training method, termed LNS-Madam.
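The pairing works because a multiplicative update becomes a plain addition when weights are stored as sign and log-magnitude. The sketch below simplifies the gradient normalisation to a sign; the paper's exact rule differs.

```python
import torch

def madam_log_step(log_w, sign_w, grad, lr=0.01):
    """LNS-flavoured multiplicative step: w *= exp(-lr * u) is additive
    on log|w|. Shrinks |w| when grad and w agree in sign, as gradient
    descent would."""
    u = torch.sign(grad) * sign_w      # simplified normalised gradient
    return log_w - lr * u              # additive in LNS == multiplicative in w
```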
arXiv Detail & Related papers (2021-06-26T00:32:17Z)