MaNGO - Adaptable Graph Network Simulators via Meta-Learning
- URL: http://arxiv.org/abs/2510.05874v2
- Date: Wed, 22 Oct 2025 08:39:57 GMT
- Title: MaNGO - Adaptable Graph Network Simulators via Meta-Learning
- Authors: Philipp Dahlinger, Tai Hoang, Denis Blessing, Niklas Freymuth, Gerhard Neumann,
- Abstract summary: Graph Network Simulators (GNSs) offer faster inference but suffer from two key limitations. They must be retrained from scratch for even minor variations in physical parameters. This is inefficient, as simulations with varying parameters often share a common underlying latent structure. We propose a novel architecture that generates a latent representation by encoding graph trajectories using conditional neural processes.
- Score: 25.80703650406406
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurately simulating physics is crucial across scientific domains, with applications spanning from robotics to materials science. While traditional mesh-based simulations are precise, they are often computationally expensive and require knowledge of physical parameters, such as material properties. In contrast, data-driven approaches like Graph Network Simulators (GNSs) offer faster inference but suffer from two key limitations: Firstly, they must be retrained from scratch for even minor variations in physical parameters, and secondly they require labor-intensive data collection for each new parameter setting. This is inefficient, as simulations with varying parameters often share a common underlying latent structure. In this work, we address these challenges by learning this shared structure through meta-learning, enabling fast adaptation to new physical parameters without retraining. To this end, we propose a novel architecture that generates a latent representation by encoding graph trajectories using conditional neural processes (CNPs). To mitigate error accumulation over time, we combine CNPs with a novel neural operator architecture. We validate our approach, Meta Neural Graph Operator (MaNGO), on several dynamics prediction tasks with varying material properties, demonstrating superior performance over existing GNS methods. Notably, MaNGO achieves accuracy on unseen material properties close to that of an oracle model.
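The abstract describes encoding graph trajectories with conditional neural processes (CNPs) to obtain a latent representation that conditions the simulator. Below is a minimal, hypothetical sketch of that pattern, a permutation-invariant context encoder plus a latent-conditioned one-step model; all module names, shapes, and the MLP stand-in for the graph network are assumptions for illustration, not the authors' MaNGO implementation.

```python
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """CNP-style context encoder: maps observed (state, next_state) pairs from a
    short trajectory to one latent vector via mean pooling, so the representation
    is permutation-invariant in the context points."""
    def __init__(self, state_dim, latent_dim, hidden=128):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, states, next_states):
        pairs = torch.cat([states, next_states], dim=-1)   # (num_context, 2*state_dim)
        return self.phi(pairs).mean(dim=0)                  # (latent_dim,)

class ConditionedStep(nn.Module):
    """One-step dynamics model conditioned on the inferred latent. A real GNS
    would use message passing over the mesh; a plain MLP stands in here."""
    def __init__(self, state_dim, latent_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, z):
        z = z.expand(state.shape[0], -1)                         # broadcast latent to all nodes
        return state + self.net(torch.cat([state, z], dim=-1))   # residual update

# Usage: infer a "material" latent from a short context trajectory, then roll forward
# on new states without retraining the simulator weights.
state_dim, latent_dim = 6, 32
enc = TrajectoryEncoder(state_dim, latent_dim)
step = ConditionedStep(state_dim, latent_dim)
context = torch.randn(10, state_dim)           # 10 observed states of one trajectory
z = enc(context[:-1], context[1:])             # latent from consecutive state pairs
x = torch.randn(4, state_dim)                  # 4 nodes/particles to simulate
for _ in range(5):                             # autoregressive rollout
    x = step(x, z)
```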
Related papers
- Improving Long-Range Interactions in Graph Neural Simulators via Hamiltonian Dynamics [71.53370807809296]
Recent Graph Neural Simulators (GNSs) accelerate simulations by learning dynamics on graph-structured data.
We propose Information-preserving Graph Neural Simulators (IGNS), a graph-based neural simulator built on the principles of Hamiltonian dynamics.
IGNS consistently outperforms state-of-the-art GNSs, achieving higher accuracy and stability under challenging and complex dynamical systems.
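For intuition only, below is a generic sketch of a Hamiltonian-style learned integrator, not the IGNS architecture itself: a small network predicts a scalar energy H(q, p) and the state is advanced with a symplectic Euler step using autograd gradients of H. The names, dimensions, and integrator choice are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class LearnedHamiltonian(nn.Module):
    """Predicts a scalar energy H(q, p) for a particle system state."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, q, p):
        return self.net(torch.cat([q, p], dim=-1)).sum()

def symplectic_euler_step(H, q, p, dt=0.01):
    """Advance (q, p) with Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq."""
    q = q.detach().requires_grad_(True)
    p = p.detach().requires_grad_(True)
    dHdp, = torch.autograd.grad(H(q, p), p)
    q_next = q + dt * dHdp                              # update positions first
    dHdq, = torch.autograd.grad(H(q_next, p), q_next)   # then momenta at the new positions
    p_next = p - dt * dHdq
    return q_next.detach(), p_next.detach()

H = LearnedHamiltonian(dim=3)
q, p = torch.randn(5, 3), torch.randn(5, 3)             # 5 particles in 3D
for _ in range(10):
    q, p = symplectic_euler_step(H, q, p)
```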
arXiv Detail & Related papers (2025-11-11T12:53:56Z)
- Parameter-Efficient Conditioning for Material Generalization in Graph-Based Simulators [2.504298819189614]
Graph network-based simulators (GNS) have demonstrated strong potential for learning particle-based physics.
Existing models are typically trained for a single material type and fail to generalize across distinct behaviors.
We propose a parameter-efficient conditioning mechanism that makes the GNS model adaptive to material parameters.
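The entry describes making a shared GNS adaptive to material parameters through a lightweight conditioning mechanism. One common way to realize this is FiLM-style modulation, sketched below as an assumed, generic example rather than this paper's specific design; parameter names and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class FiLMConditionedLayer(nn.Module):
    """A hidden layer whose activations are scaled and shifted by a small
    network of the material parameters (e.g. friction, stiffness).
    Only the tiny conditioning MLP depends on the material; the backbone is shared."""
    def __init__(self, in_dim, out_dim, material_dim):
        super().__init__()
        self.backbone = nn.Linear(in_dim, out_dim)
        self.film = nn.Linear(material_dim, 2 * out_dim)   # produces (gamma, beta)

    def forward(self, h, material):
        gamma, beta = self.film(material).chunk(2, dim=-1)
        return torch.relu(gamma * self.backbone(h) + beta)

layer = FiLMConditionedLayer(in_dim=16, out_dim=32, material_dim=2)
node_features = torch.randn(100, 16)             # per-node features
material = torch.tensor([0.3, 1.5])              # e.g. [friction, stiffness]
out = layer(node_features, material)             # conditioning broadcasts over nodes
```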
arXiv Detail & Related papers (2025-11-07T17:55:35Z)
- Context-aware Learned Mesh-based Simulation via Trajectory-Level Meta-Learning [20.72669976554826]
Learned Graph Network Simulators (GNSs) offer a promising alternative to traditional mesh-based physics simulators.
Their speed and inherent differentiability make them well suited for applications that require fast and accurate simulations, such as robotic manipulation or manufacturing optimization.
We utilize movement primitives to directly predict fast, stable and accurate simulations from a single model call.
arXiv Detail & Related papers (2025-11-07T13:34:02Z)
- Facet: highly efficient E(3)-equivariant networks for interatomic potentials [6.741915610607818]
Computational materials discovery is limited by the high cost of first-principles calculations.
Machine learning potentials that predict energies from crystal structures are promising, but existing methods face computational bottlenecks.
We present Facet, a GNN architecture for efficient ML potentials.
arXiv Detail & Related papers (2025-09-10T09:06:24Z)
- Instruction-Guided Autoregressive Neural Network Parameter Generation [49.800239140036496]
We propose IGPG, an autoregressive framework that unifies parameter synthesis across diverse tasks and architectures.
By autoregressively generating tokens of neural network weights, IGPG ensures inter-layer coherence and enables efficient adaptation across models and datasets.
Experiments on multiple datasets demonstrate that IGPG consolidates diverse pretrained models into a single, flexible generative framework.
arXiv Detail & Related papers (2025-04-02T05:50:19Z)
- MIXPINN: Mixed-Material Simulations by Physics-Informed Neural Network [1.275845610262865]
Traditional Finite Element Method (FEM)-based simulations are computationally expensive and impractical for real-time scenarios.
We introduce MIXPINN, a physics-informed Graph Neural Network (GNN) framework for mixed-material simulations.
By leveraging a graph-based representation of biomechanical structures, MIXPINN learns high-fidelity deformations from FEM-generated data and achieves real-time inference with sub-millimeter accuracy.
arXiv Detail & Related papers (2025-03-17T12:48:29Z)
- Physics-informed MeshGraphNets (PI-MGNs): Neural finite element solvers for non-stationary and nonlinear simulations on arbitrary meshes [13.41003911618347]
This work introduces PI-MGNs, a hybrid approach that combines PINNs and MGNs to solve non-stationary and nonlinear partial differential equations (PDEs) on arbitrary meshes.
Results show that the model scales well to large and complex meshes, although it is trained on small generic meshes only.
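As a rough illustration of combining a mesh-based network with a PINN-style objective (not the PI-MGN formulation itself), the sketch below adds a toy heat-equation residual, built from a graph Laplacian over the mesh connectivity, to the usual supervised loss. The PDE, weighting, and shapes are assumptions for illustration.

```python
import torch

def physics_informed_loss(u_pred, u_prev, u_target, edge_index,
                          dt=0.01, alpha=1.0, weight=0.1):
    """Supervised loss plus a discrete heat-equation residual on the mesh graph.
    edge_index: (2, num_edges) integer tensor of mesh connectivity."""
    # data term: match the reference (e.g. FEM-generated) solution
    data_loss = torch.mean((u_pred - u_target) ** 2)

    # graph Laplacian applied to the prediction: L u = deg * u - sum of neighbor values
    src, dst = edge_index
    neighbor_sum = torch.zeros_like(u_pred).index_add_(0, dst, u_pred[src])
    degree = torch.zeros(u_pred.shape[0]).index_add_(0, dst, torch.ones(src.shape[0]))
    lap_u = degree.unsqueeze(-1) * u_pred - neighbor_sum

    # residual of u_t = alpha * laplace(u); on the graph, laplace(u) is approx. -L u
    residual = (u_pred - u_prev) / dt + alpha * lap_u
    return data_loss + weight * torch.mean(residual ** 2)

# Usage with placeholder tensors; u_pred would come from the mesh network.
u_prev, u_target = torch.randn(50, 1), torch.randn(50, 1)
u_pred = torch.randn(50, 1, requires_grad=True)
edges = torch.randint(0, 50, (2, 200))
loss = physics_informed_loss(u_pred, u_prev, u_target, edges)
```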
arXiv Detail & Related papers (2024-02-16T13:34:51Z)
- Latent Task-Specific Graph Network Simulators [16.881339139068018]
Graph Network Simulators (GNSs) pose an efficient alternative to traditional physics-based simulators.
We frame mesh-based simulation as a meta-learning problem and use a recent Bayesian meta-learning method to improve GNSs' adaptability to new scenarios.
We validate the effectiveness of our approach through various experiments, performing on par with or better than established baseline methods.
arXiv Detail & Related papers (2023-11-09T10:30:51Z)
- Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
The Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the amount of additional data required.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- Towards Quantum Graph Neural Networks: An Ego-Graph Learning Approach [47.19265172105025]
We propose a novel hybrid quantum-classical algorithm for graph-structured data, which we refer to as the Ego-graph based Quantum Graph Neural Network (egoQGNN).
egoQGNN implements the GNN theoretical framework using the tensor product and unity matrix representation, which greatly reduces the number of model parameters required.
The architecture is based on a novel mapping from real-world data to Hilbert space.
arXiv Detail & Related papers (2022-01-13T16:35:45Z)
- Conditionally Parameterized, Discretization-Aware Neural Networks for Mesh-Based Modeling of Physical Systems [0.0]
We generalize the idea of conditional parametrization: using trainable functions of input parameters.
We show that conditionally parameterized networks provide superior performance compared to their traditional counterparts.
A network architecture named CP-GNet is also proposed as the first deep learning model capable of standalone prediction of reacting flows on meshes.
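Read generically, "conditional parametrization" means a layer whose weights are produced by a trainable function of its input parameters. The sketch below uses a small hypernetwork for that purpose; it is an illustrative assumption, not the CP-GNet implementation, and the parameter meanings are placeholders.

```python
import torch
import torch.nn as nn

class ConditionallyParameterizedLinear(nn.Module):
    """Linear layer whose weight matrix and bias are generated by a trainable
    function (a small MLP) of conditioning parameters, e.g. local mesh spacing
    or flow properties, rather than being fixed after training."""
    def __init__(self, in_dim, out_dim, cond_dim, hidden=64):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.hyper = nn.Sequential(
            nn.Linear(cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim * in_dim + out_dim),
        )

    def forward(self, x, cond):
        params = self.hyper(cond)
        W = params[: self.out_dim * self.in_dim].view(self.out_dim, self.in_dim)
        b = params[self.out_dim * self.in_dim:]
        return x @ W.t() + b

layer = ConditionallyParameterizedLinear(in_dim=8, out_dim=16, cond_dim=3)
x = torch.randn(32, 8)                        # e.g. per-cell features
cond = torch.tensor([0.1, 0.9, 2.0])          # e.g. discretization / physical parameters
y = layer(x, cond)                            # (32, 16)
```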
arXiv Detail & Related papers (2021-09-15T20:21:13Z)
- GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training [59.160154997555956]
We present GradInit, an automated and architecture-agnostic method for initializing neural networks.
It is based on a simple heuristic: the variance of each network layer is adjusted so that a single step of SGD or Adam results in the smallest possible loss value.
It also enables training the original Post-LN Transformer for machine translation without learning rate warmup.
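As a toy illustration of the stated heuristic, adjusting a per-layer scale so that one optimizer step yields the lowest possible loss, the sketch below differentiates through a single simulated SGD step on a small functional MLP. It is a simplification for intuition, not the published GradInit algorithm; the network, data, and learning rates are placeholders.

```python
import torch

torch.manual_seed(0)

# a tiny 2-layer MLP expressed functionally so its weights can be rescaled
W1, W2 = torch.randn(32, 8), torch.randn(1, 32)
x, y = torch.randn(64, 8), torch.randn(64, 1)
lr = 0.1                                      # learning rate of the simulated SGD step

def loss_fn(w1, w2):
    h = torch.relu(x @ w1.t())
    return torch.mean((h @ w2.t() - y) ** 2)

# one learnable positive scale per layer
log_scales = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([log_scales], lr=0.05)

for _ in range(100):
    s1, s2 = torch.exp(log_scales)
    w1, w2 = s1 * W1, s2 * W2
    # gradients of the loss w.r.t. the scaled weights, kept in the graph
    g1, g2 = torch.autograd.grad(loss_fn(w1, w2), (w1, w2), create_graph=True)
    # loss after one simulated SGD step; this is what we minimize w.r.t. the scales
    post_step_loss = loss_fn(w1 - lr * g1, w2 - lr * g2)
    opt.zero_grad()
    post_step_loss.backward()
    opt.step()

print(torch.exp(log_scales))                  # per-layer scales found by the search
```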
arXiv Detail & Related papers (2021-02-16T11:45:35Z)
- Learning to Simulate Complex Physics with Graph Networks [68.43901833812448]
We present a machine learning framework and model implementation that can learn to simulate a wide variety of challenging physical domains.
Our framework, which we term "Graph Network-based Simulators" (GNS), represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message-passing.
Our results show that our model can generalize from single-timestep predictions with thousands of particles during training, to different initial conditions, thousands of timesteps, and at least an order of magnitude more particles at test time.
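A bare-bones sketch of the simulation pattern described here, learned messages summed per receiving particle followed by a residual node update, applied autoregressively for a rollout. It is a generic illustration under assumed shapes and connectivity, not the authors' GNS code.

```python
import torch
import torch.nn as nn

class MessagePassingStep(nn.Module):
    """One learned message-passing update over a particle graph: each edge produces
    a message from its endpoint features, messages are summed per receiver, and a
    node MLP predicts a residual (acceleration-like) update."""
    def __init__(self, node_dim, hidden=128):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * node_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden))
        self.node_mlp = nn.Sequential(nn.Linear(node_dim + hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, node_dim))

    def forward(self, h, edge_index):
        src, dst = edge_index                   # (2, num_edges) sender/receiver indices
        messages = self.edge_mlp(torch.cat([h[src], h[dst]], dim=-1))
        agg = torch.zeros(h.shape[0], messages.shape[-1]).index_add_(0, dst, messages)
        return h + self.node_mlp(torch.cat([h, agg], dim=-1))   # residual node update

# Rollout: apply the learned step autoregressively from an initial particle state.
step = MessagePassingStep(node_dim=6)
h = torch.randn(1000, 6)                        # 1000 particles: e.g. position + velocity
edges = torch.randint(0, 1000, (2, 8000))       # e.g. radius-graph connectivity
for _ in range(20):
    h = step(h, edges)
```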
arXiv Detail & Related papers (2020-02-21T16:44:28Z)