Domain-Decomposed Graph Neural Network Surrogate Modeling for Ice Sheets
- URL: http://arxiv.org/abs/2512.01888v1
- Date: Mon, 01 Dec 2025 17:10:09 GMT
- Title: Domain-Decomposed Graph Neural Network Surrogate Modeling for Ice Sheets
- Authors: Adrienne M. Propp, Mauro Perego, Eric C. Cyr, Anthony Gruber, Amanda A. Howard, Alexander Heinlein, Panos Stinis, Daniel M. Tartakovsky
- Abstract summary: We develop a physics-inspired graph neural network (GNN) surrogate that operates directly on unstructured meshes. We employ transfer learning to fine-tune models across subdomains, accelerating training and improving accuracy in data-limited settings. Our results demonstrate that graph-based DD, combined with transfer learning, provides a scalable and reliable pathway for training GNN surrogates on massive PDE-governed systems.
- Score: 34.15484094708584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate yet efficient surrogate models are essential for large-scale simulations of partial differential equations (PDEs), particularly for uncertainty quantification (UQ) tasks that demand hundreds or thousands of evaluations. We develop a physics-inspired graph neural network (GNN) surrogate that operates directly on unstructured meshes and leverages the flexibility of graph attention. To improve both training efficiency and generalization properties of the model, we introduce a domain decomposition (DD) strategy that partitions the mesh into subdomains, trains local GNN surrogates in parallel, and aggregates their predictions. We then employ transfer learning to fine-tune models across subdomains, accelerating training and improving accuracy in data-limited settings. Applied to ice sheet simulations, our approach accurately predicts full-field velocities on high-resolution meshes, substantially reduces training time relative to training a single global surrogate model, and provides a ripe foundation for UQ objectives. Our results demonstrate that graph-based DD, combined with transfer learning, provides a scalable and reliable pathway for training GNN surrogates on massive PDE-governed systems, with broad potential for application beyond ice sheet dynamics.
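The workflow the abstract describes (partition the mesh, train local GNN surrogates in parallel, warm-start later subdomains from earlier ones) can be sketched compactly. Below is a minimal sketch assuming PyTorch Geometric; the class and function names, the GAT architecture, and all hyperparameters are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of the domain-decomposition (DD) strategy described in the
# abstract: partition the mesh graph with METIS, train one small graph-
# attention surrogate per subdomain, and warm-start later subdomains from an
# earlier model (transfer learning). Assumes PyTorch Geometric; every name
# and hyperparameter below is illustrative, not the authors' code.
import torch
import torch.nn.functional as F
from torch_geometric.loader import ClusterData
from torch_geometric.nn import GATConv

class LocalSurrogate(torch.nn.Module):
    """Small GAT mapping nodal inputs (e.g., geometry, basal friction)
    to nodal outputs (e.g., velocity components)."""
    def __init__(self, in_dim, hidden, out_dim, heads=4):
        super().__init__()
        self.g1 = GATConv(in_dim, hidden, heads=heads)
        self.g2 = GATConv(hidden * heads, out_dim, heads=1)

    def forward(self, x, edge_index):
        return self.g2(F.elu(self.g1(x, edge_index)), edge_index)

def train_local(model, sub, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.mse_loss(model(sub.x, sub.edge_index), sub.y)
        loss.backward()
        opt.step()
    return model

def fit_dd_surrogates(mesh_graph, num_parts=8):
    # mesh_graph: Data(x=[N, in_dim], edge_index=[2, E], y=[N, out_dim])
    parts = ClusterData(mesh_graph, num_parts=num_parts)  # METIS partition
    models, prev = [], None
    for sub in parts:                # trivially parallel across workers
        m = LocalSurrogate(sub.x.size(1), 64, sub.y.size(1))
        if prev is not None:
            m.load_state_dict(prev.state_dict())  # transfer: warm start
        prev = train_local(m, sub)
        models.append(prev)
    return models
```

In practice the per-subdomain loop would run in parallel across workers, and predictions on shared boundary nodes would be aggregated (e.g., averaged) when stitching subdomain outputs back into a full-field prediction.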
Related papers
- ScaleDL: Towards Scalable and Efficient Runtime Prediction for Distributed Deep Learning Workloads [14.876533021201539]
ScaleDL is a novel runtime prediction framework for deep neural networks (DNNs). It combines nonlinear layer-wise modeling with a graph neural network (GNN)-based cross-layer interaction mechanism. Experiments show that ScaleDL enhances runtime prediction accuracy and generalizability, achieving 6 times lower MRE and 5 times lower RMSE compared to baseline models.
arXiv Detail & Related papers (2025-11-06T08:05:55Z)
- Test-time GNN Model Evaluation on Dynamic Graphs [52.31268996286955]
We propose a Dynamic Graph neural network Evaluator, dubbed DyGEval, to address this new problem. The proposed DyGEval involves a two-stage framework: (1) test-time dynamic graph simulation, which captures the training-test distributional differences as supervision signals and trains an evaluator; and (2) DyGEval development and training, which accurately estimates the performance of the well-trained DGNN model on the test-time dynamic graphs.
arXiv Detail & Related papers (2025-09-28T11:40:37Z)
- A Graph Laplacian Eigenvector-based Pre-training Method for Graph Neural Networks [7.359145401513628]
Structure-based pre-training methods are under-explored yet crucial for downstream applications that rely on the underlying graph structure. We propose the Laplacian Eigenvector Learning Module (LELM), a novel pre-training module for graph neural networks (GNNs) based on predicting the low-frequency eigenvectors of the graph Laplacian. LELM introduces a novel architecture that overcomes oversmoothing, allowing the GNN model to learn long-range interdependencies.
arXiv Detail & Related papers (2025-09-02T20:07:20Z)
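The pre-training target the entry above is built around, the low-frequency eigenvectors of the graph Laplacian, is straightforward to compute explicitly. A rough sketch under stated assumptions (SciPy, combinatorial Laplacian, k lowest modes); the function name is a placeholder, not the paper's code.

```python
# Hypothetical sketch of the pre-training target described above: the k
# lowest-frequency eigenvectors of the graph Laplacian, which a GNN encoder
# would be trained to regress. Illustrative only; not the LELM code.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def low_freq_eigvecs(adj: sp.csr_matrix, k: int = 8) -> np.ndarray:
    """Return the k eigenvectors of L = D - A with smallest eigenvalues."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    lap = sp.diags(deg) - adj                    # combinatorial Laplacian
    _, vecs = eigsh(lap.asfptype(), k=k, which="SM")
    return vecs                                  # shape [num_nodes, k]
```

A pre-training loss would then regress GNN node embeddings onto these targets, with some care for the per-eigenvector sign ambiguity (e.g., taking the minimum of the loss over both signs).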
- Recurrent U-Net-Based Graph Neural Network (RUGNN) for Accurate Deformation Predictions in Sheet Material Forming [13.180335574191432]
This study developed a new graph neural network surrogate model named the Recurrent U-Net-based Graph Neural Network (RUGNN). The RUGNN model achieves accurate predictions of sheet-material deformation fields across multiple forming timesteps.
arXiv Detail & Related papers (2025-07-10T08:14:18Z)
- Graph Data Selection for Domain Adaptation: A Model-Free Approach [54.27731120381295]
Graph domain adaptation (GDA) is a fundamental task in graph machine learning. We propose a novel model-free framework, GRADATE, that selects the best training data from the source domain for the classification task on the target domain. We show that GRADATE outperforms existing selection methods and enhances off-the-shelf GDA methods with far less training data.
arXiv Detail & Related papers (2025-05-22T21:18:39Z)
- A Multi-Fidelity Graph U-Net Model for Accelerated Physics Simulations [1.2430809884830318]
We propose a novel GNN architecture, Multi-Fidelity U-Net, that exploits the advantages of multi-fidelity methods to enhance the performance of GNN models. We show that the proposed approach performs significantly better in accuracy and data requirements. We also present Multi-Fidelity U-Net Lite, a faster version of the proposed architecture, with 35% faster training and a 2 to 5% reduction in accuracy.
arXiv Detail & Related papers (2024-12-19T20:09:38Z)
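As commonly practiced, the multi-fidelity idea referenced in the entry above trains on plentiful coarse (low-fidelity) simulations and corrects with scarce fine (high-fidelity) ones. The sketch below shows one standard additive-correction formulation; it is a generic illustration of the technique, not the Multi-Fidelity U-Net architecture itself, and all names are placeholders.

```python
# Generic additive multi-fidelity correction (not the paper's U-Net):
#   y_hi(x) ~= f_lo(x) + delta(x),
# where f_lo is trained on abundant low-fidelity data and delta on the
# few high-fidelity residuals. All names here are illustrative.
import torch
import torch.nn.functional as F

def train_multifidelity(f_lo, delta, lo_data, hi_data, epochs=100, lr=1e-3):
    # Stage 1: fit the low-fidelity model on coarse simulations.
    opt = torch.optim.Adam(f_lo.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in lo_data:
            opt.zero_grad()
            F.mse_loss(f_lo(x), y).backward()
            opt.step()
    # Stage 2: freeze f_lo, fit a correction on residuals y - f_lo(x).
    f_lo.requires_grad_(False)
    opt = torch.optim.Adam(delta.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in hi_data:
            opt.zero_grad()
            F.mse_loss(delta(x), y - f_lo(x)).backward()
            opt.step()
    return lambda x: f_lo(x) + delta(x)
```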
- Mean flow data assimilation using physics-constrained Graph Neural Networks [0.0]
This study introduces a novel data assimilation approach that integrates Graph Neural Networks (GNNs) with optimisation techniques to enhance the accuracy of mean flow reconstruction. The GNN framework is well-suited for handling unstructured data, which is common in the complex geometries encountered in Computational Fluid Dynamics (CFD). Results demonstrate significant improvements in the accuracy of mean flow reconstructions, even with limited training data, compared to analogous purely data-driven models.
arXiv Detail & Related papers (2024-11-14T14:31:52Z)
- Spatiotemporal Graph Learning with Direct Volumetric Information Passing and Feature Enhancement [62.91536661584656]
We propose a dual-module framework, the Cell-embedded and Feature-enhanced Graph Neural Network (CeFeGNN), for learning spatiotemporal dynamics. We embed learnable cell attributions into the common node-edge message-passing process, which better captures the spatial dependency of regional features. Experiments on various PDE systems and one real-world dataset demonstrate that CeFeGNN achieves superior performance compared with other baselines.
arXiv Detail & Related papers (2024-09-26T16:22:08Z)
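One hedged reading of "learnable cell attributions in node-edge message passing" from the entry above: give each mesh cell a trainable embedding and inject it into node features before propagation. The layer below is a minimal PyTorch Geometric illustration of that idea; the class name, the one-cell-per-node simplification, and the mean aggregation are my assumptions, not CeFeGNN's design.

```python
# Illustrative layer: learnable per-cell embeddings injected into node-edge
# message passing (one incident cell per node, a simplification). Not the
# CeFeGNN reference implementation.
import torch
from torch_geometric.nn import MessagePassing

class CellEmbeddedConv(MessagePassing):
    def __init__(self, feat_dim, cell_dim, num_cells):
        super().__init__(aggr="mean")
        self.cell_emb = torch.nn.Embedding(num_cells, cell_dim)
        self.lin = torch.nn.Linear(feat_dim + cell_dim, feat_dim)

    def forward(self, x, edge_index, node2cell):
        # Concatenate each node's cell embedding, mix, then pass messages.
        h = self.lin(torch.cat([x, self.cell_emb(node2cell)], dim=-1))
        return self.propagate(edge_index, x=h)

    def message(self, x_j):
        return x_j
```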
- Dynamic Graph Unlearning: A General and Efficient Post-Processing Method via Gradient Transformation [24.20087360102464]
We study dynamic graph unlearning for the first time and propose an effective, efficient, and general post-processing method to implement DGNN unlearning. Our method has the potential to handle future unlearning requests with significant performance gains.
arXiv Detail & Related papers (2024-05-23T10:26:18Z)
- Interpretable A-posteriori Error Indication for Graph Neural Network Surrogate Models [0.0]
This work introduces an interpretability enhancement procedure for graph neural networks (GNNs).
The end result is an interpretable GNN model that isolates regions in physical space, corresponding to sub-graphs, that are intrinsically linked to the forecasting task.
The interpretable GNNs can also be used to identify, during inference, graph nodes that correspond to a majority of the anticipated forecasting error.
arXiv Detail & Related papers (2023-11-13T18:37:07Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [72.33336385797944]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias. We show that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural network-like predictors. LFP decomposes a reward to individual neurons based on their respective contributions. Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems. However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features. In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
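Implicit SGD, as in the entry above, replaces the explicit update theta_new = theta_old - eta * grad L(theta_old) with the implicit equation theta_new = theta_old - eta * grad L(theta_new), which is typically solved approximately at every step. Below is a rough PyTorch sketch using a fixed-point inner loop; the function name, step size, and iteration count are placeholders, and this is an interpretation of the idea rather than the authors' implementation.

```python
# Rough illustration of an implicit SGD step: solve
#   theta_new = theta_old - eta * grad L(theta_new)
# by a few fixed-point iterations. Placeholder names and constants; this is
# an interpretation of the idea, not the authors' implementation.
import torch

def isgd_step(params, loss_fn, eta=1e-2, inner_iters=5):
    theta_old = [p.detach().clone() for p in params]
    for _ in range(inner_iters):          # fixed-point iteration on theta
        grads = torch.autograd.grad(loss_fn(), params)
        with torch.no_grad():
            for p, p0, g in zip(params, theta_old, grads):
                p.copy_(p0 - eta * g)     # theta <- theta_old - eta*grad L(theta)
    return params
```

For small step sizes the inner iteration converges, and the scheme behaves like a proximal-point update, which is one way to understand the improved training stability the summary mentions.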