Graph Neural Networks as Fast and High-fidelity Emulators for
Finite-Element Ice Sheet Modeling
- URL: http://arxiv.org/abs/2402.05291v1
- Date: Wed, 7 Feb 2024 22:10:36 GMT
- Title: Graph Neural Networks as Fast and High-fidelity Emulators for
Finite-Element Ice Sheet Modeling
- Authors: Maryam Rahnemoonfar, Younghyun Koo
- Abstract summary: We develop graph neural networks (GNNs) as fast surrogate models that preserve the finite element structure of the Ice-sheet and Sea-level System Model (ISSM).
GNNs reproduce ice thickness and velocity with better accuracy than the classic convolutional neural network (CNN) and multi-layer perceptron (MLP).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although the finite element approach of the Ice-sheet and Sea-level System
Model (ISSM) solves ice dynamics problems governed by Stokes equations quickly
and accurately, such numerical modeling requires intensive computation on
central processing units (CPU). In this study, we develop graph neural networks
(GNN) as fast surrogate models to preserve the finite element structure of
ISSM. Using the 20-year transient simulations in the Pine Island Glacier (PIG),
we train and test three GNNs: graph convolutional network (GCN), graph
attention network (GAT), and equivariant graph convolutional network (EGCN).
These GNNs reproduce ice thickness and velocity with better accuracy than the
classic convolutional neural network (CNN) and multi-layer perceptron (MLP). In
particular, GNNs successfully capture the ice mass loss and acceleration
induced by higher basal melting rates in the PIG. When our GNN emulators are
implemented on graphics processing units (GPUs), they achieve computation up to
50 times faster than the CPU-based ISSM simulation.
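To make the emulation strategy concrete, here is a minimal sketch of a GCN surrogate that operates directly on a finite-element mesh graph, written with PyTorch Geometric. The input features, layer widths, and output head are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of a GCN surrogate mapping
# per-node forcing on an ISSM-like mesh to ice thickness and velocity.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class IceSheetGCN(nn.Module):
    def __init__(self, in_dim=4, hidden=128, out_dim=3):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, out_dim)  # e.g. thickness, vx, vy

    def forward(self, x, edge_index):
        x = torch.relu(self.conv1(x, edge_index))
        x = torch.relu(self.conv2(x, edge_index))
        return self.head(x)

# Toy mesh: 5 nodes, undirected edges stored in both directions.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3, 3, 4],
                           [1, 0, 2, 1, 3, 2, 4, 3]])
x = torch.randn(5, 4)  # assumed inputs: coords, basal melt rate, etc.
model = IceSheetGCN()
pred = model(x, edge_index)  # (5, 3) per-node predictions
```

Because the graph is built from the mesh connectivity itself, the surrogate keeps the irregular node spacing that a pixel-grid CNN would have to interpolate away.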
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN) into a scalable variant (S-MNN), we reduce the time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- Graph Neural Network as Computationally Efficient Emulator of Ice-sheet and Sea-level System Model (ISSM) [0.0]
We design a graph convolutional network (GCN) as a fast emulator for the Ice-sheet and Sea-level System Model (ISSM).
The GCN runs 34 times faster than the CPU-based ISSM modeling.
arXiv Detail & Related papers (2024-06-26T16:13:11Z)
- Graph Neural Networks for Emulation of Finite-Element Ice Dynamics in Greenland and Antarctic Ice Sheets [0.0]
An equivariant graph convolutional network (EGCN) serves as an emulator for ice sheet dynamics modeling.
EGCN reproduces ice thickness and velocity changes in the Helheim Glacier, Greenland, and the Pine Island Glacier, Antarctica, with computation 260 times and 44 times faster, respectively.
arXiv Detail & Related papers (2024-06-26T15:18:49Z)
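For readers unfamiliar with the equivariant architecture referenced above, below is a simplified E(n)-equivariant message-passing layer in plain PyTorch, in the spirit of EGCN (Satorras et al.). The layer widths and update rules are a sketch under assumed conventions, not the paper's code.

```python
# A simplified E(n)-equivariant layer: feature updates use only rotation-
# and translation-invariant quantities; coordinate updates are built from
# relative vectors, so rotating the inputs rotates the outputs.
import torch
import torch.nn as nn

class EquivariantLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.SiLU())
        self.phi_h = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())
        self.phi_x = nn.Linear(dim, 1, bias=False)

    def forward(self, h, pos, edge_index):
        src, dst = edge_index
        rel = pos[src] - pos[dst]              # relative coordinates
        d2 = (rel ** 2).sum(-1, keepdim=True)  # invariant squared distance
        m = self.phi_e(torch.cat([h[src], h[dst], d2], -1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum messages
        h_new = self.phi_h(torch.cat([h, agg], -1))
        # Equivariant coordinate update: weighted sum of relative vectors.
        shift = torch.zeros_like(pos).index_add_(0, dst, rel * self.phi_x(m))
        return h_new, pos + shift

h, pos = torch.randn(5, 16), torch.randn(5, 2)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
h2, pos2 = EquivariantLayer(16)(h, pos, edge_index)
```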
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNNs and GNNs via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs with dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
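The distillation mechanism behind this kind of CNN-to-GNN transfer can be sketched with the standard soft-target loss below; the temperature, weighting, and function names are generic assumptions, and the paper's actual bridge design is more involved.

```python
# Generic knowledge-distillation loss: a GNN student imitates the softened
# class distribution of a CNN teacher while also fitting the hard labels.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)  # rescale gradients
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(32, 10, requires_grad=True)   # student (GNN) logits
t = torch.randn(32, 10)                       # teacher (CNN) logits
y = torch.randint(0, 10, (32,))
distill_loss(s, t, y).backward()
```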
- Continuous Spiking Graph Neural Networks [43.28609498855841]
Continuous graph neural networks (CGNNs) have garnered significant attention due to their ability to generalize existing discrete graph neural networks (GNNs).
We introduce the high-order structure of COS-GNN, which utilizes the second-order ODE for spiking representation and continuous propagation.
We provide a theoretical proof that COS-GNN effectively mitigates the issues of exploding and vanishing gradients, enabling us to capture long-range dependencies between nodes.
arXiv Detail & Related papers (2024-04-02T12:36:40Z)
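To illustrate what second-order continuous propagation means in practice, here is a toy Euler integration of a second-order graph ODE. The particular right-hand side (a diffusion-like term, H'' = A H W - H) and step sizes are assumptions for illustration, not the COS-GNN formulation.

```python
# Toy integration of a second-order graph ODE as a first-order system
# (H, V): V' = A_norm @ H @ W - H, H' = V, with simple Euler updates.
import torch

def integrate(H, A_norm, W, steps=100, dt=0.01):
    V = torch.zeros_like(H)            # first-derivative state
    for _ in range(steps):
        accel = A_norm @ H @ W - H     # assumed second-derivative term
        V = V + dt * accel
        H = H + dt * V
    return H

N, d = 6, 8
A = torch.rand(N, N); A = (A + A.t()) / 2          # symmetric adjacency
A_norm = A / A.sum(-1, keepdim=True).clamp(min=1e-8)
H_T = integrate(torch.randn(N, d), A_norm, torch.randn(d, d) * 0.1)
```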
- Learning CO$_2$ plume migration in faulted reservoirs with Graph Neural Networks [0.3914676152740142]
We develop a graph-based neural model for capturing the impact of faults on CO$_2$ plume migration.
We demonstrate that our approach can accurately predict the temporal evolution of gas saturation and pore pressure in a synthetic reservoir with faults.
This work highlights the potential of GNN-based methods to accurately and rapidly model subsurface flow with complex faults and fractures.
arXiv Detail & Related papers (2023-06-16T06:47:47Z)
- A Finite Element-Inspired Hypergraph Neural Network: Application to Fluid Dynamics Simulations [4.984601297028257]
An emerging trend in deep learning research focuses on the applications of graph neural networks (GNNs) for continuum mechanics simulations.
We present a method to construct a hypergraph by connecting the nodes by elements rather than edges.
We term this method a finite element-inspired hypergraph neural network, in short FEIH($\phi$)-GNN.
arXiv Detail & Related papers (2022-12-30T04:10:01Z)
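The construction described above, connecting nodes by elements rather than edges, amounts to building a hypergraph incidence matrix from the mesh elements. Below is a minimal sketch with a hand-made triangle mesh; the mesh, features, and aggregation are illustrative assumptions.

```python
# Each triangular finite element becomes one hyperedge over its 3 nodes.
import torch

elements = torch.tensor([[0, 1, 2],      # node indices per element
                         [1, 2, 3],
                         [2, 3, 4],
                         [3, 4, 5]])
num_nodes, num_edges = 6, elements.size(0)

# Sparse incidence B: B[i, e] = 1 if node i belongs to element e.
rows = elements.reshape(-1)
cols = torch.arange(num_edges).repeat_interleave(3)
B = torch.sparse_coo_tensor(torch.stack([rows, cols]),
                            torch.ones(rows.numel()),
                            (num_nodes, num_edges))

# Node -> hyperedge -> node aggregation, the basic hypergraph message pass.
X = torch.randn(num_nodes, 4)
edge_feat = torch.sparse.mm(B.t(), X) / 3.0  # mean over element's nodes
X_new = torch.sparse.mm(B, edge_feat)        # scatter back to nodes
```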
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
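The grow-and-train loop described above can be sketched as follows: sample progressively larger graphs from a fixed graphon by Bernoulli-sampling edges, training at each size. The graphon, size schedule, and training hook are assumptions for illustration.

```python
# Bernoulli-sample graphs of growing size from a graphon W(u, v).
import torch

def sample_graph(n):
    u = torch.rand(n)                                       # latent positions
    P = torch.exp(-4.0 * (u[:, None] - u[None, :]).abs())   # assumed graphon
    A = (torch.rand(n, n) < P).float()                      # Bernoulli edges
    A = torch.triu(A, diagonal=1)
    return A + A.t()                                        # undirected

for n in [50, 100, 200, 400]:    # successively increase the graph size
    A = sample_graph(n)
    # train_one_stage(gnn, A)    # hypothetical training step on this graph
    print(n, int(A.sum().item()) // 2)  # edge count grows with n
```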
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
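The core trick behind binarized layers like those above is the straight-through estimator: the forward pass uses sign(w), while gradients flow to the underlying real-valued weights. A generic sketch, not the paper's code:

```python
# Weight binarization with a straight-through estimator (STE).
import torch
import torch.nn as nn

class BinaryLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)

    def forward(self, x):
        w = self.weight
        # Forward uses sign(w); backward passes gradients straight through.
        w_bin = w + (torch.sign(w) - w).detach()
        return x @ w_bin.t()

layer = BinaryLinear(8, 4)
out = layer(torch.randn(5, 8))
out.sum().backward()   # gradients reach the real-valued weights
```

Full binary GNNs typically also binarize activations and add scaling factors to recover accuracy; this sketch shows only the weight path.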
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)