G-Adaptivity: optimised graph-based mesh relocation for finite element methods
- URL: http://arxiv.org/abs/2407.04516v2
- Date: Thu, 06 Feb 2025 09:58:00 GMT
- Title: G-Adaptivity: optimised graph-based mesh relocation for finite element methods
- Authors: James Rowbottom, Georg Maierhofer, Teo Deveney, Eike Mueller, Alberto Paganini, Katharina Schratz, Pietro Liò, Carola-Bibiane Schönlieb, Chris Budd,
- Abstract summary: Mesh relocation (r-adaptivity) seeks to optimise the mesh geometry to obtain the best solution accuracy at a given computational budget.
Recent machine learning approaches have focused on the construction of fast surrogates for such classical methods.
We present a novel and effective approach to achieve optimal mesh relocation in finite element methods (FEMs).
- Score: 20.169049222190853
- License:
- Abstract: We present a novel and effective approach to achieving optimal mesh relocation in finite element methods (FEMs). The cost and accuracy of FEMs are critically dependent on the choice of mesh points. Mesh relocation (r-adaptivity) seeks to optimise the mesh geometry to obtain the best solution accuracy at a given computational budget. Classical r-adaptivity relies on the solution of a separate nonlinear "meshing" PDE to determine mesh point locations. This incurs significant cost at remeshing and relies on estimates that relate interpolation and FEM error. Recent machine learning approaches have focused on the construction of fast surrogates for such classical methods. Instead, our new approach trains a graph neural network (GNN) to determine mesh point locations by directly minimising the FE solution error, computed with the finite element system Firedrake, to achieve higher solution accuracy. Our GNN architecture closely aligns the mesh solution space to that of classical meshing methodologies, thus replacing classical estimates for optimality with a learnable strategy. This allows for rapid and robust training and results in an extremely efficient and effective GNN approach to online r-adaptivity. Our method outperforms both classical and prior ML approaches to r-adaptive meshing. In particular, it achieves lower FE solution error whilst retaining the significant speed-up over classical methods observed in prior ML work.
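The core idea of the abstract — choosing mesh point locations by minimising the true FE solution error, rather than a classical monitor-function estimate — can be illustrated without a GNN or Firedrake. The sketch below is a hypothetical, minimal 1D stand-in: a P1 finite-element solve of a Poisson problem with known exact solution, with the paper's learned GNN update replaced by plain finite-difference gradient descent on the interior node positions. All function names are illustrative; this is not the paper's implementation.

```python
import numpy as np

def fem_solve(x):
    """P1 finite-element solve of -u'' = f on [0, 1], u(0) = u(1) = 0,
    with f = pi^2 sin(pi x), so the exact solution is u = sin(pi x)."""
    f = lambda t: np.pi ** 2 * np.sin(np.pi * t)
    n = len(x) - 1                       # number of elements
    h = np.diff(x)
    A = np.zeros((n - 1, n - 1))         # stiffness matrix, interior nodes only
    b = np.zeros(n - 1)
    for i in range(n - 1):               # interior unknown i <-> global node i+1
        A[i, i] = 1.0 / h[i] + 1.0 / h[i + 1]
        if i > 0:
            A[i, i - 1] = -1.0 / h[i]
        if i < n - 2:
            A[i, i + 1] = -1.0 / h[i + 1]
        # midpoint quadrature of f * hat-function on the two adjacent elements
        b[i] = 0.5 * h[i] * f(0.5 * (x[i] + x[i + 1])) \
             + 0.5 * h[i + 1] * f(0.5 * (x[i + 1] + x[i + 2]))
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, b)
    return u

def l2_error(x, u):
    """Approximate L2 error of the piecewise-linear FE solution vs sin(pi x)."""
    xs = np.linspace(0.0, 1.0, 2001)
    e2 = (np.interp(xs, x, u) - np.sin(np.pi * xs)) ** 2
    return np.sqrt(np.sum(0.5 * (e2[:-1] + e2[1:]) * np.diff(xs)))

def relocate(x, steps=40, lr=0.2, eps=1e-6):
    """Move interior nodes by finite-difference gradient descent on the
    true FE solution error (a crude stand-in for the learned GNN update)."""
    best_x, best_err = x.copy(), l2_error(x, fem_solve(x))
    for _ in range(steps):
        base = l2_error(x, fem_solve(x))
        g = np.zeros_like(x)
        for i in range(1, len(x) - 1):   # boundary nodes stay fixed
            xp = x.copy()
            xp[i] += eps
            g[i] = (l2_error(xp, fem_solve(xp)) - base) / eps
        x = np.sort(x - lr * g)          # sort guards against node crossing
        x[0], x[-1] = 0.0, 1.0
        err = l2_error(x, fem_solve(x))
        if err < best_err:
            best_x, best_err = x.copy(), err
    return best_x, best_err

x0 = np.linspace(0.0, 1.0, 9)            # uniform 8-element mesh
err0 = l2_error(x0, fem_solve(x0))
x1, err1 = relocate(x0)
print(f"uniform mesh error:   {err0:.6f}")
print(f"relocated mesh error: {err1:.6f}")
```

The paper replaces the expensive per-problem gradient loop with a single GNN forward pass trained offline against this kind of error objective, which is where the reported speed-up over classical meshing PDEs comes from.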
Related papers
- An Adaptive Collocation Point Strategy For Physics Informed Neural Networks via the QR Discrete Empirical Interpolation Method [1.2289361708127877]
We propose an adaptive collocation point selection strategy utilizing the QR Discrete Empirical Interpolation Method (QR-DEIM)
Our results on benchmark PDEs, including the wave, Allen-Cahn, and Burgers' equations, demonstrate that our QR-DEIM-based approach improves PINN accuracy compared to existing methods.
arXiv Detail & Related papers (2025-01-13T21:24:15Z) - GDSG: Graph Diffusion-based Solution Generator for Optimization Problems in MEC Networks [109.17835015018532]
We present a Graph Diffusion-based Solution Generation (GDSG) method.
This approach is designed to work with suboptimal datasets while converging to the optimal solution with high probability.
We build GDSG as a multi-task diffusion model utilizing a Graph Neural Network (GNN) to acquire the distribution of high-quality solutions.
arXiv Detail & Related papers (2024-12-11T11:13:43Z) - PACMANN: Point Adaptive Collocation Method for Artificial Neural Networks [44.99833362998488]
PINNs minimize a loss function which includes the PDE residual determined for a set of collocation points.
Previous work has shown that the number and distribution of these collocation points have a significant influence on the accuracy of the PINN solution.
We present the Point Adaptive Collocation Method for Artificial Neural Networks (PACMANN)
arXiv Detail & Related papers (2024-11-29T11:31:11Z) - Achieving Constraints in Neural Networks: A Stochastic Augmented Lagrangian Approach [49.1574468325115]
Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting.
We propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem.
We employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism.
arXiv Detail & Related papers (2023-10-25T13:55:35Z) - DMF-TONN: Direct Mesh-free Topology Optimization using Neural Networks [4.663709549795511]
We propose a direct mesh-free method for performing topology optimization by integrating a density field approximation neural network with a displacement field approximation neural network.
We show that this direct integration approach can give comparable results to conventional topology optimization techniques.
arXiv Detail & Related papers (2023-05-06T18:04:51Z) - Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
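The residual-weighted allocation idea described in this entry can be sketched in a few lines of numpy. The sampling density (squared residual) and all names below are assumptions chosen for illustration, not the paper's actual scheme:

```python
import numpy as np

def adaptive_resample(candidates, residual_fn, n_points, rng):
    """Draw collocation points with probability proportional to the
    squared PDE residual, so high-error regions receive more points."""
    r2 = np.abs(residual_fn(candidates)) ** 2
    p = r2 / r2.sum()
    idx = rng.choice(len(candidates), size=n_points, replace=False, p=p)
    return np.sort(candidates[idx])

# Mock residual with a sharp peak at x = 0.8 (e.g. a steep solution feature).
rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 1.0, 1001)
residual = lambda x: np.exp(-((x - 0.8) ** 2) / 0.01)
points = adaptive_resample(candidates, residual, 100, rng)
print(f"fraction of points near the peak: {np.mean(np.abs(points - 0.8) < 0.2):.2f}")
```

In an actual PINN training loop, `residual_fn` would evaluate the network's PDE residual, and the resampling step would be repeated every few epochs as the error distribution shifts.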
arXiv Detail & Related papers (2022-07-08T18:17:06Z) - M2N: Mesh Movement Networks for PDE Solvers [17.35053721712421]
We present the first learning-based end-to-end mesh movement framework for PDE solvers.
Key requirements are alleviating mesh tangling, boundary consistency, and generalization to meshes with different resolutions.
We validate our methods on stationary and time-dependent, linear and non-linear equations.
arXiv Detail & Related papers (2022-04-24T04:23:31Z) - A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z) - A hybrid MGA-MSGD ANN training approach for approximate solution of linear elliptic PDEs [0.0]
We introduce a hybrid "Modified Genetic-Multilevel Gradient Descent" (MGA-MSGD) training algorithm.
It considerably improves the accuracy and efficiency of solving 3D mechanical problems, described in strong form by PDEs, via ANNs.
arXiv Detail & Related papers (2020-12-18T10:59:07Z) - Optimizing Mode Connectivity via Neuron Alignment [84.26606622400423]
Empirically, the local minima of loss functions can be connected by a learned curve in model space along which the loss remains nearly constant.
We propose a more general framework to investigate the effect of symmetry on landscape connectivity by accounting for the weight permutations of the networks being connected.
arXiv Detail & Related papers (2020-09-05T02:25:23Z) - Improving predictions of Bayesian neural nets via local linearization [79.21517734364093]
We argue that the Gauss-Newton approximation should be understood as a local linearization of the underlying Bayesian neural network (BNN).
Because we use this linearized model for posterior inference, we should also predict using this modified model instead of the original one.
We refer to this modified predictive as "GLM predictive" and show that it effectively resolves common underfitting problems of the Laplace approximation.
arXiv Detail & Related papers (2020-08-19T12:35:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.