M2N: Mesh Movement Networks for PDE Solvers
- URL: http://arxiv.org/abs/2204.11188v1
- Date: Sun, 24 Apr 2022 04:23:31 GMT
- Title: M2N: Mesh Movement Networks for PDE Solvers
- Authors: Wenbin Song, Mingrui Zhang, Joseph G. Wallwork, Junpeng Gao, Zheng
Tian, Fanglei Sun, Matthew D. Piggott, Junqing Chen, Zuoqiang Shi, Xiang
Chen, Jun Wang
- Abstract summary: We present the first learning-based end-to-end mesh movement framework for PDE solvers.
Key requirements are alleviating mesh tangling, ensuring boundary consistency, and generalizing to meshes with different resolutions.
We validate our methods on stationary and time-dependent, linear and non-linear equations.
- Score: 17.35053721712421
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mainstream numerical Partial Differential Equation (PDE) solvers require
discretizing the physical domain using a mesh. Mesh movement methods aim to
improve the accuracy of the numerical solution by increasing mesh resolution
where the solution is not well-resolved, whilst reducing unnecessary resolution
elsewhere. However, mesh movement methods, such as the Monge-Ampère method,
require the solution of auxiliary equations, which can be extremely expensive,
especially when the mesh is adapted frequently. In this paper, we propose, to
the best of our knowledge, the first learning-based end-to-end mesh movement
framework for PDE solvers. The key requirements of learning-based mesh movement
methods are avoiding mesh tangling, preserving boundary consistency, and
generalizing to meshes with different resolutions. To achieve these goals, we
introduce a neural spline model and a graph attention network (GAT) into our
framework. While the neural-spline-based model provides more flexibility for
large deformations, the GAT-based model can handle domains with more
complicated shapes and is better at performing delicate local deformations. We
validate our
methods on stationary and time-dependent, linear and non-linear equations, as
well as regularly and irregularly shaped domains. Compared to the traditional
Monge-Ampère method, our approach can greatly accelerate the mesh adaptation
process, whilst achieving comparable numerical error reduction.
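To make the r-adaptivity idea concrete, here is a minimal sketch (not the paper's learned method, and not the full Monge-Ampère formulation) of classical one-dimensional mesh movement by equidistribution: mesh points are relocated so that each cell carries an equal share of the integral of a monitor function, concentrating resolution where the solution is under-resolved. The monitor function and point counts below are illustrative choices.

```python
import numpy as np

def equidistribute(monitor, n_points, domain=(0.0, 1.0), n_quad=2000):
    """Place n_points mesh coordinates so `monitor` is equidistributed over `domain`."""
    x = np.linspace(domain[0], domain[1], n_quad)
    m = monitor(x)
    # Cumulative "mass" of the monitor function (trapezoidal rule), normalised to [0, 1].
    cumulative = np.concatenate(
        [[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x))]
    )
    cumulative /= cumulative[-1]
    # Invert the cumulative map at equally spaced levels: equal mass per cell.
    levels = np.linspace(0.0, 1.0, n_points)
    return np.interp(levels, cumulative, x)

# Illustrative monitor concentrating resolution near a sharp feature at x = 0.5,
# mimicking refinement where a PDE solution is poorly resolved.
monitor = lambda x: 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)
mesh = equidistribute(monitor, 21)

# Cells near x = 0.5 come out much smaller than cells near the boundary.
widths = np.diff(mesh)
print(widths.min() < 0.2 * widths.max())  # True
```

The learned approaches above aim to produce comparable relocations without solving such auxiliary problems at every adaptation step.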
Related papers
- G-Adaptive mesh refinement -- leveraging graph neural networks and differentiable finite element solvers [21.82887690060956]
Mesh relocation (r-adaptivity) seeks to optimise the position of a fixed number of mesh points to obtain the best FE solution accuracy.
Recent machine learning approaches to r-adaptivity have mainly focused on the construction of fast surrogates for such classical methods.
Our new approach combines a graph neural network (GNN) powered architecture, with training based on direct minimisation of the FE solution error.
arXiv Detail & Related papers (2024-07-05T13:57:35Z)
- Towards Universal Mesh Movement Networks [13.450178050669964]
We introduce the Universal Mesh Movement Network (UM2N).
UM2N can be applied in a non-intrusive, zero-shot manner to move meshes with different size distributions and structures.
We evaluate our method on advection and Navier-Stokes based examples, as well as a real-world tsunami simulation case.
arXiv Detail & Related papers (2024-06-29T09:35:12Z)
- Better Neural PDE Solvers Through Data-Free Mesh Movers [13.013830215107735]
We develop a moving mesh based neural PDE solver (MM-PDE) that embeds the moving mesh with a two-branch architecture.
Our method generates suitable meshes and considerably enhances accuracy when modeling widely considered PDE systems.
arXiv Detail & Related papers (2023-12-09T14:05:28Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver which prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Physics-constrained Unsupervised Learning of Partial Differential Equations using Meshes [1.066048003460524]
Graph neural networks show promise in accurately representing irregularly meshed objects and learning their dynamics.
In this work, we represent meshes naturally as graphs, process these using Graph Networks, and formulate a physics-based loss to provide an unsupervised learning framework for partial differential equations (PDEs).
Our framework will enable the application of PDE solvers in interactive settings, such as model-based control of soft-body deformations.
arXiv Detail & Related papers (2022-03-30T19:22:56Z)
- Mesh Draping: Parametrization-Free Neural Mesh Transfer [92.55503085245304]
Mesh Draping is a neural method for transferring existing mesh structure from one shape to another.
We show that by leveraging gradually increasing frequencies to guide the neural optimization, we are able to achieve stable and high quality mesh transfer.
arXiv Detail & Related papers (2021-10-11T17:24:52Z)
- Cogradient Descent for Dependable Learning [64.02052988844301]
We propose a dependable learning method based on the Cogradient Descent (CoGD) algorithm to address the bilinear optimization problem.
CoGD is introduced to solve bilinear problems when one variable has a sparsity constraint.
It can also be used to decompose the association of features and weights, which further generalizes our method to better train convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-06-20T04:28:20Z)
- Graph Convolutional Networks for Model-Based Learning in Nonlinear Inverse Problems [2.0999222360659604]
We present a flexible framework to extend model-based learning directly to nonuniform meshes.
This gives rise to the proposed iterative Graph Convolutional Newton's Method (GCNM).
We show that the GCNM has strong generalizability to different domain shapes.
arXiv Detail & Related papers (2021-03-28T14:19:56Z)
- ResNet-LDDMM: Advancing the LDDMM Framework Using Deep Residual Networks [86.37110868126548]
In this work, we make use of deep residual neural networks to solve the non-stationary ODE (flow equation) based on an Euler discretization scheme.
We illustrate these ideas on diverse registration problems of 3D shapes under complex topology-preserving transformations.
arXiv Detail & Related papers (2021-02-16T04:07:13Z)
- GACEM: Generalized Autoregressive Cross Entropy Method for Multi-Modal Black Box Constraint Satisfaction [69.94831587339539]
We present a modified Cross-Entropy Method (CEM) that uses a masked auto-regressive neural network for modeling uniform distributions over the solution space.
Our algorithm is able to express complicated solution spaces, thus allowing it to track a variety of different solution regions.
arXiv Detail & Related papers (2020-02-17T20:21:20Z)
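For context on the GACEM entry, here is a minimal sketch of the vanilla Cross-Entropy Method it generalizes: fit a Gaussian to the elite (lowest-cost) samples so the sampling distribution concentrates on good regions. GACEM itself replaces the Gaussian with a masked autoregressive network to capture multi-modal solution spaces; the toy cost function and hyperparameters below are illustrative assumptions, not from the paper.

```python
import numpy as np

def cem_minimize(cost, dim, iters=50, pop=200, elite_frac=0.1, seed=0):
    """Vanilla CEM: iteratively refit a diagonal Gaussian to the elite samples."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(2, int(pop * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, dim))
        # Keep the lowest-cost fraction of the population as elites.
        elites = samples[np.argsort([cost(s) for s in samples])[:n_elite]]
        # Refit the sampling distribution to the elites (small floor avoids collapse).
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# Toy constraint-satisfaction problem: drive the residual ||x - target|| to zero.
target = np.array([1.5, -0.5])
solution = cem_minimize(lambda x: np.sum((x - target) ** 2), dim=2)
print(np.allclose(solution, target, atol=1e-2))  # True
```

A single Gaussian can only track one solution region at a time, which is exactly the limitation the autoregressive model in GACEM is designed to remove.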
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.