Better Neural PDE Solvers Through Data-Free Mesh Movers
- URL: http://arxiv.org/abs/2312.05583v2
- Date: Mon, 19 Feb 2024 14:41:30 GMT
- Title: Better Neural PDE Solvers Through Data-Free Mesh Movers
- Authors: Peiyan Hu, Yue Wang, Zhi-Ming Ma
- Abstract summary: We develop a moving mesh based neural PDE solver (MM-PDE) that embeds the moving mesh with a two-branch architecture.
Our method generates suitable meshes and considerably enhances accuracy when modeling widely considered PDE systems.
- Score: 13.013830215107735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, neural networks have been extensively employed to solve partial
differential equations (PDEs) in physical system modeling. While major studies
focus on learning system evolution on predefined static mesh discretizations,
some methods utilize reinforcement learning or supervised learning techniques
to create adaptive and dynamic meshes, due to the dynamic nature of these
systems. However, these approaches face two primary challenges: (1) the need
for expensive optimal mesh data, and (2) changes in the solution space's
degrees of freedom and topology during mesh refinement. To address these
challenges, this paper proposes a neural PDE solver with a neural mesh adapter.
To begin with, we introduce a novel data-free neural mesh adapter, called the
Data-free Mesh Mover (DMM), with two main innovations. Firstly, it is an
operator that maps the solution to adaptive meshes and is trained using the
Monge-Ampère equation without optimal mesh data. Secondly, it dynamically
changes the mesh by moving existing nodes rather than adding or deleting nodes
and edges. Theoretical analysis shows that meshes generated by DMM have the
lowest interpolation error bound. Based on DMM, to efficiently and accurately
model dynamic systems, we develop a moving mesh based neural PDE solver
(MM-PDE) that embeds the moving mesh with a two-branch architecture and a
learnable interpolation framework to preserve information within the data.
Empirical experiments demonstrate that our method generates suitable meshes and
considerably enhances accuracy when modeling widely considered PDE systems. The
code can be found at: https://github.com/Peiyannn/MM-PDE.git.
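The core idea behind moving-mesh adaptation, relocating a fixed set of nodes so that cells shrink where the solution varies rapidly, can be illustrated with a simple 1D equidistribution sketch. This is a minimal stand-in for the paper's Monge-Ampère formulation, not the DMM itself; the function names and the arc-length monitor function are illustrative choices.

```python
import numpy as np

def equidistribute(u, x_fine, n_nodes):
    """Place n_nodes mesh points so each cell carries equal monitor mass.

    The monitor m(x) = sqrt(1 + u'(x)^2) (an arc-length monitor, a common
    illustrative choice) is large where u varies rapidly, so equidistributing
    its integral concentrates nodes near steep features. Note that nodes are
    only moved: their count and connectivity never change.
    """
    du = np.gradient(u, x_fine)
    m = np.sqrt(1.0 + du**2)
    # Cumulative monitor mass via the trapezoidal rule, normalized to [0, 1].
    M = np.concatenate(
        [[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x_fine))]
    )
    M /= M[-1]
    # Invert the cumulative mass: nodes sit at equal increments of mass.
    targets = np.linspace(0.0, 1.0, n_nodes)
    return np.interp(targets, M, x_fine)

# Example: a steep front at x = 0.5 attracts nodes.
x_fine = np.linspace(0.0, 1.0, 2001)
u = np.tanh(50.0 * (x_fine - 0.5))
mesh = equidistribute(u, x_fine, 21)
```

After running this, the cells of `mesh` near x = 0.5 are much smaller than those near the boundaries, while the endpoints stay fixed at 0 and 1. DMM replaces this hand-written inversion with a learned operator from solutions to meshes, trained against the Monge-Ampère equation rather than supervised mesh data.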
Related papers
- A Nonoverlapping Domain Decomposition Method for Extreme Learning Machines: Elliptic Problems [0.0]
Extreme learning machine (ELM) is a methodology for solving partial differential equations (PDEs) using a single hidden layer feed-forward neural network.
In this paper, we propose a nonoverlapping domain decomposition method (DDM) for ELMs that not only reduces the training time of ELMs, but is also suitable for parallel computation.
arXiv Detail & Related papers (2024-06-22T23:25:54Z) - Iterative Sizing Field Prediction for Adaptive Mesh Generation From Expert Demonstrations [49.173541207550485]
Adaptive Meshing By Expert Reconstruction (AMBER) frames adaptive mesh generation as an imitation learning problem.
AMBER combines a graph neural network with an online data acquisition scheme to predict the projected sizing field of an expert mesh.
We experimentally validate AMBER on 2D meshes and 3D meshes provided by a human expert, closely matching the provided demonstrations and outperforming a single-step CNN baseline.
arXiv Detail & Related papers (2024-06-20T10:01:22Z) - Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk, and read back for training.
This paper proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z) - Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade-off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z) - Accelerated Solutions of Coupled Phase-Field Problems using Generative
Adversarial Networks [0.0]
We develop a new neural network based framework that uses encoder-decoder based conditional GeneLSTM layers to solve a system of Cahn-Hilliard microstructural equations.
We show that the trained models are mesh and scale-independent, thereby warranting application as effective neural operators.
arXiv Detail & Related papers (2022-11-22T08:32:22Z) - M2N: Mesh Movement Networks for PDE Solvers [17.35053721712421]
We present the first learning-based end-to-end mesh movement framework for PDE solvers.
Key requirements are alleviating mesh tangling, preserving boundary consistency, and generalizing to meshes with different resolutions.
We validate our methods on stationary and time-dependent, linear and non-linear equations.
arXiv Detail & Related papers (2022-04-24T04:23:31Z) - dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z) - Accurate and efficient Simulation of very high-dimensional Neural Mass
Models with distributed-delay Connectome Tensors [0.23453441553817037]
This paper introduces methods that efficiently integrate high-dimensional Neural Mass Models (NMMs) specified by two essential components.
The first is the set of nonlinear Random Differential Equations of the dynamics of each neural mass.
The second is the highly sparse three-dimensional Connectome (CT) that encodes the strength of the connections and the delays of information transfer along the axons of each connection.
arXiv Detail & Related papers (2020-09-16T05:55:17Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions, as well as state-of-the-art numerical solvers such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.