An Improved Structured Mesh Generation Method Based on Physics-informed
Neural Networks
- URL: http://arxiv.org/abs/2210.09546v1
- Date: Tue, 18 Oct 2022 02:45:14 GMT
- Title: An Improved Structured Mesh Generation Method Based on Physics-informed
Neural Networks
- Authors: Xinhai Chen, Jie Liu, Junjun Yan, Zhichao Wang, Chunye Gong
- Abstract summary: As numerical algorithms become more efficient and computers become more powerful, the percentage of time devoted to mesh generation becomes higher.
In this paper, we present an improved structured mesh generation method.
The method formulates the meshing problem as a global optimization problem related to a physics-informed neural network.
- Score: 13.196871939441273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mesh generation remains a key technology in many areas where numerical
simulations are required. As numerical algorithms become more efficient and
computers become more powerful, the percentage of time devoted to mesh
generation becomes higher. In this paper, we present an improved structured
mesh generation method. The method formulates the meshing problem as a global
optimization problem related to a physics-informed neural network. The mesh is
obtained by intelligently solving the physical boundary-constrained partial
differential equations. To improve the prediction accuracy of the neural
network, we also introduce a novel auxiliary line strategy and an efficient
network model during meshing. The strategy first employs a priori auxiliary
lines to provide ground truth data and then uses these data to construct a loss
term to better constrain the convergence of the subsequent training. The
experimental results indicate that the proposed method is effective and robust.
It can accurately approximate the mapping (transformation) from the
computational domain to the physical domain and enable fast high-quality
structured mesh generation.
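For a concrete picture of this formulation, the sketch below is an illustrative PyTorch rendering (not the authors' code): a small network maps computational coordinates (xi, eta) to physical coordinates (x, y) and is trained with a composite loss consisting of a governing-equation residual (a Laplace-type stand-in is used here for the actual meshing equations), a boundary-geometry constraint, and a supervised term on hypothetical auxiliary-line points, which is where an auxiliary-line strategy of the kind described above would enter the objective.

```python
# Illustrative sketch only, not the paper's implementation. Assumes a 2D
# mapping (xi, eta) -> (x, y), Laplace-type governing equations, and
# hypothetical tensors of interior, boundary, and auxiliary-line samples.
import torch
import torch.nn as nn

class MappingNet(nn.Module):
    """Small MLP approximating the computational-to-physical mapping."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),
        )

    def forward(self, xi_eta):
        return self.net(xi_eta)

def laplacian(model, pts):
    """Laplacian of each network output with respect to the two inputs."""
    pts = pts.requires_grad_(True)
    out = model(pts)                                   # (N, 2) physical coordinates
    comps = []
    for k in range(out.shape[1]):
        g = torch.autograd.grad(out[:, k].sum(), pts, create_graph=True)[0]  # (N, 2)
        lap_k = torch.zeros(pts.shape[0])
        for d in range(pts.shape[1]):
            lap_k = lap_k + torch.autograd.grad(g[:, d].sum(), pts, create_graph=True)[0][:, d]
        comps.append(lap_k)
    return torch.stack(comps, dim=1)                   # (N, 2) PDE residuals

def mesh_loss(model, interior, boundary_in, boundary_xy, aux_in, aux_xy):
    loss_pde = (laplacian(model, interior) ** 2).mean()         # governing-equation residual
    loss_bc = ((model(boundary_in) - boundary_xy) ** 2).mean()  # boundary geometry constraint
    loss_aux = ((model(aux_in) - aux_xy) ** 2).mean()           # auxiliary-line ground truth
    return loss_pde + loss_bc + loss_aux
```

Training then amounts to minimizing mesh_loss over sampled interior, boundary, and auxiliary-line points with a standard optimizer such as Adam or L-BFGS, after which the trained network is evaluated on a grid in the computational domain to produce the structured mesh.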
Related papers
- MeshONet: A Generalizable and Efficient Operator Learning Method for Structured Mesh Generation [14.124041386580481]
MeshONet is the first generalizable intelligent learning method for structured mesh generation.
It achieves a speedup of up to four orders of magnitude in generation efficiency over traditional methods.
arXiv Detail & Related papers (2025-01-21T07:27:05Z)
- Reconstructing Deep Neural Networks: Unleashing the Optimization Potential of Natural Gradient Descent [12.00557940490703]
We propose a novel optimization method for training deep neural networks called structured natural gradient descent (SNGD).
Our proposed method has the potential to significantly improve the scalability and efficiency of NGD in deep learning applications.
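For context only (this is not the paper's SNGD), a plain natural-gradient step preconditions the averaged gradient with the inverse of a damped empirical Fisher matrix; structured variants replace this dense matrix with a cheaper structured approximation. The toy sketch below forms the Fisher explicitly, which is feasible only for very small parameter vectors.

```python
# Minimal sketch of a plain natural-gradient step (not the paper's SNGD).
# The empirical Fisher is built from stacked per-sample gradients and is
# formed explicitly here purely to fix notation.
import torch

def natural_gradient_step(params, per_sample_grads, lr=1e-2, damping=1e-3):
    """params: flat (P,) tensor; per_sample_grads: (N, P) stacked gradients."""
    g = per_sample_grads.mean(dim=0)                                  # average gradient
    fisher = per_sample_grads.T @ per_sample_grads / per_sample_grads.shape[0]
    fisher = fisher + damping * torch.eye(fisher.shape[0])            # damping for invertibility
    step = torch.linalg.solve(fisher, g)                              # F^{-1} g
    return params - lr * step
```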
arXiv Detail & Related papers (2024-12-10T11:57:47Z)
- Concurrent Training and Layer Pruning of Deep Neural Networks [0.0]
We propose an algorithm capable of identifying and eliminating irrelevant layers of a neural network during the early stages of training.
We employ a structure using residual connections around nonlinear network sections that allow the flow of information through the network once a nonlinear section is pruned.
arXiv Detail & Related papers (2024-06-06T23:19:57Z)
- Improving Generalization of Deep Neural Networks by Optimum Shifting [33.092571599896814]
We propose a novel method called optimum shifting, which changes the parameters of a neural network from a sharp minimum to a flatter one.
Our method is based on the observation that when the input and output of a neural network are fixed, the matrix multiplications within the network can be treated as systems of under-determined linear equations.
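The observation itself can be reproduced in a few lines (an illustration of the idea, not the paper's algorithm): when a layer's input batch is smaller than its width, X @ W = Y is under-determined, so the weights can be replaced, for example by the minimum-norm pseudoinverse solution, without changing the layer's output on that batch.

```python
# Illustration of the under-determined-system observation only.
import torch

torch.manual_seed(0)
n, d_in, d_out = 8, 32, 16           # batch smaller than layer width -> under-determined
X = torch.randn(n, d_in)
W = torch.randn(d_in, d_out)
Y = X @ W                            # fixed input/output pair for this layer

# Minimum-norm solution of X @ W_new = Y via the pseudoinverse.
W_new = torch.linalg.pinv(X) @ Y
print(torch.allclose(X @ W_new, Y, atol=1e-3))   # same layer output, different weights
print(torch.norm(W_new), torch.norm(W))          # min-norm solution: norm no larger than W
```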
arXiv Detail & Related papers (2024-05-23T02:31:55Z)
- 3DMeshNet: A Three-Dimensional Differential Neural Network for Structured Mesh Generation [2.892556380266997]
We propose a novel method, 3DMeshNet, for three-dimensional structured mesh generation.
3DMeshNet embeds the meshing-related differential equations into the loss function of neural networks.
It can efficiently output a three-dimensional structured mesh with a user-defined number of quadrilateral/hexahedral cells.
arXiv Detail & Related papers (2024-05-07T13:07:07Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- MultiScale MeshGraphNets [65.26373813797409]
We propose two complementary approaches to improve the framework from MeshGraphNets.
First, we demonstrate that it is possible to learn accurate surrogate dynamics of a high-resolution system on a much coarser mesh.
Second, we introduce a hierarchical approach (MultiScale MeshGraphNets) which passes messages on two different resolutions.
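As a rough illustration of the two-resolution idea only (not the actual MeshGraphNets layers), the toy block below performs local message passing on a fine graph, pools features to a coarse graph for long-range propagation, and broadcasts the result back. Dense, row-normalized adjacency matrices and a made-up assignment matrix P are assumed for brevity.

```python
# Toy sketch of message passing on two resolutions (dense matrices, made-up
# shapes; not the MultiScale MeshGraphNets architecture).
import torch
import torch.nn as nn

class TwoLevelBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fine_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.coarse_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, A_fine, A_coarse, P):
        # h: (N_fine, dim) node features; A_*: row-normalized adjacency matrices;
        # P: (N_coarse, N_fine) assignment of fine nodes to coarse nodes.
        h = h + self.fine_mlp(A_fine @ h)          # local messages on the fine mesh
        hc = P @ h                                  # pool to the coarse mesh
        hc = hc + self.coarse_mlp(A_coarse @ hc)    # long-range messages on the coarse mesh
        return h + P.T @ hc                         # broadcast coarse information back
```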
arXiv Detail & Related papers (2022-10-02T20:16:20Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Physics informed neural networks for continuum micromechanics [68.8204255655161]
Recently, physics-informed neural networks have successfully been applied to a broad variety of problems in applied mathematics and engineering.
Due to their global approximation, physics-informed neural networks have difficulty resolving localized effects and strongly non-linear solutions by optimization.
It is shown that the domain decomposition approach is able to accurately resolve nonlinear stress, displacement, and energy fields in heterogeneous microstructures obtained from real-world $\mu$CT scans.
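A minimal sketch of the loss structure behind such a domain-decomposition approach (a generic illustration, not the paper's exact formulation): one network per subdomain, a physics residual inside each subdomain, and a penalty enforcing continuity of the predicted fields across shared interfaces. The pde_residual callable is a hypothetical placeholder for the problem-specific mechanics equations.

```python
# Conceptual sketch of a domain-decomposition PINN loss (generic illustration).
import torch

def dd_pinn_loss(models, subdomain_pts, interface_pairs, pde_residual):
    """models: list of per-subdomain networks; subdomain_pts: list of collocation
    tensors; interface_pairs: list of (i, j, pts) shared-interface samples;
    pde_residual: callable(model, pts) -> residual tensor (problem-specific)."""
    loss = torch.zeros(())
    for model, pts in zip(models, subdomain_pts):
        loss = loss + (pde_residual(model, pts) ** 2).mean()           # local physics loss
    for i, j, pts in interface_pairs:
        loss = loss + ((models[i](pts) - models[j](pts)) ** 2).mean()  # continuity at interfaces
    return loss
```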
arXiv Detail & Related papers (2021-10-14T14:05:19Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient descent combined with nonconvexity renders learning susceptible to initialization problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)