3DMeshNet: A Three-Dimensional Differential Neural Network for Structured Mesh Generation
- URL: http://arxiv.org/abs/2407.01560v1
- Date: Tue, 7 May 2024 13:07:07 GMT
- Title: 3DMeshNet: A Three-Dimensional Differential Neural Network for Structured Mesh Generation
- Authors: Jiaming Peng, Xinhai Chen, Jie Liu
- Abstract summary: We propose a novel method, 3DMeshNet, for three-dimensional structured mesh generation.
3DMeshNet embeds the meshing-related differential equations into the loss function of neural networks.
It can efficiently output a three-dimensional structured mesh with a user-defined number of quadrilateral/hexahedral cells.
- Score: 2.892556380266997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mesh generation is a crucial step in numerical simulations, significantly impacting simulation accuracy and efficiency. However, generating meshes remains time-consuming and requires expensive computational resources. In this paper, we propose a novel method, 3DMeshNet, for three-dimensional structured mesh generation. The method embeds the meshing-related differential equations into the loss function of neural networks, formulating the meshing task as an unsupervised optimization problem. It takes geometric points as input to learn the potential mapping between parametric and computational domains. After suitable offline training, 3DMeshNet can efficiently output a three-dimensional structured mesh with a user-defined number of quadrilateral/hexahedral cells through the feed-forward neural prediction. To enhance training stability and accelerate convergence, we integrate loss function reweighting through weight adjustments and gradient projection alongside applying finite difference methods to streamline derivative computations in the loss. Experiments on different cases show that 3DMeshNet is robust and fast. It outperforms neural network-based methods and yields superior meshes compared to traditional mesh partitioning methods. 3DMeshNet significantly reduces training times by up to 85% compared to other neural network-based approaches and lowers meshing overhead by 4 to 8 times relative to traditional meshing methods.
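The loss-embedding idea described in the abstract can be sketched in a few lines. The following is a minimal illustrative example, not the authors' implementation: it uses a toy MLP, a Laplace-type residual as a stand-in for the paper's meshing equations, and central finite differences for the second derivatives, as the abstract describes. Network sizes, the sampling scheme, and the step size `h` are arbitrary assumptions; a full training loop, boundary-fitting terms, and the reweighting/gradient-projection stabilizers are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP mapping computational coords (xi, eta, zeta) to
# physical coords (x, y, z). Sizes are arbitrary for illustration.
W1, b1 = 0.5 * rng.normal(size=(3, 32)), np.zeros(32)
W2, b2 = 0.5 * rng.normal(size=(32, 3)), np.zeros(3)

def net(xi):
    return np.tanh(xi @ W1 + b1) @ W2 + b2

def pde_residual_loss(xi, h=1e-3):
    # Second derivatives via central finite differences along each
    # computational axis, mirroring the abstract's use of finite
    # differences to streamline derivative computations in the loss.
    lap = np.zeros((xi.shape[0], 3))
    for d in range(3):
        e = np.zeros_like(xi)
        e[:, d] = h
        lap += (net(xi + e) - 2.0 * net(xi) + net(xi - e)) / h**2
    return float((lap ** 2).mean())  # scalar residual to be minimized

xi = rng.uniform(size=(256, 3))   # interior sample points in [0,1]^3
loss = pde_residual_loss(xi)      # a real setup adds boundary-fitting terms
```

Minimizing this residual over the network weights is what turns meshing into the unsupervised optimization problem the abstract refers to.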
Related papers
- Concurrent Training and Layer Pruning of Deep Neural Networks [0.0]
We propose an algorithm capable of identifying and eliminating irrelevant layers of a neural network during the early stages of training.
We employ a structure using residual connections around nonlinear network sections that allow the flow of information through the network once a nonlinear section is pruned.
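The residual-connection design in the summary above can be illustrated with a hedged sketch (the class, dimensions, and weights are hypothetical, not from the paper): wrapping each nonlinear section as y = x + f(x) means that pruning f reduces the section to the identity map, so information keeps flowing through the network.

```python
import numpy as np

# Sketch: each nonlinear section is wrapped in a residual connection,
# so "eliminating" the section leaves the identity shortcut intact.
class ResidualSection:
    def __init__(self, dim, rng):
        self.W = 0.1 * rng.normal(size=(dim, dim))
        self.pruned = False

    def forward(self, x):
        if self.pruned:
            return x                      # identity shortcut only
        return x + np.tanh(x @ self.W)    # skip connection + nonlinear body

rng = np.random.default_rng(1)
sections = [ResidualSection(4, rng) for _ in range(3)]
x = rng.normal(size=(2, 4))
sections[1].pruned = True                 # prune an "irrelevant" section
out = x
for s in sections:
    out = s.forward(out)
```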
arXiv Detail & Related papers (2024-06-06T23:19:57Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs)
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Scalable Bayesian Inference in the Era of Deep Learning: From Gaussian Processes to Deep Neural Networks [0.5827521884806072]
Large neural networks trained on large datasets have become the dominant paradigm in machine learning.
This thesis develops scalable methods to equip neural networks with model uncertainty.
arXiv Detail & Related papers (2024-04-29T23:38:58Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- An Improved Structured Mesh Generation Method Based on Physics-informed Neural Networks [13.196871939441273]
As numerical algorithms become more efficient and computers become more powerful, the percentage of time devoted to mesh generation becomes higher.
In this paper, we present an improved structured mesh generation method.
The method formulates the meshing problem as a global optimization problem related to a physics-informed neural network.
arXiv Detail & Related papers (2022-10-18T02:45:14Z)
- NeuralMeshing: Differentiable Meshing of Implicit Neural Representations [63.18340058854517]
We propose a novel differentiable meshing algorithm for extracting surface meshes from neural implicit representations.
Our method produces meshes with regular tessellation patterns and fewer triangle faces compared to existing methods.
arXiv Detail & Related papers (2022-10-05T16:52:25Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- $S^3$: Sign-Sparse-Shift Reparametrization for Effective Training of Low-bit Shift Networks [41.54155265996312]
Shift neural networks reduce complexity by removing expensive multiplication operations and quantizing continuous weights into low-bit discrete values.
Our proposed training method pushes the boundaries of shift neural networks, showing that 3-bit shift networks outperform their full-precision counterparts in top-1 accuracy on ImageNet.
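As background for the summary above, here is a hedged sketch of the general shift-network idea (not the paper's $S^3$ reparametrization): weights are quantized to zero or a signed power of two, so multiplications can become bit-shifts. The exponent range and thresholds below are arbitrary assumptions for illustration.

```python
import numpy as np

def quantize_to_shift(w, exp_min=-4, exp_max=-1):
    # Snap each weight to sign * 2**exp with exp in [exp_min, exp_max];
    # weights far below the smallest representable magnitude become 0
    # (the "sparse" part of sign-sparse-shift style quantization).
    sign = np.sign(w)
    mag = np.abs(w)
    exp = np.clip(np.round(np.log2(np.maximum(mag, 1e-12))), exp_min, exp_max)
    q = sign * 2.0 ** exp
    q[mag < 2.0 ** (exp_min - 1)] = 0.0
    return q

w = np.array([0.3, -0.07, 0.001, -0.5])
qw = quantize_to_shift(w)   # -> [0.25, -0.0625, 0.0, -0.5]
```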
arXiv Detail & Related papers (2021-07-07T19:33:02Z)
- Hessian Aware Quantization of Spiking Neural Networks [1.90365714903665]
Neuromorphic architecture allows massively parallel computation with variable and local bit-precisions.
Current gradient based methods of SNN training use a complex neuron model with multiple state variables.
We present a simplified neuron model that reduces the number of state variables by 4-fold while still being compatible with gradient based training.
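To illustrate what a low-state-count neuron model looks like, here is a standard single-state leaky integrate-and-fire step (an illustrative textbook baseline, not the paper's model): the membrane potential `v` is the only state variable, and the decay and threshold values are arbitrary.

```python
import numpy as np

def lif_step(v, inp, decay=0.9, threshold=1.0):
    # Leaky integrate-and-fire: leak, integrate input, fire on
    # threshold crossing, then hard-reset the membrane potential.
    v = decay * v + inp
    spike = v >= threshold
    v = np.where(spike, 0.0, v)
    return v, spike.astype(float)

v = np.zeros(3)
spikes = []
for t in range(5):
    v, s = lif_step(v, inp=np.array([0.5, 0.2, 1.2]))
    spikes.append(s)
```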
arXiv Detail & Related papers (2021-04-29T05:27:34Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study large-scale distributed stochastic AUC maximization for deep neural networks.
Our method requires far fewer communication rounds while retaining its theoretical guarantees.
Experiments on several datasets demonstrate the effectiveness of the method and confirm the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.