Quasi-optimal $hp$-finite element refinements towards singularities via
deep neural network prediction
- URL: http://arxiv.org/abs/2209.05844v1
- Date: Tue, 13 Sep 2022 09:45:57 GMT
- Title: Quasi-optimal $hp$-finite element refinements towards singularities via
deep neural network prediction
- Authors: Tomasz Sluzalec, Rafal Grzeszczuk, Sergio Rojas, Witold Dzwinel,
Maciej Paszynski
- Abstract summary: We show how to construct a deep neural network expert to predict quasi-optimal $hp$-refinements for a given computational problem.
For the training, we use a self-adaptive $hp$-FEM algorithm based on the two-grid paradigm.
We show that the exponential convergence delivered by the self-adaptive $hp$-FEM can be preserved if we continue refinements with a properly trained DNN expert.
- Score: 0.3149883354098941
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show how to construct a deep neural network (DNN) expert to predict
quasi-optimal $hp$-refinements for a given computational problem. The main idea
is to train the DNN expert while executing the self-adaptive $hp$-finite
element method ($hp$-FEM) algorithm and to use it later to predict further $hp$
refinements. For the training, we use a self-adaptive $hp$-FEM algorithm based
on the two-grid paradigm, which employs the fine mesh to provide the optimal
$hp$ refinements for the coarse mesh elements. We aim to construct a DNN expert
that identifies quasi-optimal $hp$ refinements of the coarse mesh elements.
During the training phase, we use a direct solver to obtain the solution on the
fine mesh, which guides the optimal refinements of the coarse mesh elements.
After training, we turn off the self-adaptive $hp$-FEM algorithm and continue
with the quasi-optimal refinements proposed by the trained DNN expert. We test
our method on the three-dimensional Fichera and the two-dimensional L-shaped
domain problems. We verify the convergence of the numerical accuracy with
respect to the mesh size. We show that the exponential convergence delivered by
the self-adaptive $hp$-FEM can be preserved if we continue the refinements with
a properly trained DNN expert. Thus, in this paper, we show that the
self-adaptive $hp$-FEM makes it possible to teach the DNN expert the location
of the singularities and to continue with the selection of quasi-optimal $hp$
refinements, preserving the exponential convergence of the method.
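The abstract does not specify a network architecture or an input encoding. The sketch below is a minimal illustration, not the authors' implementation: it assumes a small set of per-element features (e.g. element size, polynomial order, a local error indicator) and a handful of refinement classes, and shows how a DNN expert could be trained on the optimal refinements produced by the two-grid self-adaptive $hp$-FEM and then used to predict refinements once the self-adaptive loop is switched off. All names, the feature layout, and the class set are assumptions.

```python
# Hypothetical sketch (not the authors' code): a small MLP "expert" mapping
# per-element features of the coarse mesh to an hp-refinement decision.
# In the two-grid paradigm, the training labels would come from the optimal
# refinements selected with the help of the fine-mesh solve.
import torch
import torch.nn as nn

N_FEATURES = 6            # e.g. element size h, order p, error indicator, coordinates (assumed)
N_REFINEMENT_CLASSES = 5  # e.g. keep, p+1, isotropic h, anisotropic h in x, anisotropic h in y (assumed)

class HpRefinementExpert(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_REFINEMENT_CLASSES),
        )

    def forward(self, element_features):
        return self.net(element_features)  # class logits per coarse element

def train_expert(features, labels, epochs=200):
    """features: (n_elements, N_FEATURES) float tensor;
    labels: (n_elements,) long tensor of optimal refinements from the fine-mesh solve."""
    model = HpRefinementExpert()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()
        opt.step()
    return model

# After training, the two-grid self-adaptive loop is switched off and each
# coarse element is refined according to model(features).argmax(dim=1).
```

In this setting the expensive fine-mesh solves are needed only to generate the training labels; afterwards the trained expert alone selects the quasi-optimal refinements.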
Related papers
- Efficient k-Nearest-Neighbor Machine Translation with Dynamic Retrieval [49.825549809652436]
$k$NN-MT constructs an external datastore to store domain-specific translation knowledge.
Adaptive retrieval ($k$NN-MT-AR) dynamically estimates $\lambda$ and skips $k$NN retrieval if $\lambda$ is less than a fixed threshold.
We propose dynamic retrieval ($k$NN-MT-DR) that significantly extends vanilla $k$NN-MT in two aspects.
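A minimal sketch of the threshold-based skipping idea summarized above, assuming a learned scalar estimator for $\lambda$ and a datastore lookup routine (both hypothetical placeholders here, not the paper's implementation):

```python
# Hypothetical sketch: skip the costly kNN datastore lookup when the estimated
# interpolation weight lambda falls below a threshold; otherwise interpolate
# the NMT distribution with the retrieved kNN distribution.
def predict_next_token(nmt_probs, estimate_lambda, knn_lookup, threshold=0.1):
    lam = estimate_lambda()      # assumed: scalar in [0, 1] from a learned estimator
    if lam < threshold:
        return nmt_probs         # retrieval skipped entirely
    knn_probs = knn_lookup()     # assumed: distribution from the external datastore
    return (1.0 - lam) * nmt_probs + lam * knn_probs
```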
arXiv Detail & Related papers (2024-06-10T07:36:55Z) - A multiobjective continuation method to compute the regularization path of deep neural networks [1.3654846342364308]
Sparsity is a highly desirable feature in deep neural networks (DNNs) since it ensures numerical efficiency, improves the interpretability of models, and enhances robustness.
We present an algorithm that allows for computing the entire sparse front for the above-mentioned objectives in a very efficient manner for high-dimensional gradients with millions of parameters.
We demonstrate that knowledge of the regularization path allows for a well-generalizing network parametrization.
arXiv Detail & Related papers (2023-08-23T10:08:52Z) - Neural Greedy Pursuit for Feature Selection [72.4121881681861]
We propose a greedy algorithm to select $N$ important features among $P$ input features for a non-linear prediction problem.
We use neural networks as predictors in the algorithm to compute the loss.
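A minimal sketch of greedy feature selection driven by a neural-network loss, in the spirit of the summary above; the specific predictor, loss, and stopping rule are assumptions, not the paper's algorithm:

```python
# Hypothetical sketch: at each step, add the feature whose inclusion gives the
# lowest validation loss of a small neural-network predictor, until N features
# out of the P input features have been chosen.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def greedy_select(X_train, y_train, X_val, y_val, n_select):
    selected, remaining = [], list(range(X_train.shape[1]))
    while len(selected) < n_select:
        best_feat, best_loss = None, np.inf
        for f in remaining:
            cols = selected + [f]
            model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
            model.fit(X_train[:, cols], y_train)
            loss = mean_squared_error(y_val, model.predict(X_val[:, cols]))
            if loss < best_loss:
                best_feat, best_loss = f, loss
        selected.append(best_feat)
        remaining.remove(best_feat)
    return selected
```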
arXiv Detail & Related papers (2022-07-19T16:39:16Z) - Bounding the Width of Neural Networks via Coupled Initialization -- A
Worst Case Analysis [121.9821494461427]
We show how to significantly reduce the number of neurons required for two-layer ReLU networks.
We also prove new lower bounds that improve upon prior work, and that under certain assumptions, are best possible.
arXiv Detail & Related papers (2022-06-26T06:51:31Z) - Consistent Sparse Deep Learning: Theory and Computation [11.24471623055182]
We propose a frequentist-like method for learning sparse deep neural networks (DNNs).
The proposed method can perform very well for large-scale network compression and high-dimensional nonlinear variable selection.
arXiv Detail & Related papers (2021-02-25T23:31:24Z) - A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z) - Improve the Robustness and Accuracy of Deep Neural Network with
$L_{2,\infty}$ Normalization [0.0]
The robustness and accuracy of the deep neural network (DNN) were enhanced by introducing the $L_{2,\infty}$ normalization.
It is proved that the $L_{2,\infty}$ normalization leads to large dihedral angles between two adjacent faces of the polyhedron graph of the DNN function.
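A minimal sketch of one way to impose an $L_{2,\infty}$ constraint on a layer's weight matrix, i.e. to cap the maximum row-wise $\ell_2$ norm; where and how often such a step is applied during training follows the paper, not this sketch:

```python
# Hypothetical sketch: rescale each row of a weight matrix so that no row's
# l2 norm exceeds a bound, which caps the L_{2,infty} norm (the maximum
# row-wise l2 norm) of the matrix.
import torch

@torch.no_grad()
def l2_inf_normalize_(weight, max_row_norm=1.0):
    row_norms = weight.norm(dim=1, keepdim=True)             # l2 norm of each row
    scale = torch.clamp(row_norms / max_row_norm, min=1.0)   # only shrink rows exceeding the bound
    weight.div_(scale)
```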
arXiv Detail & Related papers (2020-10-10T05:45:45Z) - Improving predictions of Bayesian neural nets via local linearization [79.21517734364093]
We argue that the Gauss-Newton approximation should be understood as a local linearization of the underlying Bayesian neural network (BNN).
Because we use this linearized model for posterior inference, we should also predict using this modified model instead of the original one.
We refer to this modified predictive as "GLM predictive" and show that it effectively resolves common underfitting problems of the Laplace approximation.
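For reference, the local linearization behind the "GLM predictive" can be written as $f_{\mathrm{lin}}(x;\theta) = f(x;\theta_{\mathrm{MAP}}) + J_{\theta_{\mathrm{MAP}}}(x)\,(\theta - \theta_{\mathrm{MAP}})$, where $\theta_{\mathrm{MAP}}$ is the MAP estimate and $J_{\theta_{\mathrm{MAP}}}(x)$ the network Jacobian at that point (notation assumed here); predictions then average $f_{\mathrm{lin}}$ over the approximate posterior instead of using the original network $f$.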
arXiv Detail & Related papers (2020-08-19T12:35:55Z) - Improving the Backpropagation Algorithm with Consequentialism Weight
Updates over Mini-Batches [0.40611352512781856]
We show that it is possible to consider a multi-layer neural network as a stack of adaptive filters.
We introduce a better algorithm by predicting then emending the adverse consequences of the actions that take place in BP even before they happen.
Our experiments show the usefulness of our algorithm in the training of deep neural networks.
arXiv Detail & Related papers (2020-03-11T08:45:36Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using such methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
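A minimal sketch of the DNN-plus-FEM loop suggested by the last summary above: FEM evaluations supply training data, a DNN surrogate is fit to them, the surrogate proposes the next design, and the loop repeats. The actual self-directed sampling rule of that paper is not reproduced here, and all callables are hypothetical placeholders.

```python
# Hypothetical sketch: surrogate-assisted optimization where expensive FEM
# evaluations train a cheap DNN surrogate, and the surrogate guides which
# design to evaluate with FEM next.
import numpy as np

def surrogate_fem_loop(fem_evaluate, fit_surrogate, propose_design, init_designs, n_iters=20):
    designs = list(init_designs)
    objectives = [fem_evaluate(d) for d in designs]      # expensive FEM calls
    for _ in range(n_iters):
        surrogate = fit_surrogate(np.array(designs), np.array(objectives))  # cheap DNN model
        new_design = propose_design(surrogate)           # optimize on the surrogate
        designs.append(new_design)
        objectives.append(fem_evaluate(new_design))      # verify with FEM, enlarge training set
    best = int(np.argmin(objectives))
    return designs[best], objectives[best]
```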