Error-Correcting Neural Networks for Two-Dimensional Curvature
Computation in the Level-Set Method
- URL: http://arxiv.org/abs/2201.12342v1
- Date: Sat, 22 Jan 2022 05:14:40 GMT
- Title: Error-Correcting Neural Networks for Two-Dimensional Curvature
Computation in the Level-Set Method
- Authors: Luis Ángel Larios-Cárdenas and Frédéric Gibou
- Abstract summary: We present an error-neural-modeling-based strategy for approximating two-dimensional curvature in the level-set method.
Our main contribution is a redesigned hybrid solver that relies on numerical schemes to enable machine-learning operations on demand.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an error-neural-modeling-based strategy for approximating
two-dimensional curvature in the level-set method. Our main contribution is a
redesigned hybrid solver (Larios-Cárdenas and Gibou (2021) [1]) that relies
on numerical schemes to enable machine-learning operations on demand. In
particular, our routine features double predicting to harness curvature
symmetry invariance in favor of precision and stability. As in [1], the core of
this solver is a multilayer perceptron trained on circular- and
sinusoidal-interface samples. Its role is to quantify the error in numerical
curvature approximations and emit corrected estimates for select grid vertices
along the free boundary. These corrections arise in response to preprocessed
context level-set, curvature, and gradient data. To promote neural capacity, we
have adopted sample negative-curvature normalization, reorientation, and
reflection-based augmentation. In the same manner, our system incorporates
dimensionality reduction, well-balancedness, and regularization to minimize
outlying effects. Our training approach is likewise scalable across mesh sizes.
For this purpose, we have introduced dimensionless parametrization and
probabilistic subsampling during data production. Together, all these elements
have improved the accuracy and efficiency of curvature calculations around
under-resolved regions. In most experiments, our strategy has outperformed the
numerical baseline at twice the number of redistancing steps while requiring
only a fraction of the cost.
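The on-demand correction with double predicting can be sketched compactly. In the Python sketch below, `numerical_curvature`, `error_model`, and `reflect` are hypothetical stand-ins for the paper's finite-difference scheme, trained multilayer perceptron, and sample reflection; the sign and averaging logic illustrate negative-curvature normalization and symmetry-invariant inference under those assumptions.

```python
import numpy as np

def hybrid_curvature(stencil, numerical_curvature, error_model, reflect):
    """Hedged sketch of the hybrid inference step for one interface node.

    stencil             -- preprocessed level-set, gradient, and curvature
                           data around the node (contents assumed).
    numerical_curvature -- conventional finite-difference estimator.
    error_model         -- MLP predicting the error in the numerical
                           curvature for a normalized sample.
    reflect             -- symmetry transformation on the sample.
    """
    kappa = numerical_curvature(stencil)

    # Negative-curvature normalization: negating the level-set data flips
    # the curvature's sign, so every query uses one canonical convention.
    flip = kappa > 0.0
    if flip:
        stencil, kappa = -np.asarray(stencil), -kappa

    # Double predicting: evaluate the network on the sample and on its
    # reflection, then average the two corrections to exploit curvature
    # symmetry invariance for precision and stability.
    correction = 0.5 * (error_model(stencil) + error_model(reflect(stencil)))
    kappa += correction

    return -kappa if flip else kappa
```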
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix, as sketched below.
arXiv Detail & Related papers (2024-09-24T04:39:47Z)
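As a rough, self-contained illustration of the eigen step alone (not the paper's full stochastic trust-region machinery), the snippet below inspects the reduced Hessian for negative curvature and, when it finds some, returns a step along the corresponding eigenvector; the step length and tolerance are placeholders.

```python
import numpy as np

def eigen_step(H_reduced, radius, tol=1e-8):
    """Return a negative-curvature step within the trust region, or None.

    H_reduced -- reduced (projected) Hessian at the current iterate.
    radius    -- trust-region radius bounding the step length.
    """
    eigvals, eigvecs = np.linalg.eigh(H_reduced)   # ascending eigenvalues
    if eigvals[0] < -tol:
        # Step along the most negative curvature direction; a full method
        # would also resolve the sign against the gradient.
        return radius * eigvecs[:, 0]
    return None  # no exploitable negative curvature: second-order point
```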
- Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data [63.34506218832164]
In this work, we investigate the implicit bias of gradient flow and gradient descent in two-layer fully-connected neural networks with leaky ReLU activations.
For gradient flow, we leverage recent work on the implicit bias for homogeneous neural networks to show that, asymptotically, gradient flow produces a neural network with rank at most two.
For gradient descent, provided the random initialization variance is small enough, we show that a single step of gradient descent suffices to drastically reduce the rank of the network, and that the rank remains small throughout training.
arXiv Detail & Related papers (2022-10-13T15:09:54Z)
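A hedged way to observe the low-rank bias described in the entry above (an illustration, not the paper's proof technique) is to track the numerical rank of a layer's weight matrix over training:

```python
import numpy as np

def numerical_rank(W, rel_tol=1e-6):
    """Count singular values above a relative threshold; logging this for
    the first-layer weights during training exposes a collapse to low rank."""
    s = np.linalg.svd(np.asarray(W), compute_uv=False)  # descending order
    return int(np.sum(s > rel_tol * s[0]))
```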
- Machine learning algorithms for three-dimensional mean-curvature computation in the level-set method [0.0]
We propose a data-driven mean-curvature solver for the level-set method.
Our proposed system can yield more accurate mean-curvature estimations than modern particle-based interface-reconstruction methods.
arXiv Detail & Related papers (2022-08-18T20:19:22Z)
- Level-Set Curvature Neural Networks: A Hybrid Approach [0.0]
We present a hybrid strategy based on deep learning to compute mean curvature in the level-set method.
The proposed inference system combines a dictionary of improved regression models with standard numerical schemes to estimate curvature more accurately.
Our findings confirm that machine learning is a promising venue for devising viable solutions to the level-set method's numerical shortcomings.
arXiv Detail & Related papers (2021-04-07T06:51:52Z)
- Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under the sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
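The flavor of the coupled update in the Cogradient Descent entry above can be conveyed with a toy bilinear least-squares problem. The sketch below runs plain synchronous gradient descent on f(x, y) = 0.5 * ||M - x y^T||_F^2 with a soft-threshold enforcing sparsity on x; CoGD's actual coupling mechanism is more elaborate, so treat this only as a baseline illustration.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of the l1 norm, used for the sparsity constraint."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def bilinear_descent(M, rank=1, lr=1e-2, lam=1e-3, iters=500, seed=0):
    """Synchronous gradient descent on f(x, y) = 0.5 * ||M - x @ y.T||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    x = rng.standard_normal((m, rank))
    y = rng.standard_normal((n, rank))
    for _ in range(iters):
        R = x @ y.T - M              # shared residual couples x and y
        gx, gy = R @ y, R.T @ x      # both gradients see the same residual
        x = soft_threshold(x - lr * gx, lr * lam)  # sparse variable
        y = y - lr * gy              # both blocks updated synchronously
    return x, y
```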
- Learning Rates as a Function of Batch Size: A Random Matrix Theory Approach to Neural Network Training [2.9649783577150837]
We study the effect of mini-batching on the loss landscape of deep neural networks using spiked, field-dependent random matrix theory.
We derive analytical expressions for the maximal learning rates of stochastic gradient descent and adaptive training regimens for smooth, non-convex deep neural networks.
We validate our claims on VGG/ResNet architectures and the ImageNet dataset.
arXiv Detail & Related papers (2020-06-16T11:55:45Z)
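As a loose, illustrative take on the batch-size dependence in the entry above (the functional form here is a common saturating heuristic, not the paper's derived expression), one can cap the learning rate as follows:

```python
def max_stable_lr(batch_size, lr_full_batch=0.4, noise_scale=256):
    """Toy rule: the admissible learning rate grows roughly linearly with
    batch size while gradient noise dominates, then saturates near the
    full-batch limit. All constants are placeholders for illustration."""
    return lr_full_batch * batch_size / (batch_size + noise_scale)

for b in (32, 256, 2048):
    print(b, round(max_stable_lr(b), 4))
```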
- Implicit Bias of Gradient Descent for Mean Squared Error Regression with Two-Layer Wide Neural Networks [1.3706331473063877]
We show that the solution of training a width-$n$ shallow ReLU network is within $n^{-1/2}$ of the function which fits the training data.
We also show that the training trajectories are captured by trajectories of smoothing splines with decreasing regularization strength, as in the sketch below.
arXiv Detail & Related papers (2020-06-12T17:46:40Z)
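The limiting object in that description, a smoothing spline with shrinking regularization, is easy to examine directly; a minimal sketch using SciPy, with toy data and illustrative smoothing levels:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.sort(rng.random(50))
y = np.sin(4.0 * x) + 0.1 * rng.standard_normal(50)

# Decreasing the smoothing factor s relaxes regularization: the spline
# moves from a heavily smoothed fit toward an interpolant, mirroring the
# training trajectory described above.
for s in (1.0, 0.1, 0.0):
    spline = UnivariateSpline(x, y, s=s)
    print(f"s={s}: train MSE={np.mean((spline(x) - y) ** 2):.5f}")
```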
- Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z)
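For context, the classic baseline such estimators improve upon can be written in a few lines: sample a Bernoulli activation on the forward pass and substitute a smooth surrogate derivative on the backward pass. This is the standard straight-through-style trick, not the paper's path sample-analytic estimator.

```python
import numpy as np

def binary_forward(x, rng):
    """Stochastic binary activation: z ~ Bernoulli(sigmoid(x))."""
    p = 1.0 / (1.0 + np.exp(-x))
    z = (rng.random(np.shape(x)) < p).astype(float)
    return z, p

def surrogate_backward(upstream_grad, p):
    """Backward surrogate: differentiate E[z] = sigmoid(x) instead of the
    non-differentiable sample, giving grad * p * (1 - p)."""
    return upstream_grad * p * (1.0 - p)

rng = np.random.default_rng(0)
z, p = binary_forward(np.array([-1.0, 0.0, 2.0]), rng)
print(z, surrogate_backward(np.ones(3), p))
```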
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can face the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
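The underlying variance-reduction identity is ordinary Monte Carlo control variates: average f(x) - beta * (g(x) - E[g]) where E[g] is known. In neural control variates the role of g is played by a network whose integral is known by construction; the sketch below uses a trivial hand-picked g instead.

```python
import numpy as np

def cv_estimate(f, g, g_mean, samples):
    """Control-variate Monte Carlo estimate of E[f(X)]."""
    fx, gx = f(samples), g(samples)
    cov = np.cov(fx, gx)
    beta = cov[0, 1] / cov[1, 1]      # variance-minimizing coefficient
    return np.mean(fx - beta * (gx - g_mean))

# Example: E[exp(U)] for U ~ Uniform(0, 1), control g(u) = u with E[g] = 0.5.
rng = np.random.default_rng(0)
u = rng.random(10_000)
print(cv_estimate(np.exp, lambda v: v, 0.5, u))   # close to e - 1 ~ 1.71828
```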
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
- A deep learning approach for the computation of curvature in the level-set method [0.0]
We propose a strategy to estimate the mean curvature of two-dimensional implicit interfaces in the level-set method.
Our approach is based on fitting feed-forward neural networks to synthetic data sets constructed from circular interfaces immersed in uniform grids of various resolutions; the sketch after this entry shows the idea behind such a data set.
arXiv Detail & Related papers (2020-02-04T00:49:47Z)
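The data-generation idea lends itself to a compact sketch: points near a circle of radius r carry the exact curvature label 1/r (up to sign convention), so circular interfaces provide free ground truth. The feature layout below is a simplification of the paper's stencil-based inputs.

```python
import numpy as np

def circle_samples(radius, h, n=64, seed=1):
    """Level-set training pairs from one circle on a grid of spacing h."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    points = radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    # Jitter by up to half a cell so samples do not sit exactly on the
    # zero level set.
    points += rng.uniform(-0.5 * h, 0.5 * h, size=points.shape)
    phi = np.linalg.norm(points, axis=1) - radius   # signed distance values
    kappa = np.full(n, 1.0 / radius)                # exact curvature label
    return points, phi, kappa

pts, phi, kappa = circle_samples(radius=0.25, h=1.0 / 64.0)
print(phi[:3], kappa[0])
```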