Neural Gradient Learning and Optimization for Oriented Point Normal Estimation
- URL: http://arxiv.org/abs/2309.09211v1
- Date: Sun, 17 Sep 2023 08:35:11 GMT
- Title: Neural Gradient Learning and Optimization for Oriented Point Normal Estimation
- Authors: Qing Li, Huifang Feng, Kanle Shi, Yi Fang, Yu-Shen Liu, Zhizhong Han
- Abstract summary: We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability of local feature description.
- Score: 53.611206368815125
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose Neural Gradient Learning (NGL), a deep learning approach to learn
gradient vectors with consistent orientation from 3D point clouds for normal
estimation. It has excellent gradient approximation properties for the
underlying geometry of the data. We utilize a simple neural network to
parameterize the objective function to produce gradients at points using a
global implicit representation. However, the derived gradients usually drift
away from the ground-truth oriented normals due to the lack of local detail
descriptions. Therefore, we introduce Gradient Vector Optimization (GVO) to
learn an angular distance field based on local plane geometry to refine the
coarse gradient vectors. Finally, we formulate our method with a two-phase
pipeline of coarse estimation followed by refinement. Moreover, we integrate
two weighting functions, i.e., anisotropic kernel and inlier score, into the
optimization to improve robustness and detail preservation. Our
method efficiently conducts global gradient approximation while achieving
better accuracy and generalization ability of local feature description. This
leads to a state-of-the-art normal estimator that is robust to noise, outliers
and point density variations. Extensive evaluations show that our method
outperforms previous works in both unoriented and oriented normal estimation on
widely used benchmarks. The source code and pre-trained models are available at
https://github.com/LeoQLi/NGLO.
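The abstract describes a two-phase pipeline: a neural network parameterizes a global implicit function whose spatial gradient yields coarse, consistently oriented normals (NGL), and a second stage refines those coarse vectors from local geometry (GVO). Below is a minimal, hedged sketch of that coarse-to-fine structure in PyTorch; the class names, layer sizes, and the refinement regressor are illustrative assumptions, not the authors' implementation (the actual architecture, the angular distance field, and the anisotropic-kernel/inlier-score weighting live in the linked repository).

```python
# Minimal sketch of the coarse-to-fine idea described in the abstract
# (NOT the authors' implementation; see https://github.com/LeoQLi/NGLO).
# Names such as GradientMLP and RefineNet are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientMLP(nn.Module):
    """Phase 1 (NGL-like): a global implicit function f(x); its spatial
    gradient at each point serves as a coarse, consistently oriented normal."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def coarse_gradients(self, pts):              # pts: (N, 3)
        pts = pts.clone().requires_grad_(True)
        f = self.net(pts)                          # (N, 1) implicit values
        g = torch.autograd.grad(f.sum(), pts, create_graph=True)[0]
        return F.normalize(g, dim=-1)              # unit coarse gradients

class RefineNet(nn.Module):
    """Phase 2 (GVO-like): refine each coarse gradient from a local patch.
    Here we simply regress a corrective direction from the centered
    neighborhood; the paper instead learns an angular distance field."""
    def __init__(self, k=16, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * k + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, patches, coarse):            # patches: (N, k, 3)
        local = patches - patches.mean(dim=1, keepdim=True)
        x = torch.cat([local.flatten(1), coarse], dim=-1)
        return F.normalize(coarse + self.mlp(x), dim=-1)

# Toy usage on a random point cloud
pts = torch.randn(1024, 3)
coarse = GradientMLP().coarse_gradients(pts)
patches = pts[torch.randint(0, 1024, (1024, 16))]  # placeholder neighborhoods
normals = RefineNet()(patches, coarse)              # (1024, 3) oriented normals
```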
Related papers
- NeuralGF: Unsupervised Point Normal Estimation by Learning Neural Gradient Function [55.86697795177619]
Normal estimation for 3D point clouds is a fundamental task in 3D geometry processing.
We introduce a new paradigm for learning neural gradient functions, which encourages the neural network to fit the input point clouds.
Our excellent results on widely used benchmarks demonstrate that our method can learn more accurate normals for both unoriented and oriented normal estimation tasks.
arXiv Detail & Related papers (2023-11-01T09:25:29Z) - Angle based dynamic learning rate for gradient descent [2.5077510176642805]
We propose a novel yet simple approach to obtain an adaptive learning rate for gradient-based descent methods on classification tasks.
Instead of the traditional approach of selecting adaptive learning rates via the expectation of gradient-based terms, we use the angle between the current gradient and the new gradient.
We find that our method achieves the highest accuracy on most of the datasets.
arXiv Detail & Related papers (2023-04-20T16:55:56Z) - Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data [63.34506218832164]
- Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data [63.34506218832164]
In this work, we investigate the implicit bias of gradient flow and gradient descent in two-layer fully-connected neural networks with leaky ReLU activations.
For gradient flow, we leverage recent work on the implicit bias of homogeneous neural networks to show that, asymptotically, gradient flow produces a neural network with rank at most two.
For gradient descent, provided the random initialization variance is small enough, we show that a single step of gradient descent suffices to drastically reduce the rank of the network, and that the rank remains small throughout training.
arXiv Detail & Related papers (2022-10-13T15:09:54Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work focuses on zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design of coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both iteration complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - AdaDGS: An adaptive black-box optimization method with a nonlocal
- AdaDGS: An adaptive black-box optimization method with a nonlocal directional Gaussian smoothing gradient [3.1546318469750196]
A directional Gaussian smoothing (DGS) approach was recently proposed in (Zhang et al., 2020) and used to define a truly nonlocal gradient, referred to as the DGS gradient, for high-dimensional black-box optimization.
We present a simple yet efficient adaptive approach for optimization with the DGS gradient, which removes the need for hyperparameter fine-tuning.
arXiv Detail & Related papers (2020-11-03T21:20:25Z) - Reparametrizing gradient descent [0.0]
- Reparametrizing gradient descent [0.0]
We propose an optimization algorithm which we call norm-adapted gradient descent.
Our algorithm can also be compared to quasi-Newton methods, but we seek roots rather than stationary points.
arXiv Detail & Related papers (2020-10-09T20:22:29Z) - Channel-Directed Gradients for Optimization of Convolutional Neural
Networks [50.34913837546743]
We introduce optimization methods for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error.
We show that defining the gradients along the output channel direction leads to a performance boost, while other directions can be detrimental.
arXiv Detail & Related papers (2020-08-25T00:44:09Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper, we analyze a variant of the Optimistic Adagrad algorithm for nonconvex-nonconcave min-max problems.
Our experiments show that the advantage of adaptive over non-adaptive gradient algorithms in GAN training can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.