NeuralGF: Unsupervised Point Normal Estimation by Learning Neural
Gradient Function
- URL: http://arxiv.org/abs/2311.00389v1
- Date: Wed, 1 Nov 2023 09:25:29 GMT
- Title: NeuralGF: Unsupervised Point Normal Estimation by Learning Neural
Gradient Function
- Authors: Qing Li, Huifang Feng, Kanle Shi, Yue Gao, Yi Fang, Yu-Shen Liu,
Zhizhong Han
- Abstract summary: Normal estimation for 3D point clouds is a fundamental task in 3D geometry processing.
We introduce a new paradigm for learning neural gradient functions, which encourages the neural network to fit the input point clouds.
Our excellent results on widely used benchmarks demonstrate that our method can learn more accurate normals for both unoriented and oriented normal estimation tasks.
- Score: 55.86697795177619
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Normal estimation for 3D point clouds is a fundamental task in 3D geometry
processing. The state-of-the-art methods rely on priors of fitting local
surfaces learned from normal supervision. However, normal supervision in
benchmarks comes from synthetic shapes and is usually not available from real
scans, thereby limiting the learned priors of these methods. In addition,
normal orientation consistency across shapes remains difficult to achieve
without a separate post-processing procedure. To resolve these issues, we
propose a novel method for estimating oriented normals directly from point
clouds without using ground truth normals as supervision. We achieve this by
introducing a new paradigm for learning neural gradient functions, which
encourages the neural network to fit the input point clouds and yield unit-norm
gradients at the points. Specifically, we introduce loss functions to
guide query points to iteratively reach the moving targets and aggregate
onto the approximated surface, thereby learning a global surface representation
of the data. Meanwhile, we incorporate gradients into the surface approximation
to measure the minimum signed deviation of queries, resulting in a consistent
gradient field associated with the surface. These techniques lead to our deep
unsupervised oriented normal estimator that is robust to noise, outliers and
density variations. Our excellent results on widely used benchmarks demonstrate
that our method can learn more accurate normals for both unoriented and
oriented normal estimation tasks than the latest methods. The source code and
pre-trained model are publicly available at https://github.com/LeoQLi/NeuralGF.
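The mechanism the abstract describes — query points are iteratively pulled onto the approximated surface along the normalized gradient, and the unit-norm gradients at the resulting surface points serve as oriented normals — can be illustrated with an analytic signed distance field. The sketch below is not the paper's implementation: the learned neural gradient function is replaced by the exact SDF of a unit sphere, which makes the pull step and the unit-gradient property easy to verify.

```python
import math
import random

def sdf_sphere(p, radius=1.0):
    """Signed distance of a 3D point to a sphere centered at the origin."""
    return math.sqrt(sum(c * c for c in p)) - radius

def sdf_gradient(p):
    """Gradient of the sphere SDF: the outward radial unit direction."""
    n = math.sqrt(sum(c * c for c in p))
    return [c / n for c in p]

def pull_to_surface(p, steps=5):
    """Iteratively move a query along the normalized gradient by the signed
    distance, so it converges onto the zero level set (the surface)."""
    for _ in range(steps):
        g = sdf_gradient(p)          # unit-norm for an exact SDF
        d = sdf_sphere(p)
        p = [c - d * gc for c, gc in zip(p, g)]
    return p

random.seed(0)
queries = [[random.gauss(0, 2) for _ in range(3)] for _ in range(100)]
surface = [pull_to_surface(q) for q in queries]
normals = [sdf_gradient(p) for p in surface]  # oriented normals = unit gradients
```

In the actual method the SDF and its gradient come from a network fit to the raw point cloud without normal supervision, and the loss terms keep the gradient field consistent across the shape; this toy version only shows the geometry of the pull operation.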
Related papers
- Neural Gradient Learning and Optimization for Oriented Point Normal
Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability of local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z)
- Learning Signed Hyper Surfaces for Oriented Point Cloud Normal Estimation [53.19926259132379]
We propose a novel method called SHS-Net for oriented normal estimation of point clouds by learning signed hyper surfaces.
The signed hyper surfaces are implicitly learned in a high-dimensional feature space where the local and global information is aggregated.
An attention-weighted normal prediction module is proposed as a decoder, which takes the local and global latent codes as input to predict oriented normals.
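The attention-weighted prediction step described above can be caricatured as a softmax-weighted blend of candidate normals followed by renormalization. The function and values below are hypothetical placeholders for illustration, not SHS-Net's actual decoder, which operates on learned local and global latent codes.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def attention_weighted_normal(candidate_normals, scores):
    """Blend candidate normals with attention weights derived from scores,
    then renormalize the result to unit length."""
    w = softmax(scores)
    n = [sum(wi * v[k] for wi, v in zip(w, candidate_normals)) for k in range(3)]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

# Hypothetical example: three noisy candidates, the second scored highest.
cands = [[0.9, 0.1, 0.0], [0.0, 0.0, 1.0], [0.1, 0.9, 0.0]]
scores = [0.1, 3.0, 0.1]
n = attention_weighted_normal(cands, scores)
```

The blended normal here is dominated by the highest-scoring candidate; in the real network the scores themselves are learned from the aggregated features rather than supplied by hand.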
arXiv Detail & Related papers (2023-05-10T03:40:25Z)
- Rethinking the Approximation Error in 3D Surface Fitting for Point Cloud Normal Estimation [39.79759035338819]
We present two basic design principles to bridge the gap between estimated and precise surface normals.
We implement these two principles using deep neural networks, and integrate them with the state-of-the-art (SOTA) normal estimation methods in a plug-and-play manner.
arXiv Detail & Related papers (2023-03-30T05:59:43Z)
- NeAF: Learning Neural Angle Fields for Point Normal Estimation [46.58627482563857]
We propose an implicit function to learn an angle field around the normal of each point in the spherical coordinate system.
Instead of directly predicting the normal of an input point, we predict the angle offset between the ground truth normal and a randomly sampled query normal.
arXiv Detail & Related papers (2022-11-30T10:11:47Z)
- HSurf-Net: Normal Estimation for 3D Point Clouds by Learning Hyper Surfaces [54.77683371400133]
We propose a novel normal estimation method called HSurf-Net, which can accurately predict normals from point clouds with noise and density variations.
Experimental results show that our HSurf-Net achieves the state-of-the-art performance on the synthetic shape dataset.
arXiv Detail & Related papers (2022-10-13T16:39:53Z)
- Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data [63.34506218832164]
In this work, we investigate the implicit bias of gradient flow and gradient descent in two-layer fully-connected neural networks with leaky ReLU activations.
For gradient flow, we leverage recent work on the implicit bias for homogeneous neural networks to show that, asymptotically, gradient flow produces a neural network with rank at most two.
For gradient descent, provided the random initialization variance is small enough, we show that a single step of gradient descent suffices to drastically reduce the rank of the network, and that the rank remains small throughout training.
arXiv Detail & Related papers (2022-10-13T15:09:54Z)
- Deep Point Cloud Normal Estimation via Triplet Learning [12.271669779096076]
We propose a novel normal estimation method for point clouds.
It consists of two phases: (a) feature encoding which learns representations of local patches, and (b) normal estimation that takes the learned representation as input and regresses the normal vector.
Our method preserves sharp features and achieves better normal estimation results on CAD-like shapes.
arXiv Detail & Related papers (2021-10-20T11:16:00Z)
- Virtual Normal: Enforcing Geometric Constraints for Accurate and Robust Depth Prediction [87.08227378010874]
We show the importance of the high-order 3D geometric constraints for depth prediction.
By designing a loss term that enforces a simple geometric constraint, we significantly improve the accuracy and robustness of monocular depth estimation.
We show state-of-the-art results of learning metric depth on NYU Depth-V2 and KITTI.
arXiv Detail & Related papers (2021-03-07T00:08:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.