DiGS : Divergence guided shape implicit neural representation for
unoriented point clouds
- URL: http://arxiv.org/abs/2106.10811v3
- Date: Wed, 17 May 2023 07:45:15 GMT
- Title: DiGS : Divergence guided shape implicit neural representation for
unoriented point clouds
- Authors: Yizhak Ben-Shabat, Chamin Hewa Koneputugodage, Stephen Gould
- Abstract summary: Shape implicit neural representations (INRs) have recently been shown to be effective in shape analysis and reconstruction tasks.
We propose a divergence guided shape representation learning approach that does not require normal vectors as input.
- Score: 36.60407995156801
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shape implicit neural representations (INRs) have recently been
shown to be effective in shape analysis and reconstruction tasks. Existing
INRs require point coordinates to learn the implicit level sets of the shape.
When a normal vector is available for each point, a higher-fidelity
representation can be learned; however, normal vectors are often not provided
as raw data. Furthermore, the method's initialization has been shown to play a
crucial role in surface reconstruction. In this paper, we propose a divergence
guided shape representation learning approach that does not require normal
vectors as input. We show that incorporating a soft constraint on the
divergence of the distance function favours smooth solutions that reliably
orient gradients to match the unknown normal at each point, in some cases even
better than approaches that use ground-truth normal vectors directly.
Additionally, we introduce a novel geometric initialization method for
sinusoidal INRs that further improves convergence to the desired solution. We
evaluate the effectiveness of our approach on the tasks of surface
reconstruction and shape space learning, and show state-of-the-art (SOTA)
performance compared to other unoriented methods. Code and model parameters
are available on our project page: https://chumbyte.github.io/DiGS-Site/.
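
To make the abstract's "soft constraint on the divergence of the distance
function" concrete, below is a minimal PyTorch sketch of how such a term can be
computed alongside the standard eikonal term for a coordinate MLP. This is an
illustrative assumption, not the authors' released implementation: the names
`digs_style_losses` and `sdf_net`, the absolute-value penalty, and the absence
of any loss weighting or scheduling are choices made here for brevity.

```python
import torch
import torch.nn as nn


def digs_style_losses(sdf_net: nn.Module, points: torch.Tensor):
    """Eikonal term plus a soft divergence (Laplacian) penalty (sketch).

    `sdf_net` is assumed to map (N, 3) coordinates to (N, 1) implicit values,
    processing each point independently (a coordinate MLP).
    """
    points = points.clone().requires_grad_(True)
    f = sdf_net(points)

    # First derivatives: gradient of the implicit function at each point.
    grad = torch.autograd.grad(f.sum(), points, create_graph=True)[0]  # (N, 3)

    # Eikonal term: a valid distance function has unit gradient norm.
    eikonal = ((grad.norm(dim=-1) - 1.0) ** 2).mean()

    # Divergence of the gradient field = trace of the Hessian (the Laplacian).
    # For a pointwise MLP, each diagonal entry can be taken per coordinate.
    laplacian = torch.zeros(points.shape[0], device=points.device)
    for i in range(points.shape[-1]):
        d2 = torch.autograd.grad(grad[:, i].sum(), points, create_graph=True)[0]
        laplacian = laplacian + d2[:, i]
    divergence_penalty = laplacian.abs().mean()

    return eikonal, divergence_penalty
```

A full training loop would combine these two terms with a data term that drives
the implicit function to zero on the input points; how the divergence penalty
is weighted or annealed over training is left open in this sketch.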
Related papers
- NeuralGF: Unsupervised Point Normal Estimation by Learning Neural
Gradient Function [55.86697795177619]
Normal estimation for 3D point clouds is a fundamental task in 3D geometry processing.
We introduce a new paradigm for learning neural gradient functions, which encourages the neural network to fit the input point clouds.
Our excellent results on widely used benchmarks demonstrate that our method can learn more accurate normals for both unoriented and oriented normal estimation tasks.
arXiv Detail & Related papers (2023-11-01T09:25:29Z)
- Neural Gradient Learning and Optimization for Oriented Point Normal Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently performs global gradient approximation while achieving better accuracy and generalization ability for local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z)
- Neural Vector Fields: Implicit Representation by Explicit Learning [63.337294707047036]
We propose a novel 3D representation method, Neural Vector Fields (NVF)
It adopts not only an explicit learning process that manipulates meshes directly, but also the implicit representation of unsigned distance functions (UDFs).
Our method first predicts displacement queries towards the surface and models shapes as text reconstructions.
arXiv Detail & Related papers (2023-03-08T02:36:09Z)
- GeoUDF: Surface Reconstruction from 3D Point Clouds via Geometry-guided Distance Representation [73.77505964222632]
We present a learning-based method, namely GeoUDF, to tackle the problem of reconstructing a discrete surface from a sparse point cloud.
To be specific, we propose a geometry-guided learning method for UDF and its gradient estimation.
To extract triangle meshes from the predicted UDF, we propose a customized edge-based marching cube module.
arXiv Detail & Related papers (2022-11-30T06:02:01Z)
- Neural Fields as Learnable Kernels for 3D Reconstruction [101.54431372685018]
We present a novel method for reconstructing implicit 3D shapes based on a learned kernel ridge regression.
Our technique achieves state-of-the-art results when reconstructing 3D objects and large scenes from sparse oriented points.
arXiv Detail & Related papers (2021-11-26T18:59:04Z)
- Deep Point Cloud Normal Estimation via Triplet Learning [12.271669779096076]
We propose a novel normal estimation method for point clouds.
It consists of two phases: (a) feature encoding which learns representations of local patches, and (b) normal estimation that takes the learned representation as input and regresses the normal vector.
Our method preserves sharp features and achieves better normal estimation results on CAD-like shapes.
arXiv Detail & Related papers (2021-10-20T11:16:00Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- Sketchy Empirical Natural Gradient Methods for Deep Learning [20.517823521066234]
We develop an efficient sketchy empirical natural gradient method (SENG) for large-scale deep learning problems.
A distributed version of SENG is also developed for extremely large-scale applications.
On ResNet-50 with ImageNet-1k, SENG achieves 75.9% top-1 test accuracy within 41 epochs.
arXiv Detail & Related papers (2020-06-10T16:17:09Z)
- Implicit Geometric Regularization for Learning Shapes [34.052738965233445]
We offer a new paradigm for computing high fidelity implicit neural representations directly from raw data.
We show that our method leads to state-of-the-art implicit neural representations with a higher level of detail and fidelity compared to previous methods.
arXiv Detail & Related papers (2020-02-24T07:36:32Z)
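
For context on the "Implicit Geometric Regularization for Learning Shapes"
entry above (and the eikonal-style constraints that recur throughout this
list), a schematic form of the eikonal-regularized fitting objective is given
below. The notation (f_theta, x_i, lambda, the sampling distribution D) is
generic and not copied from any of the papers:

```latex
\mathcal{L}(\theta) \;=\; \frac{1}{N}\sum_{i=1}^{N} \bigl| f_\theta(x_i) \bigr|
\;+\; \lambda \, \mathbb{E}_{x \sim \mathcal{D}}
\left[ \bigl( \lVert \nabla_x f_\theta(x) \rVert_2 - 1 \bigr)^{2} \right]
```

The first term drives the learned implicit function to vanish on the input
points, while the second (eikonal) term encourages unit-norm gradients so that
the function behaves like a signed distance function. The DiGS abstract above
describes adding a soft constraint on the divergence on top of objectives of
this kind.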
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.