Towards Better Gradient Consistency for Neural Signed Distance Functions
via Level Set Alignment
- URL: http://arxiv.org/abs/2305.11601v1
- Date: Fri, 19 May 2023 11:28:05 GMT
- Title: Towards Better Gradient Consistency for Neural Signed Distance Functions
via Level Set Alignment
- Authors: Baorui Ma, Junsheng Zhou, Yu-Shen Liu, Zhizhong Han
- Abstract summary: We show that gradient consistency in the field, indicated by the parallelism of level sets, is the key factor affecting the inference accuracy.
We propose a level set alignment loss to evaluate the parallelism of level sets, which can be minimized to achieve better gradient consistency.
- Score: 50.892158511845466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural signed distance functions (SDFs) have shown remarkable capability in
representing geometry with details. However, without signed distance
supervision, it is still a challenge to infer SDFs from point clouds or
multi-view images using neural networks. In this paper, we claim that gradient
consistency in the field, indicated by the parallelism of level sets, is the
key factor affecting the inference accuracy. Hence, we propose a level set
alignment loss to evaluate the parallelism of level sets, which can be
minimized to achieve better gradient consistency. Our novelty lies in that we
can align all level sets to the zero level set by constraining gradients at
queries and their projections on the zero level set in an adaptive way. Our
insight is to propagate the zero level set to everywhere in the field through
consistent gradients to eliminate uncertainty in the field that is caused by
the discreteness of 3D point clouds or the lack of observations from multi-view
images. Our proposed loss is a general term which can be used upon different
methods to infer SDFs from 3D point clouds and multi-view images. Our numerical
and visual comparisons demonstrate that our loss can significantly improve the
accuracy of SDFs inferred from point clouds or multi-view images under various
benchmarks. Code and data are available at
https://github.com/mabaorui/TowardsBetterGradient .
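The repository above is the place to find the authors' implementation. Purely as an illustration of the idea described in the abstract, the following PyTorch sketch projects each query onto the zero level set using the predicted signed distance and its normalized gradient, and then penalizes the angle between the gradient at the query and the gradient at its projection. The handle sdf_net, the cosine-style penalty, and the uniform (non-adaptive) weighting are assumptions of this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sdf_and_gradient(sdf_net, x):
    """Predicted signed distance at x and its spatial gradient."""
    x = x if x.requires_grad else x.detach().requires_grad_(True)
    d = sdf_net(x)                                              # (N, 1)
    g = torch.autograd.grad(d.sum(), x, create_graph=True)[0]   # (N, 3)
    return d, g

def level_set_alignment_loss(sdf_net, queries):
    """Sketch of a level set alignment penalty: project each query onto the
    zero level set along its normalized gradient, then encourage the gradient
    at the query and at its projection to be parallel."""
    d_q, g_q = sdf_and_gradient(sdf_net, queries)
    n_q = F.normalize(g_q, dim=-1)

    # Projection onto the zero level set: p = q - f(q) * grad f(q) / |grad f(q)|
    proj = queries - d_q * n_q

    _, g_p = sdf_and_gradient(sdf_net, proj)
    n_p = F.normalize(g_p, dim=-1)

    # 1 - cos(angle) between the two gradients; uniform weighting for simplicity.
    return (1.0 - (n_q * n_p).sum(dim=-1)).mean()
```

In practice such a term would presumably be added, with a weighting hyperparameter, to whatever base objective is used to infer the SDF from point clouds or multi-view images; the adaptive constraint mentioned in the abstract is not reproduced here.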
Related papers
- Implicit Filtering for Learning Neural Signed Distance Functions from 3D Point Clouds [34.774577477968805]
We propose a novel non-linear implicit filter to smooth the implicit field while preserving geometry details.
Our novelty lies in that we can filter the surface (the zero level set) using neighboring input points and the gradients of the signed distance field.
By moving the input raw point clouds along the gradient, our proposed implicit filtering can be extended to non-zero level sets.
arXiv Detail & Related papers (2024-07-18T09:40:24Z)
- Fast Learning of Signed Distance Functions from Noisy Point Clouds via Noise to Noise Mapping [54.38209327518066]
Learning signed distance functions from point clouds is an important task in 3D computer vision.
We propose to learn SDFs via a noise to noise mapping, which does not require any clean point cloud or ground truth supervision.
Our novelty lies in the noise to noise mapping which can infer a highly accurate SDF of a single object or scene from its multiple or even single noisy observations.
arXiv Detail & Related papers (2024-07-04T03:35:02Z)
- Unsupervised Occupancy Learning from Sparse Point Cloud [8.732260277121547]
Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities.
In this paper, we propose a method to infer occupancy fields instead of Neural Signed Distance Functions.
We highlight its capacity to improve implicit shape inference with respect to baselines and the state-of-the-art using synthetic and real data.
arXiv Detail & Related papers (2024-04-03T14:05:39Z)
- PosDiffNet: Positional Neural Diffusion for Point Cloud Registration in a Large Field of View with Perturbations [27.45001809414096]
PosDiffNet is a model for point cloud registration in 3D computer vision.
We leverage a graph neural partial differential equation (PDE) based on Beltrami flow to obtain high-dimensional features.
We employ multi-level correspondences derived from high feature similarity scores to facilitate the alignment between point clouds.
We evaluate PosDiffNet on several 3D point cloud datasets, verifying that it achieves state-of-the-art (SOTA) performance for point cloud registration in large fields of view with perturbations.
arXiv Detail & Related papers (2024-01-06T08:58:15Z)
- Learning a More Continuous Zero Level Set in Unsigned Distance Fields through Level Set Projection [55.05706827963042]
Recent methods represent shapes with open surfaces using unsigned distance functions (UDFs).
We train neural networks to learn UDFs and reconstruct surfaces with the gradients around the zero level set of the UDF.
We propose to learn a more continuous zero level set in UDFs with level set projections.
arXiv Detail & Related papers (2023-08-22T13:45:35Z)
- Quantity-Aware Coarse-to-Fine Correspondence for Image-to-Point Cloud Registration [4.954184310509112]
Image-to-point cloud registration aims to determine the relative camera pose between an RGB image and a reference point cloud.
Matching individual points with pixels can be inherently ambiguous due to modality gaps.
We propose a framework to capture quantity-aware correspondences between local point sets and pixel patches.
arXiv Detail & Related papers (2023-07-14T03:55:54Z)
- Ponder: Point Cloud Pre-training via Neural Rendering [93.34522605321514]
We propose a novel approach to self-supervised learning of point cloud representations via differentiable neural rendering.
The learned point-cloud representation can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image rendering.
arXiv Detail & Related papers (2022-12-31T08:58:39Z)
- On Robust Cross-View Consistency in Self-Supervised Monocular Depth Estimation [56.97699793236174]
We study two kinds of robust cross-view consistency in this paper.
We exploit the temporal coherence in both depth feature space and 3D voxel space for self-supervised monocular depth estimation.
Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques.
arXiv Detail & Related papers (2022-09-19T03:46:13Z)
- Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces [68.12457459590921]
Reconstructing continuous surfaces from 3D point clouds is a fundamental operation in 3D geometry processing.
We introduce Neural-Pull, a new approach that is simple and leads to high-quality SDFs (a minimal sketch of the pulling operation appears after this list).
arXiv Detail & Related papers (2020-11-26T23:18:10Z)
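For context on the last entry above, here is a minimal, hypothetical sketch of a Neural-Pull-style objective: a query is pulled onto the surface by moving it against the gradient by the predicted signed distance, and the pulled point is compared with a point on the input cloud. The handle sdf_net and the brute-force nearest-neighbor target are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def pull_to_surface(sdf_net, queries):
    """Pull queries onto the surface: q' = q - f(q) * grad f(q) / |grad f(q)|."""
    q = queries.detach().requires_grad_(True)
    d = sdf_net(q)                                              # (N, 1)
    g = torch.autograd.grad(d.sum(), q, create_graph=True)[0]   # (N, 3)
    return q - d * F.normalize(g, dim=-1)                       # (N, 3)

def neural_pull_style_loss(sdf_net, queries, cloud):
    """Pulled queries should land on the input point cloud; here each query is
    matched to its nearest input point (brute force, for illustration only)."""
    pulled = pull_to_surface(sdf_net, queries)                   # (N, 3)
    nearest = cloud[torch.cdist(queries, cloud).argmin(dim=1)]   # (N, 3)
    return ((pulled - nearest) ** 2).sum(dim=-1).mean()
```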