Inferring Neural Signed Distance Functions by Overfitting on Single Noisy Point Clouds through Finetuning Data-Driven based Priors
- URL: http://arxiv.org/abs/2410.19680v1
- Date: Fri, 25 Oct 2024 16:48:44 GMT
- Title: Inferring Neural Signed Distance Functions by Overfitting on Single Noisy Point Clouds through Finetuning Data-Driven based Priors
- Authors: Chao Chen, Yu-Shen Liu, Zhizhong Han
- Abstract summary: We propose a method that combines the advantages of data-driven and overfitting-based methods for better generalization, faster inference, and higher accuracy in learning neural SDFs.
We introduce a novel statistical reasoning algorithm in local regions that finetunes data-driven priors without signed distance supervision, clean point clouds, or point normals.
- Score: 53.6277160912059
- Abstract: It is important to estimate an accurate signed distance function (SDF) from a point cloud in many computer vision applications. The latest methods learn neural SDFs using either a data-driven or an overfitting-based strategy. However, these two kinds of methods suffer from either poor generalization or slow convergence, which limits their capability in challenging scenarios such as highly noisy point clouds. To resolve this issue, we propose a method that combines the advantages of both data-driven and overfitting-based methods for better generalization, faster inference, and higher accuracy in learning neural SDFs. We introduce a novel statistical reasoning algorithm in local regions that finetunes data-driven priors without signed distance supervision, clean point clouds, or point normals. This lets our method start from a good initialization and converge to a minimum much faster. Our numerical and visual comparisons with state-of-the-art methods show our superiority in surface reconstruction and point cloud denoising on widely used shape and scene benchmarks. The code is available at https://github.com/chenchao15/LocalN2NM.
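The overall recipe described in the abstract (initialize an SDF network from a data-driven prior, then overfit it to a single noisy point cloud without signed-distance labels, clean points, or normals) can be illustrated with a minimal sketch. The snippet below assumes a simple PyTorch MLP and a Neural-Pull-style objective that pulls sampled query points onto the predicted zero level set along the SDF gradient; the network, loss, and checkpoint name are illustrative placeholders, not the authors' local statistical reasoning algorithm (see the linked repository for that).

```python
# Minimal sketch (assumption): fine-tune a pretrained SDF MLP on one noisy
# point cloud by pulling sampled query points onto the predicted zero level
# set along the SDF gradient. This Neural-Pull-style surrogate is NOT the
# paper's local statistical reasoning algorithm; see the official repo.
import torch
import torch.nn as nn


class SDFNet(nn.Module):
    """Plain coordinate MLP mapping 3D points to signed distances."""

    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)


def finetune(sdf, noisy_pts, steps=2000, batch=4096, sigma=0.05, lr=1e-4):
    """Overfit a (possibly pretrained) SDF network to a single noisy cloud."""
    opt = torch.optim.Adam(sdf.parameters(), lr=lr)
    for _ in range(steps):
        # Sample query points by perturbing randomly chosen noisy points.
        anchors = noisy_pts[torch.randint(0, noisy_pts.shape[0], (batch,))]
        queries = (anchors + sigma * torch.randn_like(anchors)).requires_grad_(True)

        d = sdf(queries)  # predicted signed distances, shape (batch, 1)
        grad = torch.autograd.grad(d.sum(), queries, create_graph=True)[0]
        normal = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)

        # Pull each query onto the predicted surface and ask the pulled
        # point to land near the observed (noisy) surface samples.
        pulled = queries - d * normal
        loss = ((pulled - anchors) ** 2).sum(dim=-1).mean()

        opt.zero_grad()
        loss.backward()
        opt.step()
    return sdf


# Usage (hypothetical file names): load prior weights, then overfit.
# sdf = SDFNet()
# sdf.load_state_dict(torch.load("data_driven_prior.pt"))
# sdf = finetune(sdf, torch.from_numpy(noisy_cloud).float())
```

In this sketch the pulled queries are matched directly to the noisy observations, which is only a rough stand-in for the paper's statistical reasoning over local regions of the noisy cloud.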
Related papers
- MultiPull: Detailing Signed Distance Functions by Pulling Multi-Level Queries at Multi-Step [48.812388649469106]
We propose a novel method to learn multi-scale implicit fields from raw point clouds by optimizing accurate SDFs from coarse to fine.
Our experiments on widely used object and scene benchmarks demonstrate that our method outperforms the state-of-the-art methods in surface reconstruction.
arXiv Detail & Related papers (2024-11-02T10:50:22Z) - A Strong Baseline for Point Cloud Registration via Direct Superpoints Matching [7.308509114539376]
We propose a simple and effective baseline to find correspondences of superpoints in a global matching manner.
Our simple yet effective baseline shows comparable or even better results than state-of-the-art methods on three datasets.
arXiv Detail & Related papers (2023-07-03T21:33:40Z) - Unsupervised Inference of Signed Distance Functions from Single Sparse Point Clouds without Learning Priors [54.966603013209685]
It is vital to infer signed distance functions (SDFs) from 3D point clouds.
We present a neural network to directly infer SDFs from single sparse point clouds.
arXiv Detail & Related papers (2023-03-25T15:56:50Z) - CAP-UDF: Learning Unsigned Distance Functions Progressively from Raw Point Clouds with Consistency-Aware Field Optimization [54.69408516025872]
CAP-UDF is a novel method to learn consistency-aware UDF from raw point clouds.
We train a neural network to gradually infer the relationship between queries and the approximated surface.
We also introduce a polygonization algorithm to extract surfaces using the gradients of the learned UDF (a minimal sketch of this gradient-based projection appears after this list).
arXiv Detail & Related papers (2022-10-06T08:51:08Z) - Stratified Transformer for 3D Point Cloud Segmentation [89.9698499437732]
Stratified Transformer is able to capture long-range contexts and demonstrates strong generalization ability and high performance.
To combat the challenges posed by irregular point arrangements, we propose first-layer point embedding to aggregate local information.
Experiments demonstrate the effectiveness and superiority of our method on S3DIS, ScanNetv2 and ShapeNetPart datasets.
arXiv Detail & Related papers (2022-03-28T05:35:16Z) - Revisiting Point Cloud Simplification: A Learnable Feature Preserving Approach [57.67932970472768]
Mesh and Point Cloud simplification methods aim to reduce the complexity of 3D models while retaining visual quality and relevant salient features.
We propose a fast point cloud simplification method by learning to sample salient points.
The proposed method relies on a graph neural network architecture trained to select an arbitrary, user-defined number of points from the input space and to re-arrange their positions so as to minimize the visual perception error.
arXiv Detail & Related papers (2021-09-30T10:23:55Z) - Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
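Several of the related papers above (notably MultiPull and CAP-UDF, as referenced in their entries) move query points toward the surface along the gradient of a learned signed or unsigned distance field. The sketch below illustrates that projection step; `field` is a placeholder for any differentiable network mapping 3D points to distances, and the step count is an arbitrary choice, not a value taken from those papers.

```python
# Minimal sketch (assumption): project query points onto the zero level set
# of a learned distance field by stepping along its gradient, the operation
# that pulling-based methods such as MultiPull and CAP-UDF build on.
# `field` is any differentiable callable mapping (N, 3) points to (N, 1)
# distances (signed or unsigned); n_steps is an arbitrary choice here.
import torch


def project_to_surface(field, queries, n_steps=5):
    """Iteratively move queries toward the surface along -d * grad / |grad|."""
    pts = queries.clone()
    for _ in range(n_steps):
        pts = pts.detach().requires_grad_(True)
        d = field(pts)
        grad = torch.autograd.grad(d.sum(), pts)[0]
        direction = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
        # For an SDF this moves points on both sides onto the surface;
        # for a UDF the same update steps toward the nearest surface point.
        pts = (pts - d * direction).detach()
    return pts
```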
This list is automatically generated from the titles and abstracts of the papers in this site.