Semi-signed neural fitting for surface reconstruction from unoriented
point clouds
- URL: http://arxiv.org/abs/2206.06715v1
- Date: Tue, 14 Jun 2022 09:40:17 GMT
- Title: Semi-signed neural fitting for surface reconstruction from unoriented
point clouds
- Authors: Runsong Zhu, Di Kang, Ka-Hei Hui, Yue Qian, Xuefei Zhe, Zhen Dong,
Linchao Bao, Chi-Wing Fu
- Abstract summary: We propose SSN-Fitting to reconstruct a better signed distance field.
SSN-Fitting consists of a semi-signed supervision and a loss-based region sampling strategy.
We conduct experiments to demonstrate that SSN-Fitting achieves state-of-the-art performance under different settings.
- Score: 53.379712818791894
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reconstructing 3D geometry from \emph{unoriented} point clouds can benefit
many downstream tasks. Recent methods mostly adopt a neural shape
representation with a neural network to represent a signed distance field and
fit the point cloud with an unsigned supervision. However, we observe that
using unsigned supervision may cause severe ambiguities and often leads to
\emph{unexpected} failures, such as generating undesired surfaces in free space
when reconstructing complex structures, and struggles to reconstruct
accurate surfaces. To reconstruct a better signed distance field, we propose
semi-signed neural fitting (SSN-Fitting), which consists of a semi-signed
supervision and a loss-based region sampling strategy. Our key insight is that
signed supervision is more informative and regions that are obviously outside
the object can be easily determined. Meanwhile, a novel importance sampling is
proposed to accelerate the optimization and better reconstruct the fine
details. Specifically, we voxelize and partition the object space into
\emph{sign-known} and \emph{sign-uncertain} regions, in which different
supervisions are applied. Also, we adaptively adjust the sampling rate of each
voxel according to the tracked reconstruction loss, so that the network can
focus more on the complex under-fitting regions. We conduct extensive
experiments to demonstrate that SSN-Fitting achieves state-of-the-art
performance under different settings on multiple datasets, including clean,
density-varying, and noisy data.
Related papers
- GeoTransfer : Generalizable Few-Shot Multi-View Reconstruction via Transfer Learning [8.452349885923507]
We present a novel approach for sparse 3D reconstruction by leveraging the expressive power of Neural Radiance Fields (NeRFs).
Our proposed method offers the best of both worlds by transferring the information encoded in NeRF features to derive an accurate occupancy field representation.
We evaluate our approach on the DTU dataset and demonstrate state-of-the-art performance in terms of reconstruction accuracy.
arXiv Detail & Related papers (2024-08-27T01:28:15Z)
- Critical Regularizations for Neural Surface Reconstruction in the Wild [26.460011241432092]
We present RegSDF, which shows that proper point cloud supervisions and geometry regularizations are sufficient to produce high-quality and robust reconstruction results.
RegSDF is able to reconstruct surfaces with fine details even for open scenes with complex topologies and unstructured camera trajectories.
arXiv Detail & Related papers (2022-06-07T08:11:22Z)
- iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continuous learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
arXiv Detail & Related papers (2022-04-05T15:48:39Z)
- Deep Surface Reconstruction from Point Clouds with Visibility Information [66.05024551590812]
We present two simple ways to augment raw point clouds with visibility information, so it can directly be leveraged by surface reconstruction networks with minimal adaptation.
Our proposed modifications consistently improve the accuracy of generated surfaces as well as the generalization ability of the networks to unseen shape domains.
arXiv Detail & Related papers (2022-02-03T19:33:47Z)
- Sign-Agnostic CONet: Learning Implicit Surface Reconstructions by Sign-Agnostic Optimization of Convolutional Occupancy Networks [39.65056638604885]
We learn implicit surface reconstruction by sign-agnostic optimization of convolutional occupancy networks.
We show this goal can be effectively achieved by a simple yet effective design.
arXiv Detail & Related papers (2021-05-08T03:35:32Z)
- Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces [68.12457459590921]
Reconstructing continuous surfaces from 3D point clouds is a fundamental operation in 3D geometry processing.
We introduce Neural-Pull, a new approach that is simple and leads to high quality SDFs.
arXiv Detail & Related papers (2020-11-26T23:18:10Z)
- Reconfigurable Voxels: A New Representation for LiDAR-Based Point Clouds [76.52448276587707]
We propose Reconfigurable Voxels, a new approach to constructing representations from 3D point clouds.
Specifically, we devise a biased random walk scheme, which adaptively covers each neighborhood with a fixed number of voxels.
We find that this approach effectively improves the stability of voxel features, especially for sparse regions.
arXiv Detail & Related papers (2020-04-06T15:07:16Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.