NeeDrop: Self-supervised Shape Representation from Sparse Point Clouds
using Needle Dropping
- URL: http://arxiv.org/abs/2111.15207v2
- Date: Thu, 2 Dec 2021 07:48:46 GMT
- Title: NeeDrop: Self-supervised Shape Representation from Sparse Point Clouds
using Needle Dropping
- Authors: Alexandre Boulch, Pierre-Alain Langlois, Gilles Puy, Renaud Marlet
- Abstract summary: We introduce NeeDrop, a self-supervised method for learning shape representations from sparse point clouds.
No shape knowledge is required and the point cloud can be highly sparse, e.g., lidar point clouds acquired by vehicles.
We obtain quantitative results on par with existing supervised approaches on shape reconstruction datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, there has been growing interest in implicit shape
representations. Unlike explicit representations, they have no resolution
limitations and easily handle a wide variety of surface topologies. To
learn these implicit representations, current approaches rely on a certain
level of shape supervision (e.g., inside/outside information or
distance-to-shape knowledge), or at least require a dense point cloud (to
approximate the distance to the shape well enough). In contrast, we introduce
NeeDrop, a self-supervised method for learning shape representations from
possibly extremely sparse point clouds. Like in Buffon's needle problem, we
"drop" (sample) needles on the point cloud and consider that, statistically,
close to the surface, the needle end points lie on opposite sides of the
surface. No shape knowledge is required and the point cloud can be highly
sparse, e.g., lidar point clouds acquired by vehicles. Previous
self-supervised shape representation approaches fail to produce good-quality
results on this kind of data. We obtain quantitative results on par with
existing supervised approaches on shape reconstruction datasets and show
promising qualitative results on hard autonomous driving datasets such as
KITTI.
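The needle-dropping idea in the abstract can be sketched as a self-supervised loss: sample short segments ("needles") centered on observed surface points and train an occupancy function to assign opposite labels to the two endpoints. The following is a minimal illustrative sketch, not the authors' implementation; `occ`, the needle length, and the loss form are all assumptions.

```python
import numpy as np

def needle_loss(occ, points, needle_len=0.05, n_needles=128, rng=None):
    """Sketch of a needle-dropping self-supervised loss.

    occ: callable mapping an (n, 3) array of 3D points to (n,) occupancy
         logits (the implicit shape representation being trained).
    points: (m, 3) sparse point cloud assumed to lie on the surface.
    Each needle is centered on a surface point; statistically its two
    endpoints should fall on opposite sides of the surface, so we
    penalize agreement of the predicted occupancy probabilities.
    """
    rng = np.random.default_rng() if rng is None else rng
    centers = points[rng.integers(0, len(points), size=n_needles)]
    dirs = rng.normal(size=(n_needles, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # random orientations
    a = centers + 0.5 * needle_len * dirs                 # one endpoint
    b = centers - 0.5 * needle_len * dirs                 # the other endpoint
    pa = 1.0 / (1.0 + np.exp(-occ(a)))                    # occupancy probabilities
    pb = 1.0 / (1.0 + np.exp(-occ(b)))
    # Probability that both endpoints receive the same label; we want it low.
    agree = pa * pb + (1.0 - pa) * (1.0 - pb)
    return float(np.mean(-np.log(1.0 - agree + 1e-8)))
```

An occupancy field whose zero level set passes through the points drives the loss down, since most needles then straddle the surface; a constant field is stuck at the chance value of log 2 per needle.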
Related papers
- Point2Vec for Self-Supervised Representation Learning on Point Clouds [66.53955515020053]
We extend data2vec to the point cloud domain and report encouraging results on several downstream tasks.
We propose point2vec, which unleashes the full potential of data2vec-like pre-training on point clouds.
arXiv Detail & Related papers (2023-03-29T10:08:29Z)
- Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors [52.25114448281418]
Current methods are able to reconstruct surfaces by learning Signed Distance Functions (SDFs) from single point clouds without ground truth signed distances or point normals.
We propose to reconstruct highly accurate surfaces from sparse point clouds with an on-surface prior.
Our method can learn SDFs from a single sparse point cloud without ground truth signed distances or point normals.
arXiv Detail & Related papers (2022-04-22T09:45:20Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- Unsupervised Point Cloud Representation Learning with Deep Neural Networks: A Survey [104.71816962689296]
Unsupervised point cloud representation learning has attracted increasing attention due to the constraint in large-scale point cloud labelling.
This paper provides a comprehensive review of unsupervised point cloud representation learning using deep neural networks.
arXiv Detail & Related papers (2022-02-28T07:46:05Z)
- SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable Rendering [21.563862632172363]
We propose a self-supervised point cloud upsampling network (SSPU-Net) to generate dense point clouds without using ground truth.
To achieve this, we exploit the consistency between the input sparse point cloud and generated dense point cloud for the shapes and rendered images.
arXiv Detail & Related papers (2021-08-01T13:26:01Z)
- Self-Contrastive Learning with Hard Negative Sampling for Self-supervised Point Cloud Learning [17.55440737986014]
We propose a novel self-contrastive learning for self-supervised point cloud representation learning.
We exploit self-similar point cloud patches within a single point cloud as positive samples and otherwise negative ones to facilitate the task of contrastive learning.
Experimental results show that the proposed method achieves state-of-the-art performance on widely used benchmark datasets.
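The self-similar-patch idea maps naturally onto a standard InfoNCE contrastive loss. This is a generic sketch of that loss, not the paper's exact formulation; the patch encoder producing the embeddings is assumed to exist elsewhere, and only the loss on its outputs is shown.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss on embedding vectors.

    anchor, positive: (d,) embeddings of two self-similar patches
                      from the same point cloud (a positive pair).
    negatives: (k, d) embeddings of dissimilar patches.
    Pulls the positive pair together on the unit sphere and pushes
    the negatives away from the anchor.
    """
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    a, p, n = norm(anchor), norm(positive), norm(negatives)
    logits = np.concatenate(([a @ p], n @ a)) / temperature
    logits -= logits.max()                 # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))        # cross-entropy, positive at index 0
```

The loss is near zero when the anchor aligns with its positive and is orthogonal to the negatives, and grows when a negative resembles the anchor more than the positive does.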
arXiv Detail & Related papers (2021-07-05T09:17:45Z)
- Learning Gradient Fields for Shape Generation [69.85355757242075]
A point cloud can be viewed as samples from a distribution of 3D points whose density is concentrated near the surface of the shape.
We generate point clouds by performing gradient ascent on an unnormalized probability density.
Our model directly predicts the gradient of the log density field and can be trained with a simple objective adapted from score-based generative models.
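Generating points by gradient ascent on a log-density field can be sketched as follows. The analytic score of a density concentrated on the unit sphere stands in for the learned network; the function names, step size, and stand-in score are illustrative assumptions, not the paper's model.

```python
import numpy as np

def sample_by_gradient_ascent(score, n_points=256, n_steps=80,
                              step=0.002, noise=0.0, rng=None):
    """Move random initial points along the predicted gradient of the
    log density (the 'score'), optionally with Langevin-style noise,
    so they concentrate where the density is high (near the surface)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = rng.uniform(-2.0, 2.0, size=(n_points, 3))    # random initialization
    for _ in range(n_steps):
        x = x + step * score(x)                       # gradient ascent step
        if noise > 0.0:
            x = x + np.sqrt(2.0 * step) * noise * rng.normal(size=x.shape)
    return x

def sphere_score(x, sigma=0.05):
    """Stand-in score: density concentrated on the unit sphere, so
    grad log p pushes each point radially toward radius 1."""
    r = np.linalg.norm(x, axis=1, keepdims=True)
    return -(r - 1.0) / sigma**2 * (x / np.maximum(r, 1e-8))
```

With `noise=0` the update contracts each point's radius toward 1 (provided `step` is small relative to `sigma**2`); a small positive `noise` turns this into Langevin dynamics that samples around the surface rather than collapsing onto it.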
arXiv Detail & Related papers (2020-08-14T18:06:15Z)
- Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance [30.863194319818223]
We propose to leverage the input point cloud as much as possible, by only adding connectivity information to existing points.
Our key innovation is a surrogate of local connectivity, calculated by comparing the intrinsic/extrinsic metrics.
We demonstrate that our method not only preserves details and handles ambiguous structures, but also generalizes strongly to unseen categories.
arXiv Detail & Related papers (2020-07-17T22:36:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.