SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable
Rendering
- URL: http://arxiv.org/abs/2108.00454v2
- Date: Tue, 3 Aug 2021 13:32:40 GMT
- Title: SSPU-Net: Self-Supervised Point Cloud Upsampling via Differentiable
Rendering
- Authors: Yifan Zhao, Le Hui, Jin Xie
- Abstract summary: We propose a self-supervised point cloud upsampling network (SSPU-Net) to generate dense point clouds without using ground truth.
To achieve this, we exploit the consistency between the input sparse point cloud and the generated dense point cloud in terms of both shape and rendered images.
- Score: 21.563862632172363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point clouds obtained from 3D sensors are usually sparse. Existing methods
mainly focus on upsampling sparse point clouds in a supervised manner by using
dense ground truth point clouds. In this paper, we propose a self-supervised
point cloud upsampling network (SSPU-Net) to generate dense point clouds
without using ground truth. To achieve this, we exploit the consistency between
the input sparse point cloud and the generated dense point cloud in terms of both
shape and rendered images. Specifically, we first propose a neighbor expansion unit (NEU)
to upsample the sparse point clouds, where the local geometric structures of
the sparse point clouds are exploited to learn weights for point interpolation.
Then, we develop a differentiable point cloud rendering unit (DRU) as an
end-to-end module in our network to render the point cloud into multi-view
images. Finally, we formulate a shape-consistent loss and an image-consistent
loss to train the network so that the shapes of the sparse and dense point
clouds are as consistent as possible. Extensive results on the CAD and scanned
datasets demonstrate that our method can achieve impressive results in a
self-supervised manner. Code is available at
https://github.com/fpthink/SSPU-Net.
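The core self-supervision idea above can be sketched in a few lines. The sketch below is illustrative only: `neighbor_expand` is a toy stand-in for the NEU (the real NEU learns interpolation weights from local geometric structure, whereas here the convex weights are random), and the two loss functions are plausible hypothetical forms of the shape-consistent and image-consistent losses, not the paper's exact definitions.

```python
import numpy as np

def neighbor_expand(points, k=4, ratio=2, seed=0):
    """Toy NEU-style upsampling: interpolate each point with its k nearest
    neighbors using random convex weights (the real NEU learns these)."""
    rng = np.random.default_rng(seed)
    # pairwise squared distances between all points, shape (N, N)
    d = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]  # k nearest neighbors per point
    new_pts = [points]
    for _ in range(ratio - 1):
        w = rng.dirichlet(np.ones(k), size=len(points))   # convex weights, (N, k)
        new_pts.append(np.einsum('nk,nkd->nd', w, points[nn]))
    return np.concatenate(new_pts, axis=0)                # (ratio * N, 3)

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets of shape (N,3) and (M,3)."""
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def shape_consistent_loss(sparse, dense):
    """Hypothetical shape-consistency term: the dense cloud should stay
    close to the sparse input's underlying shape."""
    return chamfer_distance(sparse, dense)

def image_consistent_loss(imgs_sparse, imgs_dense):
    """Hypothetical image-consistency term: compare multi-view renderings
    of the sparse and dense clouds (here, mean absolute pixel difference)."""
    return float(np.mean(np.abs(imgs_sparse - imgs_dense)))
```

Because the interpolated points are convex combinations of nearby input points, the upsampled cloud stays near the input surface, so the shape-consistent loss between the sparse input and the dense output remains small. In the actual SSPU-Net, the rendered images come from the differentiable point cloud rendering unit (DRU), which makes the image-consistent loss end-to-end trainable.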
Related papers
- Efficient Point Clouds Upsampling via Flow Matching [16.948354780275388]
Existing diffusion models struggle with inefficiencies as they map Gaussian noise to real point clouds.
We propose PUFM, a flow matching approach to directly map sparse point clouds to their high-fidelity dense counterparts.
Our method delivers superior upsampling quality but with fewer sampling steps.
arXiv Detail & Related papers (2025-01-25T17:50:53Z)
- GPN: Generative Point-based NeRF [0.65268245109828]
We propose using Generative Point-based NeRF (GPN) to reconstruct and repair a partial point cloud.
The repaired point cloud can achieve multi-view consistency with the captured images at high spatial resolution.
arXiv Detail & Related papers (2024-04-12T08:14:17Z)
- ComPC: Completing a 3D Point Cloud with 2D Diffusion Priors [52.72867922938023]
3D point clouds directly collected from objects through sensors are often incomplete due to self-occlusion.
We propose a test-time framework for completing partial point clouds across unseen categories without any requirement for training.
arXiv Detail & Related papers (2024-04-10T08:02:17Z)
- Learning Continuous Implicit Field with Local Distance Indicator for Arbitrary-Scale Point Cloud Upsampling [55.05706827963042]
Point cloud upsampling aims to generate dense and uniformly distributed point sets from a sparse point cloud.
Previous methods typically split a sparse point cloud into several local patches, upsample patch points, and merge all upsampled patches.
We propose a novel approach that learns an unsigned distance field guided by local priors for point cloud upsampling.
arXiv Detail & Related papers (2023-12-23T01:52:14Z)
- Learning Neural Volumetric Field for Point Cloud Geometry Compression [13.691147541041804]
We propose to code the geometry of a given point cloud by learning a neural field.
We divide the entire space into small cubes and represent each non-empty cube by a neural network and an input latent code.
The network is shared among all the cubes in a single frame or multiple frames, to exploit the spatial and temporal redundancy.
arXiv Detail & Related papers (2022-12-11T19:55:24Z)
- GRASP-Net: Geometric Residual Analysis and Synthesis for Point Cloud Compression [16.98171403698783]
We propose a heterogeneous approach with deep learning for lossy point cloud geometry compression.
Specifically, a point-based network is applied to convert the erratic local details to latent features residing on the coarse point cloud.
arXiv Detail & Related papers (2022-09-09T17:09:02Z)
- Unsupervised Point Cloud Representation Learning with Deep Neural Networks: A Survey [104.71816962689296]
Unsupervised point cloud representation learning has attracted increasing attention due to the constraint in large-scale point cloud labelling.
This paper provides a comprehensive review of unsupervised point cloud representation learning using deep neural networks.
arXiv Detail & Related papers (2022-02-28T07:46:05Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- "Zero Shot" Point Cloud Upsampling [4.737519767218666]
We present an unsupervised approach that upsamples point clouds at a holistic level, referred to as "Zero Shot" Point Cloud Upsampling (ZSPU).
Our approach relies solely on the internal information of a particular point cloud, without patching, in both the self-training and testing phases.
ZSPU achieves superior qualitative results on shapes with complex local details or high curvatures.
arXiv Detail & Related papers (2021-06-25T17:06:18Z)
- Self-Sampling for Neural Point Cloud Consolidation [83.31236364265403]
We introduce a novel technique for neural point cloud consolidation which learns from only the input point cloud.
We repeatedly self-sample the input point cloud with global subsets that are used to train a deep neural network.
We demonstrate the ability to consolidate point sets from a variety of shapes, while eliminating outliers and noise.
arXiv Detail & Related papers (2020-08-14T17:16:02Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.