SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine
Reconstruction with Self-Projection Optimization
- URL: http://arxiv.org/abs/2012.04439v1
- Date: Tue, 8 Dec 2020 14:14:09 GMT
- Title: SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine
Reconstruction with Self-Projection Optimization
- Authors: Xinhai Liu, Xinchen Liu, Zhizhong Han, Yu-Shen Liu
- Abstract summary: It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real-scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
- Score: 52.20602782690776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of point cloud upsampling aims to acquire dense and uniform point
sets from sparse and irregular point sets. Although significant progress has
been made with deep learning models, they require ground-truth dense point sets
as supervision, which means they can only be trained on synthetic paired
training data and are not suitable for training on real-scanned sparse data.
However, it is expensive and tedious to obtain large-scale paired sparse-dense
point sets from real scanned sparse data. To address this problem,
we propose a self-supervised point cloud upsampling network, named SPU-Net, to
capture the inherent upsampling patterns of points lying on the underlying
object surface. Specifically, we propose a coarse-to-fine reconstruction
framework, which contains two main components: point feature extraction and
point feature expansion. In point feature extraction, we integrate a
self-attention module with a graph convolution network (GCN) to simultaneously
capture context information within and among local regions. In point feature
expansion, we introduce a hierarchically learnable folding strategy to generate
the upsampled point sets with learnable 2D grids.
Moreover, to further refine the noisy points in the generated point sets, we
propose a novel self-projection optimization, combined with uniformity and
reconstruction terms as a joint loss, to facilitate self-supervised point cloud
upsampling. We conduct various experiments on both synthetic and
real-scanned datasets, and the results demonstrate that we achieve comparable
performance to the state-of-the-art supervised methods.
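The abstract describes the architecture only at a high level, so the following PyTorch snippet is a rough, assumption-laden sketch of a coarse-to-fine upsampling pipeline of this kind rather than the authors' implementation: a shared point MLP plus multi-head self-attention stands in for the GCN-with-self-attention feature extractor, a single learnable 2D-grid folding stage stands in for the hierarchical folding expansion, and a Chamfer reconstruction term plus a simple spacing penalty stands in for the joint loss (the self-projection term is omitted). All module names, layer sizes, and loss weights are illustrative.
```python
# Minimal sketch of a coarse-to-fine point cloud upsampler in the spirit of
# SPU-Net. Everything below is an illustrative assumption, not the paper's code.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Per-point features from xyz via a shared MLP plus one self-attention pass."""
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, xyz):                      # xyz: (B, N, 3)
        f = self.mlp(xyz)                        # (B, N, dim) per-point features
        ctx, _ = self.attn(f, f, f)              # context across the whole point set
        return f + ctx                           # (B, N, dim)

class FoldingExpansion(nn.Module):
    """Duplicate each point feature r times and fold with learnable 2D grid codes."""
    def __init__(self, dim=64, ratio=4):
        super().__init__()
        self.ratio = ratio
        self.grid = nn.Parameter(torch.randn(ratio, 2) * 0.05)   # learnable 2D grid
        self.fold = nn.Sequential(nn.Linear(dim + 2, dim), nn.ReLU(), nn.Linear(dim, 3))

    def forward(self, xyz, feat):                # (B, N, 3), (B, N, dim)
        B, N, _ = xyz.shape
        feat = feat.unsqueeze(2).expand(-1, -1, self.ratio, -1)          # (B, N, r, dim)
        grid = self.grid.view(1, 1, self.ratio, 2).expand(B, N, -1, -1)  # (B, N, r, 2)
        offset = self.fold(torch.cat([feat, grid], dim=-1))              # (B, N, r, 3)
        dense = xyz.unsqueeze(2) + offset        # coarse copies plus folded offsets
        return dense.reshape(B, N * self.ratio, 3)

def chamfer(a, b):
    """Symmetric Chamfer distance, used as a stand-in reconstruction term."""
    d = torch.cdist(a, b)                        # (B, Na, Nb) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

def joint_loss(dense, sparse_input, w_rec=1.0, w_uni=0.1):
    # Reconstruction: upsampled points should stay near the input surface samples.
    rec = chamfer(dense, sparse_input)
    # Uniformity (simplified): penalize very small nearest-neighbor gaps in the output.
    d = torch.cdist(dense, dense)
    d = d + torch.eye(dense.shape[1], device=dense.device) * 1e6   # mask self-distance
    uni = (1.0 / (d.min(dim=2).values + 1e-4)).mean()
    return w_rec * rec + w_uni * uni

if __name__ == "__main__":
    xyz = torch.rand(2, 256, 3)                  # a sparse input patch
    extract, expand = FeatureExtractor(), FoldingExpansion()
    dense = expand(xyz, extract(xyz))            # (2, 1024, 3) upsampled points
    print(joint_loss(dense, xyz))
```
In a self-supervised setup of this kind, the loss is computed against the sparse input itself (or patches resampled from it) rather than a ground-truth dense point set, which is what removes the need for paired sparse-dense training data.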
Related papers
- Curvature Informed Furthest Point Sampling [0.0]
We introduce a reinforcement learning-based sampling algorithm that enhances furthest point sampling (FPS)
Our approach ranks points by combining FPS-derived soft ranks with curvature scores computed by a deep neural network.
We provide comprehensive ablation studies, with both qualitative and quantitative insights into the effect of each feature on performance.
arXiv Detail & Related papers (2024-11-25T23:58:38Z) - Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z) - Point Cloud Upsampling via Cascaded Refinement Network [39.79759035338819]
Upsampling a point cloud in a coarse-to-fine manner is a decent solution.
Existing coarse-to-fine upsampling methods require extra training strategies.
In this paper, we propose a simple yet effective cascaded refinement network.
arXiv Detail & Related papers (2022-10-08T07:09:37Z) - BIMS-PU: Bi-Directional and Multi-Scale Point Cloud Upsampling [60.257912103351394]
We develop a new point cloud upsampling pipeline called BIMS-PU.
We decompose the up/downsampling procedure into several up/downsampling sub-steps by breaking the target sampling factor into smaller factors.
We show that our method achieves superior results to state-of-the-art approaches.
arXiv Detail & Related papers (2022-06-25T13:13:37Z) - Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit
Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z) - Point Set Self-Embedding [63.23565826873297]
This work presents an innovative method for point set self-embedding that encodes structural information of a dense point set into its sparser version in a visual but imperceptible form.
The self-embedded point set can function as the ordinary downsampled one and be visualized efficiently on mobile devices.
We can leverage the self-embedded information to fully restore the original point set for detailed analysis on remote servers.
arXiv Detail & Related papers (2022-02-28T07:03:33Z) - PointLIE: Locally Invertible Embedding for Point Cloud Sampling and
Recovery [35.353458457283544]
Point Cloud Sampling and Recovery (PCSR) is critical for massive real-time point cloud collection and processing.
We propose a novel Locally Invertible Embedding for point cloud adaptive sampling and recovery (PointLIE)
PointLIE unifies point cloud sampling and upsampling to one single framework through bi-directional learning.
arXiv Detail & Related papers (2021-04-30T05:55:59Z) - Self-Sampling for Neural Point Cloud Consolidation [83.31236364265403]
We introduce a novel technique for neural point cloud consolidation which learns from only the input point cloud.
We repeatedly self-sample the input point cloud with global subsets that are used to train a deep neural network.
We demonstrate the ability to consolidate point sets from a variety of shapes, while eliminating outliers and noise.
arXiv Detail & Related papers (2020-08-14T17:16:02Z)