Arbitrary point cloud upsampling via Dual Back-Projection Network
- URL: http://arxiv.org/abs/2307.08992v1
- Date: Tue, 18 Jul 2023 06:11:09 GMT
- Title: Arbitrary point cloud upsampling via Dual Back-Projection Network
- Authors: Zhi-Song Liu, Zijia Wang, Zhen Jia
- Abstract summary: We propose a Dual Back-Projection network for point cloud upsampling (DBPnet).
A Dual Back-Projection is formulated in an up-down-up manner for point cloud upsampling.
Experimental results show that the proposed method achieves the lowest point set matching losses.
- Score: 12.344557879284219
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point clouds acquired from 3D sensors are usually sparse and noisy. Point
cloud upsampling is an approach to increase the density of the point cloud so
that detailed geometric information can be restored. In this paper, we propose
a Dual Back-Projection network for point cloud upsampling (DBPnet). A Dual
Back-Projection is formulated in an up-down-up manner for point cloud
upsampling. It back-projects not only feature residues but also coordinate
residues so that the network better captures the point correlations in the
feature and space domains, achieving lower reconstruction errors on both
uniform and non-uniform sparse point clouds. Our proposed method is also
generalizable for arbitrary upsampling tasks (e.g. 4x, 5.5x). Experimental
results show that the proposed method achieves the lowest point set matching
losses with respect to the benchmark. In addition, the success of our approach
demonstrates that generative networks are not necessarily needed for
non-uniform point clouds.
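The up-down-up back-projection described in the abstract can be illustrated with a minimal sketch. The `upsample` and `downsample` functions below are hypothetical stand-ins for the network's learned expansion and reduction steps (this sketch handles integer ratios only, whereas DBPnet also supports fractional ratios such as 5.5x), and the offsets here are random where a real network would predict them from features:

```python
import numpy as np

def upsample(points, r, seed=0):
    # Hypothetical stand-in for the learned upsampler: replicate each
    # point r times and perturb with small offsets that a real network
    # would predict from point features.
    reps = np.repeat(points, r, axis=0)
    offsets = 0.01 * np.random.default_rng(seed).standard_normal(reps.shape)
    return reps + offsets

def downsample(points, n):
    # Hypothetical stand-in for the back-projection to the sparse domain:
    # uniform subsampling back down to the n input points.
    idx = np.linspace(0, len(points) - 1, n).astype(int)
    return points[idx]

def dual_back_projection(sparse, r):
    # Up-down-up: form a dense guess, project it back to the sparse
    # domain, measure the coordinate residue against the input, then
    # upsample that residue to correct the dense guess.
    n = len(sparse)
    dense = upsample(sparse, r)                   # up: initial dense estimate
    back = downsample(dense, n)                   # down: back to sparse domain
    residue = sparse - back                       # coordinate residue
    return dense + upsample(residue, r, seed=1)   # up: corrected output
```

The same residue-feedback pattern is applied in the feature domain in the paper; this sketch shows only the coordinate branch.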
Related papers
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif).
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- Parametric Surface Constrained Upsampler Network for Point Cloud [33.033469444588086]
We introduce a novel surface regularizer into the upsampler network by forcing the neural network to learn the underlying parametric surface represented by bicubic functions and rotation functions.
These designs are integrated into two different networks for two tasks that take advantage of upsampling layers.
The state-of-the-art experimental results on both tasks demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2023-03-14T21:12:54Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- Deep Point Cloud Simplification for High-quality Surface Reconstruction [24.29310178063052]
PCS-Net is dedicated to high-quality surface mesh reconstruction while maintaining geometric fidelity.
We propose a novel double-scale resampling module to refine the positions of the sampled points.
To further retain important shape features, an adaptive sampling strategy with a novel saliency loss is designed.
arXiv Detail & Related papers (2022-03-17T05:22:25Z)
- PUFA-GAN: A Frequency-Aware Generative Adversarial Network for 3D Point Cloud Upsampling [56.463507980857216]
We propose a generative adversarial network for point cloud upsampling.
It not only makes the upsampled points evenly distributed on the underlying surface but also efficiently generates clean high-frequency regions.
arXiv Detail & Related papers (2022-03-02T07:47:46Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately sought to densify, denoise, and complete inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) refinement via transformers that convert the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
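Several entries above evaluate with point-set matching losses, most commonly the Chamfer Distance (CD) mentioned for the PDR paradigm. As a minimal illustration (not the papers' implementations), the symmetric CD between two point sets can be computed with NumPy broadcasting:

```python
import numpy as np

def chamfer_distance(p, q):
    # Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3):
    # for each point, the squared distance to its nearest neighbour in the
    # other set, averaged over each set and summed over both directions.
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Identical sets score 0; two single points at unit distance score 2 (squared distance 1 in each direction). This O(NM) formulation is fine for small sets; practical training code typically uses a GPU nearest-neighbour kernel instead.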
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.