Deep Point Cloud Simplification for High-quality Surface Reconstruction
- URL: http://arxiv.org/abs/2203.09088v1
- Date: Thu, 17 Mar 2022 05:22:25 GMT
- Title: Deep Point Cloud Simplification for High-quality Surface Reconstruction
- Authors: Yuanqi Li, Jianwei Guo, Xinran Yang, Shun Liu, Jie Guo, Xiaopeng
Zhang, Yanwen Guo
- Abstract summary: PCS-Net is dedicated to high-quality surface mesh reconstruction while maintaining geometric fidelity.
We propose a novel double-scale resampling module to refine the positions of the sampled points.
To further retain important shape features, an adaptive sampling strategy with a novel saliency loss is designed.
- Score: 24.29310178063052
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing size of point clouds increases the cost of storing,
transmitting, and processing 3D scenes. Raw data are redundant, noisy, and
non-uniform, so simplifying point clouds into compact, clean, and uniform
point sets is becoming increasingly important for 3D vision and graphics
tasks. Previous learning-based methods aim to generate fewer points for scene
understanding, regardless of surface reconstruction quality, leading to
results with low reconstruction accuracy and poor point distribution.
In this paper, we propose a novel point cloud simplification network (PCS-Net)
dedicated to high-quality surface mesh reconstruction while maintaining
geometric fidelity. We first learn a sampling matrix in a feature-aware
simplification module to reduce the number of points. Then we propose a novel
double-scale resampling module to refine the positions of the sampled points,
to achieve a uniform distribution. To further retain important shape features,
an adaptive sampling strategy with a novel saliency loss is designed. With our
PCS-Net, the input non-uniform and noisy point cloud can be simplified in a
feature-aware manner, i.e., points near salient features are consolidated but
still with uniform distribution locally. Experiments demonstrate the
effectiveness of our method and show that we outperform previous simplification
or reconstruction-oriented upsampling methods.
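The first step described above, learning a sampling matrix that reduces n input points to m output points, can be sketched as a row-softmax weighting. This is a minimal illustration, not the paper's code: the logits below are random stand-ins for the scores a feature-aware network would predict, and all names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_sample(points, logits):
    """Apply a row-softmax sampling matrix to a point cloud.

    points: (n, 3) input cloud.
    logits: (m, n) scores, one row per output point (here random,
    standing in for a learned simplification module's output).
    """
    # Softmax over the input points turns each row into a convex
    # combination, so every output point lies inside the convex
    # hull of the input cloud.
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    s = w / w.sum(axis=1, keepdims=True)
    return s @ points  # (m, 3) simplified cloud

points = rng.normal(size=(1024, 3))    # dummy input cloud
logits = rng.normal(size=(256, 1024))  # stand-in for learned scores
simplified = soft_sample(points, logits)
print(simplified.shape)  # (256, 3)
```

Because each output is a convex combination of inputs, the positions stay differentiable with respect to the weights, which is what makes such a matrix learnable end to end.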
Related papers
- iPUNet: Iterative Cross Field Guided Point Cloud Upsampling [20.925921503694894]
Point clouds acquired by 3D scanning devices are often sparse, noisy, and non-uniform, causing a loss of geometric features.
We present a learning-based point upsampling method, iPUNet, which generates dense and uniform points at arbitrary ratios.
We demonstrate that iPUNet is robust to handle noisy and non-uniformly distributed inputs, and outperforms state-of-the-art point cloud upsampling methods.
arXiv Detail & Related papers (2023-10-13T13:24:37Z)
- Arbitrary point cloud upsampling via Dual Back-Projection Network [12.344557879284219]
We propose DBPnet, a Dual Back-Projection network for point cloud upsampling.
The back-projection is formulated in an up-down-up manner.
Experimental results show that the proposed method achieves the lowest point set matching losses.
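The up-down-up formulation can be illustrated with a toy sketch. The `upsample` and `downsample` callables below are simplistic stand-ins for the network's learned branches, assumed here for illustration only:

```python
import numpy as np

def back_projection_step(sparse, upsample, downsample):
    """One up-down-up pass: upsample, project back down, and use the
    residual against the input to correct the dense output.
    `upsample`/`downsample` stand in for learned network branches."""
    dense = upsample(sparse)           # up
    recon = downsample(dense)          # down
    residual = sparse - recon          # error measured at the sparse level
    return dense + upsample(residual)  # up again, with correction

# Toy stand-ins: upsample duplicates each point; downsample averages
# consecutive pairs (so it exactly inverts the toy upsampler).
up = lambda p: np.repeat(p, 2, axis=0)
down = lambda p: p.reshape(-1, 2, p.shape[1]).mean(axis=1)

sparse = np.arange(12, dtype=float).reshape(4, 3)
dense = back_projection_step(sparse, up, down)
print(dense.shape)  # (8, 3)
```

With these perfectly inverse toy operators the residual is zero; in the learned setting the residual carries the reconstruction error that the second upsampling pass corrects.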
arXiv Detail & Related papers (2023-07-18T06:11:09Z)
- Parametric Surface Constrained Upsampler Network for Point Cloud [33.033469444588086]
We introduce a novel surface regularizer into the upsampler network by forcing the neural network to learn the underlying parametric surface represented by bicubic functions and rotation functions.
These designs are integrated into two different networks for two tasks that take advantage of upsampling layers.
The state-of-the-art experimental results on both tasks demonstrate the effectiveness of the proposed method.
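As a rough illustration of what the surface regularizer targets, a bicubic height patch can be evaluated as below. The 4x4 coefficient array is a hypothetical stand-in for parameters the network would regress, and the rotation functions mentioned in the abstract are omitted:

```python
import numpy as np

def bicubic(coeffs, u, v):
    """Evaluate a bicubic height patch z(u, v) = sum_ij a_ij * u^i * v^j.
    `coeffs` is a 4x4 array of a_ij, a stand-in for regressed
    patch parameters."""
    U = np.array([u ** i for i in range(4)])  # powers of u: 1, u, u^2, u^3
    V = np.array([v ** j for j in range(4)])  # powers of v: 1, v, v^2, v^3
    return U @ coeffs @ V

# A patch containing only the u*v term: z(2, 3) = 2 * 3 = 6.
a = np.zeros((4, 4))
a[1, 1] = 1.0
print(bicubic(a, 2.0, 3.0))  # 6.0
```

Constraining generated points to lie on such a low-degree patch is one way to keep upsampled points on a smooth local surface.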
arXiv Detail & Related papers (2023-03-14T21:12:54Z)
- BIMS-PU: Bi-Directional and Multi-Scale Point Cloud Upsampling [60.257912103351394]
We develop a new point cloud upsampling pipeline called BIMS-PU.
We decompose the up/downsampling procedure into several up/downsampling sub-steps by breaking the target sampling factor into smaller factors.
We show that our method achieves superior results to state-of-the-art approaches.
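Breaking the target sampling factor into smaller factors can be sketched as a plain integer factorization. This is an assumed illustration of the idea; the paper's actual sub-step schedule may differ:

```python
def decompose_factor(r, smallest=2):
    """Break an integer sampling factor r into small sub-factors whose
    product is r, e.g. 16 -> [2, 2, 2, 2] and 12 -> [2, 2, 3], so
    up/downsampling can proceed in several smaller sub-steps."""
    steps, f = [], smallest
    while r > 1:
        while r % f == 0:  # peel off the smallest factor repeatedly
            steps.append(f)
            r //= f
        f += 1
    return steps

print(decompose_factor(16))  # [2, 2, 2, 2]
print(decompose_factor(12))  # [2, 2, 3]
```

Each sub-factor then corresponds to one up- or downsampling sub-step in the pipeline.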
arXiv Detail & Related papers (2022-06-25T13:13:37Z)
- PUFA-GAN: A Frequency-Aware Generative Adversarial Network for 3D Point Cloud Upsampling [56.463507980857216]
We propose a generative adversarial network for point cloud upsampling.
It not only makes the upsampled points evenly distributed on the underlying surface but also efficiently generates clean high-frequency regions.
arXiv Detail & Related papers (2022-03-02T07:47:46Z)
- Revisiting Point Cloud Simplification: A Learnable Feature Preserving Approach [57.67932970472768]
Mesh and Point Cloud simplification methods aim to reduce the complexity of 3D models while retaining visual quality and relevant salient features.
We propose a fast point cloud simplification method by learning to sample salient points.
The proposed method relies on a graph neural network architecture trained to select an arbitrary, user-defined, number of points from the input space and to re-arrange their positions so as to minimize the visual perception error.
arXiv Detail & Related papers (2021-09-30T10:23:55Z)
- SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z) - Learning Occupancy Function from Point Clouds for Surface Reconstruction [6.85316573653194]
Implicit function based surface reconstruction has been studied for a long time to recover 3D shapes from point clouds sampled from surfaces.
This paper proposes a novel method for learning occupancy functions from sparse point clouds and achieves better performance on challenging surface reconstruction tasks.
arXiv Detail & Related papers (2020-10-22T02:07:29Z) - Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation
and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z) - GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.