Point Cloud Upsampling via Disentangled Refinement
- URL: http://arxiv.org/abs/2106.04779v1
- Date: Wed, 9 Jun 2021 02:58:42 GMT
- Title: Point Cloud Upsampling via Disentangled Refinement
- Authors: Ruihui Li, Xianzhi Li, Pheng-Ann Heng, and Chi-Wing Fu
- Abstract summary: Point clouds produced by 3D scanning are often sparse, non-uniform, and noisy.
Recent upsampling approaches aim to generate a dense point set, while achieving both distribution uniformity and proximity-to-surface.
We formulate two cascaded sub-networks, a dense generator and a spatial refiner.
- Score: 86.3641957163818
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point clouds produced by 3D scanning are often sparse, non-uniform, and
noisy. Recent upsampling approaches aim to generate a dense point set, while
achieving both distribution uniformity and proximity-to-surface, and possibly
amending small holes, all in a single network. After revisiting the task, we
propose to disentangle the task based on its multi-objective nature and
formulate two cascaded sub-networks, a dense generator and a spatial refiner.
The dense generator infers a coarse but dense output that roughly describes the
underlying surface, while the spatial refiner further fine-tunes the coarse
output by adjusting the location of each point. Specifically, we design a pair
of local and global refinement units in the spatial refiner to evolve a coarse
feature map. Also, in the spatial refiner, we regress a per-point offset vector
to further adjust the coarse output at a fine scale. Extensive qualitative and
quantitative results on both synthetic and real-scanned datasets demonstrate
the superiority of our method over state-of-the-art approaches.
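The two-stage design described above lends itself to a small illustration. The sketch below (PyTorch) is not the authors' code: the layer sizes are arbitrary, and plain per-point MLPs stand in for the paper's local and global refinement units. It only shows the cascade itself, with a dense generator expanding each input point into r coarse candidates and a spatial refiner regressing a per-point offset vector.

```python
# Illustrative sketch only -- layer sizes and the per-point MLPs are assumptions,
# not the paper's exact architecture (which uses local/global refinement units).
import torch
import torch.nn as nn


class DenseGenerator(nn.Module):
    """Expands a sparse cloud (B, N, 3) into a coarse but dense cloud (B, r*N, 3)."""

    def __init__(self, r=4, feat_dim=64):
        super().__init__()
        self.r = r
        self.feat = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                  nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.expand = nn.Linear(feat_dim, r * 3)    # r candidate points per input point

    def forward(self, xyz):                         # xyz: (B, N, 3)
        delta = self.expand(self.feat(xyz))         # (B, N, r*3)
        delta = delta.reshape(xyz.shape[0], -1, 3)  # (B, r*N, 3)
        return xyz.repeat_interleave(self.r, dim=1) + delta


class SpatialRefiner(nn.Module):
    """Regresses a per-point offset vector that fine-tunes the coarse output."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.offset = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                    nn.Linear(feat_dim, 3))

    def forward(self, coarse):                      # coarse: (B, r*N, 3)
        return coarse + self.offset(coarse)         # adjust each point's location


if __name__ == "__main__":
    sparse = torch.rand(2, 256, 3)                  # toy sparse input
    coarse = DenseGenerator(r=4)(sparse)            # (2, 1024, 3), roughly on-surface
    refined = SpatialRefiner()(coarse)              # (2, 1024, 3), fine-tuned
    print(coarse.shape, refined.shape)
```

Keeping generation and refinement as separate modules mirrors the disentanglement argued for in the abstract: the first stage only has to cover the surface densely, while the second stage only has to move points, which is why a simple offset regression suffices there.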
Related papers
- Arbitrary-Scale Point Cloud Upsampling by Voxel-Based Network with
Latent Geometric-Consistent Learning [52.825441454264585]
We propose an arbitrary-scale point cloud upsampling framework using a voxel-based network (PU-VoxelNet).
Thanks to the completeness and regularity inherited from the voxel representation, voxel-based networks can provide a predefined grid space to approximate the 3D surface.
A density-guided grid resampling method is developed to generate high-fidelity points while effectively avoiding sampling outliers.
arXiv Detail & Related papers (2024-03-08T07:31:14Z) - iPUNet: Iterative Cross Field Guided Point Cloud Upsampling [20.925921503694894]
Point clouds acquired by 3D scanning devices are often sparse, noisy, and non-uniform, causing a loss of geometric features.
We present a learning-based point upsampling method, iPUNet, which generates dense and uniform points at arbitrary ratios.
We demonstrate that iPUNet is robust to handle noisy and non-uniformly distributed inputs, and outperforms state-of-the-art point cloud upsampling methods.
arXiv Detail & Related papers (2023-10-13T13:24:37Z) - Arbitrary point cloud upsampling via Dual Back-Projection Network [12.344557879284219]
We propose a Dual Back-Projection network (DBPnet) for point cloud upsampling.
The back-projection is formulated in an up-down-up manner.
Experimental results show that the proposed method achieves the lowest point set matching losses.
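"Point set matching losses" here usually means the Chamfer distance; whether DBPnet uses exactly this formulation is an assumption, but it is the standard choice and is worth having as a reference. A minimal PyTorch sketch:

```python
# A minimal Chamfer distance; treat the exact loss used by DBPnet as unspecified.
import torch


def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (B, N, 3) and q (B, M, 3)."""
    d = torch.cdist(p, q) ** 2                     # pairwise squared distances (B, N, M)
    # average nearest-neighbour distance in both directions
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)


pred, gt = torch.rand(2, 1024, 3), torch.rand(2, 4096, 3)
print(chamfer_distance(pred, gt))                  # one loss value per batch element
```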
arXiv Detail & Related papers (2023-07-18T06:11:09Z) - SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth
Sampling [75.957103837167]
Reconstructing a 3D shape based on a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape.
Existing works try to use the global feature extracted from the sketch to directly predict the 3D coordinates, but they usually lose fine details and are thus not faithful to the input sketch.
arXiv Detail & Related papers (2022-08-14T16:37:51Z) - Density-preserving Deep Point Cloud Compression [72.0703956923403]
We propose a novel deep point cloud compression method that preserves local density information.
Our method works in an auto-encoder fashion: the encoder downsamples the points and learns point-wise features, while the decoder upsamples the points using these features.
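The abstract does not say how the encoder downsamples the points; farthest point sampling is a common choice in point cloud networks, so the NumPy sketch below is a plausible stand-in rather than the paper's method.

```python
# Farthest point sampling (FPS), a generic way to downsample a point cloud;
# the compression paper may use a different or learned sampler.
import numpy as np


def farthest_point_sampling(points, k):
    """Pick k mutually distant points from an (N, 3) array."""
    n = points.shape[0]
    chosen = np.zeros(k, dtype=np.int64)
    dist = np.full(n, np.inf)
    chosen[0] = 0                                  # arbitrary starting point
    for i in range(1, k):
        diff = points - points[chosen[i - 1]]      # distances to the latest pick
        dist = np.minimum(dist, (diff ** 2).sum(axis=1))
        chosen[i] = int(dist.argmax())             # farthest remaining point
    return points[chosen]


downsampled = farthest_point_sampling(np.random.rand(4096, 3), 512)
print(downsampled.shape)                           # (512, 3)
```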
arXiv Detail & Related papers (2022-04-27T03:42:15Z) - PUFA-GAN: A Frequency-Aware Generative Adversarial Network for 3D Point
Cloud Upsampling [56.463507980857216]
We propose a generative adversarial network for point cloud upsampling.
It not only makes the upsampled points evenly distributed on the underlying surface but also efficiently generates clean high-frequency regions.
arXiv Detail & Related papers (2022-03-02T07:47:46Z) - SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine
Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z)