PUFA-GAN: A Frequency-Aware Generative Adversarial Network for 3D Point
Cloud Upsampling
- URL: http://arxiv.org/abs/2203.00914v1
- Date: Wed, 2 Mar 2022 07:47:46 GMT
- Title: PUFA-GAN: A Frequency-Aware Generative Adversarial Network for 3D Point
Cloud Upsampling
- Authors: Hao Liu, Hui Yuan, Junhui Hou, Raouf Hamzaoui, Wei Gao
- Abstract summary: We propose a generative adversarial network for point cloud upsampling.
It can not only make the upsampled points evenly distributed on the underlying surface but also efficiently generate clean high frequency regions.
- Score: 56.463507980857216
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We propose a generative adversarial network for point cloud upsampling, which
can not only make the upsampled points evenly distributed on the underlying
surface but also efficiently generate clean high frequency regions. The
generator of our network includes a dynamic graph hierarchical residual
aggregation unit and a hierarchical residual aggregation unit for point feature
extraction and upsampling, respectively. The former extracts multiscale
point-wise descriptive features, while the latter captures rich feature details
with hierarchical residuals. To generate neat edges, our discriminator uses a
graph filter to extract and retain high frequency points. The generated high
resolution point cloud and corresponding high frequency points help the
discriminator learn the global and high frequency properties of the point
cloud. We also propose an identity distribution loss function to make sure that
the upsampled points remain on the underlying surface of the input low
resolution point cloud. To assess the regularity of the upsampled points in
high frequency regions, we introduce two evaluation metrics. Objective and
subjective results demonstrate that the visual quality of the upsampled point
clouds generated by our method is better than that of the state-of-the-art
methods.
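As an illustration of the graph-filter idea described in the abstract (the exact filter used in the paper is not reproduced here), the following minimal Python sketch scores each point with a graph-Laplacian-style high-pass response over its k nearest neighbours and keeps the strongest responders as "high frequency" points; the function name and the parameters k and ratio are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def extract_high_frequency_points(points: np.ndarray, k: int = 8, ratio: float = 0.2) -> np.ndarray:
    """Keep the points with the strongest graph high-pass response.

    Illustrative sketch: each point is scored by its distance to the centroid
    of its k nearest neighbours (a graph-Laplacian-like response), so points
    on edges and corners score higher than points on smooth, flat regions.
    """
    n = points.shape[0]
    # Pairwise squared distances (N x N); fine for clouds of a few thousand points.
    diff = points[:, None, :] - points[None, :, :]
    dist2 = np.einsum("ijk,ijk->ij", diff, diff)
    # Indices of the k nearest neighbours, skipping the point itself (column 0).
    knn = np.argsort(dist2, axis=1)[:, 1:k + 1]
    # High-pass response: offset of each point from its neighbourhood centroid.
    centroids = points[knn].mean(axis=1)                   # (N, 3)
    response = np.linalg.norm(points - centroids, axis=1)  # (N,)
    # Retain the fraction of points with the largest response.
    keep = np.argsort(response)[::-1][: max(1, int(ratio * n))]
    return points[keep]

if __name__ == "__main__":
    pts = np.random.default_rng(0).random((2048, 3)).astype(np.float32)
    print(extract_high_frequency_points(pts, k=8, ratio=0.1).shape)  # (204, 3)
```

Points on edges and corners deviate most from their neighbourhood centroid and therefore survive the selection, which matches the discriminator's stated goal of focusing on sharp, high frequency regions.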
Related papers
- Arbitrary-Scale Point Cloud Upsampling by Voxel-Based Network with
Latent Geometric-Consistent Learning [52.825441454264585]
We propose an arbitrary-scale Point cloud Upsampling framework using a Voxel-based Network (PU-VoxelNet).
Thanks to the completeness and regularity inherited from the voxel representation, voxel-based networks can provide a predefined grid space to approximate the 3D surface.
A density-guided grid resampling method is developed to generate high-fidelity points while effectively avoiding sampling outliers.
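As a rough, hedged sketch of the density-guided grid resampling idea just summarized (not PU-VoxelNet's actual implementation), the snippet below allocates output samples to occupied voxels in proportion to a predicted per-voxel density and jitters them inside the voxel, so empty space never receives samples; the voxel size, density array, and function name are assumptions.

```python
import numpy as np

def density_guided_resample(voxel_centers, densities, n_out, voxel_size, rng=None):
    """Toy density-guided grid resampling.

    voxel_centers: (V, 3) centers of occupied voxels.
    densities:     (V,) predicted non-negative density per voxel.
    Allocates more of the n_out samples to denser voxels and jitters each
    sample uniformly inside its voxel, so points never land in empty space.
    """
    rng = rng or np.random.default_rng()
    probs = densities / densities.sum()
    # Pick a voxel for every output point, proportional to its predicted density.
    idx = rng.choice(len(voxel_centers), size=n_out, p=probs)
    jitter = rng.uniform(-0.5, 0.5, size=(n_out, 3)) * voxel_size
    return voxel_centers[idx] + jitter

if __name__ == "__main__":
    centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    dens = np.array([0.2, 0.7, 0.1])
    print(density_guided_resample(centers, dens, n_out=16, voxel_size=0.1).shape)  # (16, 3)
```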
arXiv Detail & Related papers (2024-03-08T07:31:14Z) - Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z) - iPUNet:Iterative Cross Field Guided Point Cloud Upsampling [20.925921503694894]
Point clouds acquired by 3D scanning devices are often sparse, noisy, and non-uniform, causing a loss of geometric features.
We present a learning-based point upsampling method, iPUNet, which generates dense and uniform points at arbitrary ratios.
We demonstrate that iPUNet robustly handles noisy and non-uniformly distributed inputs, and outperforms state-of-the-art point cloud upsampling methods.
arXiv Detail & Related papers (2023-10-13T13:24:37Z) - Arbitrary point cloud upsampling via Dual Back-Projection Network [12.344557879284219]
We propose a Dual Back-Projection network for point cloud upsampling (DBPnet)
A Dual Back-Projection is formulated in an up-down-up manner for point cloud upsampling.
Experimental results show that the proposed method achieves the lowest point set matching losses.
arXiv Detail & Related papers (2023-07-18T06:11:09Z) - Grad-PU: Arbitrary-Scale Point Cloud Upsampling via Gradient Descent
with Learned Distance Functions [77.32043242988738]
We propose a new framework for accurate point cloud upsampling that supports arbitrary upsampling rates.
Our method first interpolates the low-res point cloud according to a given upsampling rate.
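The two-stage recipe summarized above can be sketched as follows (a toy stand-in, not Grad-PU itself): midpoint interpolation brings the cloud to the requested rate, and a few gradient-descent-style steps pull each new point toward the input surface. Grad-PU uses a learned point-to-surface distance function; this sketch substitutes a plain nearest-neighbour projection, and all names are illustrative.

```python
import numpy as np

def interpolate(points, rate):
    """Midpoint-style interpolation toward each point's nearest neighbour,
    producing rate x as many points as the input."""
    diff = points[:, None, :] - points[None, :, :]
    dist2 = np.einsum("ijk,ijk->ij", diff, diff)
    np.fill_diagonal(dist2, np.inf)
    nn = points[np.argmin(dist2, axis=1)]
    out = [points]
    for t in range(1, rate):
        out.append(points + (t / rate) * (nn - points))
    return np.concatenate(out, axis=0)

def refine(upsampled, low_res, steps=5, lr=0.3):
    """Gradient-descent-style refinement: step each point toward its nearest
    input point, a crude stand-in for minimizing a learned distance field."""
    for _ in range(steps):
        diff = upsampled[:, None, :] - low_res[None, :, :]
        dist2 = np.einsum("ijk,ijk->ij", diff, diff)
        nearest = low_res[np.argmin(dist2, axis=1)]
        upsampled = upsampled + lr * (nearest - upsampled)
    return upsampled

if __name__ == "__main__":
    lr_pts = np.random.default_rng(1).random((256, 3)).astype(np.float32)
    dense = refine(interpolate(lr_pts, rate=4), lr_pts)
    print(dense.shape)  # (1024, 3)
```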
arXiv Detail & Related papers (2023-04-24T06:36:35Z) - Parametric Surface Constrained Upsampler Network for Point Cloud [33.033469444588086]
We introduce a novel surface regularizer into the upsampler network by forcing the neural network to learn the underlying parametric surface represented by bicubic functions and rotation functions.
These designs are integrated into two different networks for two tasks that take advantage of upsampling layers.
The state-of-the-art experimental results on both tasks demonstrate the effectiveness of the proposed method.
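To make the bicubic-surface constraint concrete, here is a small hedged sketch (not the paper's network) that evaluates a height-field patch z(u, v) = sum_{i,j<=3} a_ij u^i v^j from a 4x4 coefficient matrix and samples points on it; in the actual upsampler such coefficients, together with a local rotation, would be predicted per patch by the network.

```python
import numpy as np

def bicubic_patch_points(coeffs, n=16):
    """Sample an n x n grid of 3D points on the bicubic patch
    z(u, v) = sum_{i,j<=3} coeffs[i, j] * u**i * v**j with (u, v) in [0, 1]^2.

    coeffs: (4, 4) bicubic coefficients (hand-picked here; in the upsampler
    they would be regressed by the network for each local patch).
    """
    u = np.linspace(0.0, 1.0, n)
    v = np.linspace(0.0, 1.0, n)
    uu, vv = np.meshgrid(u, v, indexing="ij")
    # Power bases u^i and v^j, shape (n, n, 4).
    up = uu[..., None] ** np.arange(4)
    vp = vv[..., None] ** np.arange(4)
    # Evaluate the bicubic form at every grid location.
    zz = np.einsum("abi,ij,abj->ab", up, coeffs, vp)
    return np.stack([uu, vv, zz], axis=-1).reshape(-1, 3)

if __name__ == "__main__":
    a = np.zeros((4, 4))
    a[1, 1] = 0.5                       # saddle-like term z = 0.5 * u * v
    print(bicubic_patch_points(a, n=8).shape)  # (64, 3)
```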
arXiv Detail & Related papers (2023-03-14T21:12:54Z) - Deep Point Cloud Simplification for High-quality Surface Reconstruction [24.29310178063052]
PCS-Net is dedicated to high-quality surface mesh reconstruction while maintaining geometric fidelity.
We propose a novel double-scale resampling module to refine the positions of the sampled points.
To further retain important shape features, an adaptive sampling strategy with a novel saliency loss is designed.
arXiv Detail & Related papers (2022-03-17T05:22:25Z) - Point Cloud Upsampling via Disentangled Refinement [86.3641957163818]
Point clouds produced by 3D scanning are often sparse, non-uniform, and noisy.
Recent upsampling approaches aim to generate a dense point set, while achieving both distribution uniformity and proximity-to-surface.
We formulate two cascaded sub-networks, a dense generator and a spatial refiner.
arXiv Detail & Related papers (2021-06-09T02:58:42Z) - SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine
Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.