Joint Geometry and Attribute Upsampling of Point Clouds Using
Frequency-Selective Models with Overlapped Support
- URL: http://arxiv.org/abs/2301.11630v1
- Date: Fri, 27 Jan 2023 10:20:06 GMT
- Title: Joint Geometry and Attribute Upsampling of Point Clouds Using
Frequency-Selective Models with Overlapped Support
- Authors: Viktoria Heimann and Andreas Spruck and André Kaup
- Abstract summary: We propose Frequency-Selective Upsampling (FSU), an upsampling scheme that upsamples geometry and attribute information of point clouds jointly.
We show the best performance for our proposed FSU in terms of point-to-plane error and plane-to-plane angular similarity.
- Score: 4.211128681972148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing demand for capturing our environment in three
dimensions for AR/VR applications and autonomous driving, among others, the
importance of high-resolution point clouds rises. As the capturing process is a
complex task, point cloud upsampling is often desired. We propose
Frequency-Selective Upsampling (FSU), an upsampling scheme that upsamples
geometry and attribute information of point clouds jointly in a sequential
manner with overlapped support areas. First, the point cloud is partitioned
into blocks with overlapping support areas. Then, a continuous frequency model
is generated that estimates the point cloud's surface locally. The model is
sampled at new positions for upsampling. In a subsequent step, another
frequency model is created that models the attribute signal. Here, knowledge
from the geometry upsampling is exploited for a simplified projection of the
points into two dimensions. The attribute model is evaluated at the upsampled
geometry positions. In our extensive evaluation, we assess geometry and
attribute upsampling independently and also show joint results. For geometry,
our proposed FSU achieves the best performance in terms of point-to-plane error
and plane-to-plane angular similarity. Moreover, FSU outperforms other color
upsampling schemes by 1.9 dB in terms of color PSNR. In addition, the visual
appearance of the point clouds clearly improves with FSU.
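To make the block-wise frequency modeling described in the abstract more concrete, the following minimal sketch (not the authors' implementation) fits a separable cosine basis to the height field of one block whose points are assumed to be already projected onto a local 2D plane, and then evaluates the fitted model at new positions. The function names, the ridge-regularized least-squares fit, and the basis size K are illustrative assumptions; the published FSU instead selects basis functions iteratively with overlapped-support weighting and adds a second, analogous model for the attribute signal.

import numpy as np

def cosine_basis(u, v, K):
    # Evaluate K*K separable cosine basis functions at scattered (u, v) in [0, 1]^2.
    k = np.arange(K)
    cu = np.cos(np.pi * u[:, None] * k[None, :])          # (N, K)
    cv = np.cos(np.pi * v[:, None] * k[None, :])          # (N, K)
    return (cu[:, :, None] * cv[:, None, :]).reshape(len(u), K * K)

def fit_local_surface(uv, height, K=8, ridge=1e-6):
    # Regularized least-squares fit of a continuous frequency model to one block's
    # locally projected height field (stand-in for FSU's iterative basis selection).
    A = cosine_basis(uv[:, 0], uv[:, 1], K)
    return np.linalg.solve(A.T @ A + ridge * np.eye(K * K), A.T @ height)

def resample(coeffs, uv_new, K=8):
    # Evaluate the fitted model at new, denser 2D positions.
    return cosine_basis(uv_new[:, 0], uv_new[:, 1], K) @ coeffs

# Toy usage for a single block; attributes (e.g. color) would be modeled
# analogously over the same (u, v) parameterization.
rng = np.random.default_rng(0)
uv = rng.random((200, 2))                       # sparse samples in the block
z = 0.1 * np.sin(2 * np.pi * uv[:, 0])          # synthetic surface heights
coeffs = fit_local_surface(uv, z)
uv_dense = rng.random((800, 2))                 # new (upsampled) positions
z_dense = resample(coeffs, uv_dense)            # heights at upsampled positions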
Related papers
- Curvature Informed Furthest Point Sampling [0.0]
We introduce a reinforcement learning-based sampling algorithm that enhances furthest point sampling (FPS).
Our approach ranks points by combining FPS-derived soft ranks with curvature scores computed by a deep neural network (a hedged sketch of such a blend appears after this list).
We provide comprehensive ablation studies, with both qualitative and quantitative insights into the effect of each feature on performance.
arXiv Detail & Related papers (2024-11-25T23:58:38Z)
- Arbitrary point cloud upsampling via Dual Back-Projection Network [12.344557879284219]
We propose a Dual Back-Projection network for point cloud upsampling (DBPnet).
A Dual Back-Projection is formulated in an up-down-up manner for point cloud upsampling.
Experimental results show that the proposed method achieves the lowest point set matching losses.
arXiv Detail & Related papers (2023-07-18T06:11:09Z)
- Grad-PU: Arbitrary-Scale Point Cloud Upsampling via Gradient Descent with Learned Distance Functions [77.32043242988738]
We propose a new framework for accurate point cloud upsampling that supports arbitrary upsampling rates.
Our method first interpolates the low-res point cloud according to a given upsampling rate.
arXiv Detail & Related papers (2023-04-24T06:36:35Z)
- BIMS-PU: Bi-Directional and Multi-Scale Point Cloud Upsampling [60.257912103351394]
We develop a new point cloud upsampling pipeline called BIMS-PU.
We decompose the up/downsampling procedure into several up/downsampling sub-steps by breaking the target sampling factor into smaller factors.
We show that our method achieves superior results to state-of-the-art approaches.
arXiv Detail & Related papers (2022-06-25T13:13:37Z)
- Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z)
- PUFA-GAN: A Frequency-Aware Generative Adversarial Network for 3D Point Cloud Upsampling [56.463507980857216]
We propose a generative adversarial network for point cloud upsampling.
It can not only make the upsampled points evenly distributed on the underlying surface but also efficiently generate clean high-frequency regions.
arXiv Detail & Related papers (2022-03-02T07:47:46Z)
- PU-Flow: a Point Cloud Upsampling Network with Normalizing Flows [58.96306192736593]
We present PU-Flow, which incorporates normalizing flows and feature techniques to produce dense points uniformly distributed on the underlying surface.
Specifically, we formulate the upsampling process as point interpolation in a latent space, where the interpolation weights are adaptively learned from local geometric context.
We show that our method outperforms state-of-the-art deep learning-based approaches in terms of reconstruction quality, proximity-to-surface accuracy, and computation efficiency.
arXiv Detail & Related papers (2021-07-13T07:45:48Z)
- SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z)
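The Curvature Informed Furthest Point Sampling entry above describes ranking points by blending FPS-derived soft ranks with curvature scores from a deep network. The sketch below shows one plausible form of such a blend; the soft-rank definition, the PCA surface-variation stand-in for the learned curvature scores, the min-max normalization, and the blend weight alpha are assumptions made for illustration, not details taken from that paper.

import numpy as np

def fps_order(points):
    # Greedy farthest point sampling; returns point indices in selection order.
    n = len(points)
    min_dist = np.full(n, np.inf)
    selected = np.zeros(n, dtype=bool)
    order = np.empty(n, dtype=int)
    current = 0
    for i in range(n):
        order[i] = current
        selected[current] = True
        min_dist = np.minimum(min_dist, np.linalg.norm(points - points[current], axis=1))
        min_dist[selected] = -1.0
        current = int(np.argmax(min_dist))
    return order

def curvature_scores(points, k=16):
    # PCA surface variation over k nearest neighbors -- a crude stand-in for the
    # learned curvature network described in the paper.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    neighbors = np.argsort(d2, axis=1)[:, :k]
    scores = np.empty(len(points))
    for i, idx in enumerate(neighbors):
        eigvals = np.linalg.eigvalsh(np.cov(points[idx].T))   # ascending order
        scores[i] = eigvals[0] / (eigvals.sum() + 1e-12)
    return scores

def curvature_informed_sample(points, m, alpha=0.5, k=16):
    # Blend FPS-derived soft ranks with curvature scores and keep the top-m points.
    n = len(points)
    order = fps_order(points)
    fps_score = np.empty(n)
    fps_score[order] = 1.0 - np.arange(n) / (n - 1)   # earlier pick -> higher score
    curv = curvature_scores(points, k)
    curv = (curv - curv.min()) / (curv.max() - curv.min() + 1e-12)
    combined = alpha * fps_score + (1.0 - alpha) * curv
    return np.argsort(-combined)[:m]

# toy usage
points = np.random.default_rng(1).random((256, 3))
subset = curvature_informed_sample(points, m=64)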