4DSR-GCN: 4D Video Point Cloud Upsampling using Graph Convolutional
Networks
- URL: http://arxiv.org/abs/2306.01081v1
- Date: Thu, 1 Jun 2023 18:43:16 GMT
- Title: 4DSR-GCN: 4D Video Point Cloud Upsampling using Graph Convolutional
Networks
- Authors: Lorenzo Berlincioni, Stefano Berretti, Marco Bertini, Alberto Del
Bimbo
- Abstract summary: We propose a new solution for upscaling and restoration of time-varying 3D video point clouds after they have been compressed.
Our model consists of a specifically designed Graph Convolutional Network (GCN) that combines Dynamic Edge Convolution and Graph Attention Networks.
- Score: 29.615723135027096
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Time varying sequences of 3D point clouds, or 4D point clouds, are now being
acquired at an increasing pace in several applications (e.g., LiDAR in
autonomous or assisted driving). In many cases, such a volume of data must be
transmitted, requiring that proper compression tools be applied to reduce
either the resolution or the bandwidth. In this paper, we propose a new
solution for upscaling and restoration of time-varying 3D video point clouds
after they have been heavily compressed. In light of the growing relevance of
3D applications, we focused on a model allowing user-side upscaling and
artifact removal for 3D video point clouds. Our model consists of a
specifically designed Graph
Convolutional Network (GCN) that combines Dynamic Edge Convolution and Graph
Attention Networks for feature aggregation in a Generative Adversarial setting.
Taking inspiration from PointNet++, we present a different way to sample dense
point clouds, with the intent of making these modules work in synergy so that
each node receives enough features about its neighbourhood to later generate
new vertices. Compared to other solutions in the literature that address the
same task, our proposed model is capable of obtaining comparable results in
terms of quality of the reconstruction, while using a substantially lower
number of parameters (about 300KB), making our solution deployable in edge
computing devices such as LiDAR.
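The two mechanisms the abstract names, PointNet++-style sampling of dense point clouds and Dynamic Edge Convolution for aggregating neighbourhood features at each node, can be illustrated with a minimal NumPy sketch. The layer width, weight matrix, and neighbourhood size `k` below are illustrative assumptions, not the paper's architecture (which additionally uses Graph Attention and an adversarial training setup):

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Greedy farthest-point sampling (as in PointNet++):
    pick m well-spread seed points from the cloud."""
    chosen = [0]
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(m - 1):
        idx = int(np.argmax(dist))          # point farthest from all seeds
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)

def edge_conv(points, feats, k, weight):
    """One dynamic edge-convolution layer: for each point, build edge
    features [f_i, f_j - f_i] over its k nearest neighbours, apply a
    shared linear map + ReLU, then max-aggregate over the neighbourhood."""
    n = feats.shape[0]
    # pairwise squared distances; the kNN graph is rebuilt dynamically
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]          # exclude self
    out = np.empty((n, weight.shape[1]))
    for i in range(n):
        edge = np.concatenate(
            [np.repeat(feats[i:i + 1], k, axis=0),    # central feature f_i
             feats[knn[i]] - feats[i]],               # relative features
            axis=1)                                   # shape (k, 2*C_in)
        out[i] = np.maximum(0.0, edge @ weight).max(axis=0)
    return out

rng = np.random.default_rng(0)
cloud = rng.normal(size=(256, 3))                 # toy point cloud
seeds = farthest_point_sampling(cloud, 32)        # 32 well-spread seeds
w = rng.normal(scale=0.1, size=(6, 16))           # [f_i, f_j - f_i] -> 16-d
features = edge_conv(cloud, cloud.copy(), k=8, weight=w)
print(seeds.shape, features.shape)                # (32,) (256, 16)
```

In the full model, per-node features produced this way are what allow the generator to synthesize new vertices around each sampled point; here the weights are random, so only the shapes and the aggregation pattern are meaningful.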
Related papers
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity and even 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- GQE-Net: A Graph-based Quality Enhancement Network for Point Cloud Color Attribute [51.4803148196217]
We propose a graph-based quality enhancement network (GQE-Net) to reduce color distortion in point clouds.
GQE-Net uses geometry information as an auxiliary input and graph convolution blocks to extract local features efficiently.
Experimental results show that our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-03-24T02:33:45Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.