Attribute Artifacts Removal for Geometry-based Point Cloud Compression
- URL: http://arxiv.org/abs/2112.00560v1
- Date: Wed, 1 Dec 2021 15:21:06 GMT
- Title: Attribute Artifacts Removal for Geometry-based Point Cloud Compression
- Authors: Xihua Sheng, Li Li, Dong Liu, Zhiwei Xiong
- Abstract summary: Geometry-based point cloud compression (G-PCC) can achieve remarkable compression efficiency for point clouds.
It still leads to serious attribute compression artifacts, especially under low bitrate scenarios.
We propose a Multi-Scale Graph Attention Network (MS-GAT) to remove the artifacts of point cloud attributes.
- Score: 43.60640890971367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Geometry-based point cloud compression (G-PCC) can achieve remarkable
compression efficiency for point clouds. However, it still leads to serious
attribute compression artifacts, especially under low bitrate scenarios. In
this paper, we propose a Multi-Scale Graph Attention Network (MS-GAT) to remove
the artifacts of point cloud attributes compressed by G-PCC. We first construct
a graph based on point cloud geometry coordinates and then use the Chebyshev
graph convolutions to extract features of point cloud attributes. Considering
that one point may be correlated with points both near and far away from it, we
propose a multi-scale scheme to capture the short and long range correlations
between the current point and its neighboring and distant points. To address
the problem that various points may have different degrees of artifacts caused
by adaptive quantization, we introduce the quantization step per point as an
extra input to the proposed network. We also incorporate a graph attentional
layer into the network to pay special attention to the points with more
attribute artifacts. To the best of our knowledge, this is the first attribute
artifacts removal method for G-PCC. We validate the effectiveness of our method
over various point clouds. Experimental results show that our proposed method
achieves an average of 9.28% BD-rate reduction. In addition, our approach
achieves some performance improvements for the downstream point cloud semantic
segmentation task.
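The abstract above describes the pipeline only at a high level: a graph built from geometry coordinates, Chebyshev graph convolutions over the attributes, a multi-scale scheme for short- and long-range correlations, the per-point quantization step as an extra input, and a graph attention layer. The following is a minimal, hypothetical PyTorch sketch of that pattern; the class names, layer widths, neighborhood sizes, and the exact attention form are illustrative assumptions, not the authors' MS-GAT implementation.

```python
# A minimal, hypothetical sketch (not the authors' MS-GAT code); assumes PyTorch.
# Shapes: xyz (N, 3) geometry coordinates, attr (N, C) decoded attributes,
# qstep (N, 1) per-point quantization step used as an extra network input.
import torch
import torch.nn as nn


def knn_scaled_laplacian(xyz: torch.Tensor, k: int) -> torch.Tensor:
    """Dense scaled Laplacian (~2L/lambda_max - I, lambda_max ~= 2) of a k-NN graph."""
    n = xyz.shape[0]
    dist = torch.cdist(xyz, xyz)                           # (N, N) pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]   # k nearest neighbors, drop self-loop
    adj = torch.zeros(n, n, device=xyz.device)
    adj.scatter_(1, idx, 1.0)
    adj = torch.maximum(adj, adj.t())                      # symmetrize
    deg = adj.sum(1)
    d_inv_sqrt = torch.where(deg > 0, deg.clamp(min=1e-12).pow(-0.5), torch.zeros_like(deg))
    lap = torch.eye(n, device=xyz.device) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return lap - torch.eye(n, device=xyz.device)


class ChebGraphConv(nn.Module):
    """K-order Chebyshev graph convolution over point-wise attribute features."""
    def __init__(self, in_ch: int, out_ch: int, order: int = 3):
        super().__init__()
        self.order = order
        self.lin = nn.Linear(in_ch * order, out_ch)

    def forward(self, x: torch.Tensor, lap: torch.Tensor) -> torch.Tensor:
        terms = [x, lap @ x]                               # T0, T1 of the Chebyshev recurrence
        for _ in range(2, self.order):
            terms.append(2 * (lap @ terms[-1]) - terms[-2])
        return self.lin(torch.cat(terms[: self.order], dim=-1))


class MultiScaleAttentionBlock(nn.Module):
    """Two graph scales (near/far neighborhoods) plus a per-point attention gate."""
    def __init__(self, in_ch: int, hid: int = 32, k_near: int = 8, k_far: int = 32):
        super().__init__()
        self.k_near, self.k_far = k_near, k_far
        self.conv_near = ChebGraphConv(in_ch, hid)
        self.conv_far = ChebGraphConv(in_ch, hid)
        self.gate = nn.Sequential(nn.Linear(2 * hid, 1), nn.Sigmoid())
        self.out = nn.Linear(2 * hid, in_ch - 1)           # residual over the attribute channels

    def forward(self, xyz, attr, qstep):
        x = torch.cat([attr, qstep], dim=-1)               # quantization step as an extra input
        f_near = self.conv_near(x, knn_scaled_laplacian(xyz, self.k_near))
        f_far = self.conv_far(x, knn_scaled_laplacian(xyz, self.k_far))
        f = torch.cat([f_near, f_far], dim=-1)
        f = f * self.gate(f)                               # emphasize points with heavier artifacts
        return attr + self.out(f)                          # artifact-reduced attributes
```

For YUV attributes (C = 3), one would instantiate `MultiScaleAttentionBlock(in_ch=4)` (three attribute channels plus the per-point quantization step) and call it as `block(xyz, attr, qstep)`; the two neighborhood sizes stand in for the paper's short- and long-range correlation scales.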
Related papers
- Att2CPC: Attention-Guided Lossy Attribute Compression of Point Clouds [18.244200436103156]
We propose an efficient attention-based method for lossy compression of point cloud attributes, leveraging an autoencoder architecture.
Experiments show that our method achieves an average improvement of 1.15 dB and 2.13 dB in BD-PSNR of Y channel and YUV channel, respectively.
arXiv Detail & Related papers (2024-10-23T12:32:21Z)
- P2P-Bridge: Diffusion Bridges for 3D Point Cloud Denoising [81.92854168911704]
We tackle the task of point cloud denoising through a novel framework that adapts Diffusion Schrödinger bridges to point clouds.
Experiments on object datasets show that P2P-Bridge achieves significant improvements over existing methods.
arXiv Detail & Related papers (2024-08-29T08:00:07Z)
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- GQE-Net: A Graph-based Quality Enhancement Network for Point Cloud Color Attribute [51.4803148196217]
We propose a graph-based quality enhancement network (GQE-Net) to reduce color distortion in point clouds.
GQE-Net uses geometry information as an auxiliary input and graph convolution blocks to extract local features efficiently.
Experimental results show that our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-03-24T02:33:45Z)
- Lossless Point Cloud Geometry and Attribute Compression Using a Learned Conditional Probability Model [2.670322123407995]
We present an efficient point cloud compression method that uses tensor-based deep neural networks to learn point cloud geometry and color probability.
Our method represents a point cloud with both an occupancy feature and three features at different bit depths in a unified representation.
arXiv Detail & Related papers (2023-03-11T23:50:02Z)
- Shrinking unit: a Graph Convolution-Based Unit for CNN-like 3D Point Cloud Feature Extractors [0.0]
We argue that a lack of inspiration from the image domain might be the primary cause of such a gap.
We propose a graph convolution-based unit, dubbed Shrinking unit, that can be stacked vertically and horizontally for the design of CNN-like 3D point cloud feature extractors.
arXiv Detail & Related papers (2022-09-26T15:28:31Z)
- GRASP-Net: Geometric Residual Analysis and Synthesis for Point Cloud Compression [16.98171403698783]
We propose a heterogeneous approach with deep learning for lossy point cloud geometry compression.
Specifically, a point-based network is applied to convert the erratic local details to latent features residing on the coarse point cloud.
arXiv Detail & Related papers (2022-09-09T17:09:02Z)
- Density-preserving Deep Point Cloud Compression [72.0703956923403]
We propose a novel deep point cloud compression method that preserves local density information.
Our method works in an auto-encoder fashion: the encoder downsamples the points and learns point-wise features, while the decoder upsamples the points using these features.
arXiv Detail & Related papers (2022-04-27T03:42:15Z)
- GPCO: An Unsupervised Green Point Cloud Odometry Method [64.86292006892093]
A lightweight point cloud odometry solution is proposed and named the green point cloud odometry (GPCO) method.
GPCO is an unsupervised learning method that predicts object motion by matching features of consecutive point cloud scans.
It is observed that GPCO outperforms benchmarking deep learning methods in accuracy while it has a significantly smaller model size and less training time.
arXiv Detail & Related papers (2021-12-08T00:24:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.