Deep-JGAC: End-to-End Deep Joint Geometry and Attribute Compression for Dense Colored Point Clouds
- URL: http://arxiv.org/abs/2502.17939v1
- Date: Tue, 25 Feb 2025 08:01:57 GMT
- Title: Deep-JGAC: End-to-End Deep Joint Geometry and Attribute Compression for Dense Colored Point Clouds
- Authors: Yun Zhang, Zixi Guo, Linwei Zhu, C.-C. Jay Kuo
- Abstract summary: We propose an end-to-end Deep Joint Geometry and Attribute point cloud Compression framework. It exploits the correlation between the geometry and attribute for high compression efficiency. The proposed Deep-JGAC achieves an average of 82.96%, 36.46%, 41.72%, and 31.16% bit-rate reductions.
- Score: 32.891169081810574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Colored point clouds have become a fundamental representation in the realm of 3D vision, and effective Point Cloud Compression (PCC) is urgently needed due to the huge amount of data they carry. In this paper, we propose an end-to-end Deep Joint Geometry and Attribute point cloud Compression (Deep-JGAC) framework for dense colored point clouds, which exploits the correlation between geometry and attribute for high compression efficiency. Firstly, we propose a flexible Deep-JGAC framework in which the geometry and attribute sub-encoders are compatible with either learning-based or non-learning-based geometry and attribute encoders. Secondly, we propose an attribute-assisted deep geometry encoder that enhances the geometry latent representation with the help of the attribute, while the geometry decoding remains unchanged. Moreover, an Attribute Information Fusion Module (AIFM) is proposed to fuse attribute information into geometry coding. Thirdly, to solve the mismatch between the point cloud geometry and attribute caused by geometry compression distortion, we present an optimized re-colorization module that attaches the attribute to the geometrically distorted point cloud for attribute coding, which enhances the colorization and lowers the computational complexity. Extensive experimental results demonstrate that, in terms of the geometry quality metric D1-PSNR, the proposed Deep-JGAC achieves average bit-rate reductions of 82.96%, 36.46%, 41.72%, and 31.16% compared to the state-of-the-art G-PCC, V-PCC, GRASP, and PCGCv2, respectively. In terms of the perceptual joint quality metric MS-GraphSIM, Deep-JGAC achieves average bit-rate reductions of 48.72%, 14.67%, and 57.14% compared to G-PCC, V-PCC, and IT-DL-PCC, respectively. The encoding/decoding time costs are also reduced by 94.29%/24.70% and 96.75%/91.02% on average compared with V-PCC and IT-DL-PCC.
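The abstract describes the architecture only at a high level. As one illustration of the attribute-assisted geometry encoding idea behind the AIFM, the minimal PyTorch sketch below fuses per-voxel attribute features into the geometry latent via concatenation and 1x1x1 convolutions; all module names, channel sizes, and the dense-voxel formulation are assumptions made for illustration, not the paper's actual implementation.

```python
# Minimal, hypothetical sketch of attribute-assisted geometry feature fusion,
# loosely inspired by the AIFM idea in the abstract. Dense Conv3d is used
# purely for illustration; the paper operates on sparse colored point clouds.
import torch
import torch.nn as nn

class ToyAIFM(nn.Module):
    """Fuse attribute features into the geometry latent (hypothetical)."""
    def __init__(self, geo_ch=32, attr_ch=16):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv3d(geo_ch + attr_ch, geo_ch, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(geo_ch, geo_ch, kernel_size=1),
        )

    def forward(self, f_geo, f_attr):
        # f_geo: (B, geo_ch, D, H, W) geometry features
        # f_attr: (B, attr_ch, D, H, W) attribute features on the same grid
        return f_geo + self.fuse(torch.cat([f_geo, f_attr], dim=1))

class ToyJointGeometryEncoder(nn.Module):
    """Occupancy + color grids -> attribute-enhanced geometry latent."""
    def __init__(self):
        super().__init__()
        self.geo_net = nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1)
        self.attr_net = nn.Conv3d(3, 16, kernel_size=3, stride=2, padding=1)
        self.aifm = ToyAIFM(32, 16)

    def forward(self, occupancy, colors):
        return self.aifm(self.geo_net(occupancy), self.attr_net(colors))

if __name__ == "__main__":
    occ = torch.rand(1, 1, 32, 32, 32)   # voxelized occupancy
    rgb = torch.rand(1, 3, 32, 32, 32)   # voxelized colors
    latent = ToyJointGeometryEncoder()(occ, rgb)
    print(latent.shape)                  # torch.Size([1, 32, 16, 16, 16])
```

Since the abstract states that geometry decoding remains unchanged, a fusion of this kind only affects what the geometry encoder sees, not the decoder-side bitstream syntax.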
Related papers
- High Efficiency Wiener Filter-based Point Cloud Quality Enhancement for MPEG G-PCC [23.8642501868336]
Point clouds directly record the geometry and attributes of scenes or objects with a large number of points.
The Moving Picture Experts Group has developed the geometry-based point cloud compression (G-PCC) standard for both static and dynamic point clouds.
We propose a high efficiency Wiener filter that can be integrated into the encoder and decoder pipeline of G-PCC.
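As background for how a Wiener-style filter can enhance decoded attributes, here is a minimal NumPy/SciPy sketch that fits least-squares filter taps over the k nearest neighbors at the encoder, so that only a small tap vector would need to be signalled to the decoder. The neighborhood size, the single-channel attribute, and the function names are assumptions; the paper's actual filter design differs.

```python
# Hypothetical sketch of a Wiener-style attribute filter: fit filter taps over
# k nearest neighbors minimizing the MSE between the filtered reconstructed
# attribute and the original attribute.
import numpy as np
from scipy.spatial import cKDTree

def fit_wiener_taps(points, recon_attr, orig_attr, k=8):
    """Return taps w (k,) minimizing ||A w - orig_attr||^2 and the kNN index."""
    _, idx = cKDTree(points).query(points, k=k)   # k nearest reconstructed points
    A = recon_attr[idx]                           # (N, k) neighbor attributes
    w, *_ = np.linalg.lstsq(A, orig_attr, rcond=None)
    return w, idx

def apply_wiener_taps(recon_attr, idx, w):
    return recon_attr[idx] @ w                    # filtered attributes, (N,)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((500, 3))
    orig = rng.random(500)
    recon = orig + 0.05 * rng.standard_normal(500)   # simulated coding noise
    w, idx = fit_wiener_taps(pts, recon, orig)
    filtered = apply_wiener_taps(recon, idx, w)
    print(np.mean((recon - orig) ** 2), np.mean((filtered - orig) ** 2))
```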
arXiv Detail & Related papers (2025-03-21T18:24:58Z)
- PCE-GAN: A Generative Adversarial Network for Point Cloud Attribute Quality Enhancement based on Optimal Transport [56.56430888985025]
We propose a generative adversarial network for point cloud quality enhancement (PCE-GAN).
The generator consists of a local feature extraction (LFE) unit, a global spatial correlation (GSC) unit and a feature squeeze unit.
The discriminator computes the deviation between the probability distributions of the enhanced point cloud and the original point cloud, guiding the generator to achieve high quality reconstruction.
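To make the "deviation between probability distributions" concrete, the snippet below computes a per-channel 1D Wasserstein-1 distance between enhanced and original attributes by sorting, which is one simple optimal-transport style discrepancy; it is a hand-rolled surrogate for illustration only, not the learned discriminator used in PCE-GAN.

```python
# Illustrative optimal-transport style discrepancy between attribute
# distributions: the 1D Wasserstein-1 distance per color channel, computed by
# sorting (exact for equal-size 1D samples).
import numpy as np

def wasserstein1_per_channel(enhanced, original):
    """enhanced, original: (N, C) attribute arrays with the same N."""
    a = np.sort(enhanced, axis=0)
    b = np.sort(original, axis=0)
    return np.mean(np.abs(a - b), axis=0)   # one W1 value per channel

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    original = rng.normal(0.5, 0.1, size=(1000, 3))
    enhanced = original + rng.normal(0.0, 0.02, size=(1000, 3))
    print(wasserstein1_per_channel(enhanced, original))
```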
arXiv Detail & Related papers (2025-02-26T07:34:33Z)
- Implicit Neural Compression of Point Clouds [58.45774938982386]
NeRC$^3$ is a novel point cloud compression framework leveraging implicit neural representations to handle both geometry and attributes.
For dynamic point clouds, 4D-NeRC$^3$ demonstrates superior geometry compression compared to the state-of-the-art G-PCC and V-PCC standards.
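The core of an implicit neural representation is a coordinate network whose weights are what gets stored. The toy PyTorch sketch below maps xyz coordinates to an occupancy logit and RGB values; the architecture, the sizes, and the omission of positional encoding and entropy coding are simplifications for illustration, not NeRC$^3$'s actual design.

```python
# Toy coordinate MLP: xyz -> (occupancy logit, RGB). "Compression" amounts to
# storing the (quantized) network weights instead of the points themselves.
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),   # [occupancy logit, R, G, B]
        )

    def forward(self, xyz):
        out = self.net(xyz)
        return out[:, :1], torch.sigmoid(out[:, 1:])  # logit, colors in [0, 1]

if __name__ == "__main__":
    model = CoordinateMLP()
    xyz = torch.rand(4096, 3)
    occ_logit, rgb = model(xyz)
    print(occ_logit.shape, rgb.shape)                       # (4096, 1), (4096, 3)
    print("weights to store:", sum(p.numel() for p in model.parameters()))
```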
arXiv Detail & Related papers (2024-12-11T03:22:00Z)
- Rendering-Oriented 3D Point Cloud Attribute Compression using Sparse Tensor-based Transformer [52.40992954884257]
3D visualization techniques have fundamentally transformed how we interact with digital content.
The massive data size of point clouds presents significant challenges for data compression.
We propose an end-to-end deep learning framework that seamlessly integrates PCAC with differentiable rendering.
arXiv Detail & Related papers (2024-11-12T16:12:51Z)
- Decoupling Fine Detail and Global Geometry for Compressed Depth Map Super-Resolution [55.9977636042469]
We propose a novel framework, termed geometry-decoupled network (GDNet), for compressed depth map super-resolution.
It decouples the high-quality depth map reconstruction process by handling global and detailed geometric features separately.
Our solution significantly outperforms current methods in terms of geometric consistency and detail recovery.
arXiv Detail & Related papers (2024-11-05T16:37:30Z)
- Att2CPC: Attention-Guided Lossy Attribute Compression of Point Clouds [18.244200436103156]
We propose an efficient attention-based method for lossy compression of point cloud attributes, leveraging an autoencoder architecture.
Experiments show that our method achieves average BD-PSNR improvements of 1.15 dB and 2.13 dB on the Y channel and YUV channels, respectively.
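BD-PSNR figures such as these are computed with the Bjøntegaard metric: fit PSNR as a cubic polynomial of log bit-rate for both codecs and average the gap over the overlapping rate range. A minimal NumPy sketch follows; the rate/PSNR numbers in the example are made up purely to show the calculation.

```python
# Sketch of the Bjontegaard-delta PSNR (BD-PSNR): cubic fit of PSNR vs. log10
# rate for two codecs, then the average vertical gap over the shared range.
import numpy as np
P = np.polynomial.polynomial  # polyfit / polyint / polyval helpers

def bd_psnr(rate_a, psnr_a, rate_b, psnr_b):
    """Average PSNR gain (dB) of codec B over codec A."""
    la, lb = np.log10(rate_a), np.log10(rate_b)
    ca, cb = P.polyfit(la, psnr_a, 3), P.polyfit(lb, psnr_b, 3)
    lo, hi = max(la.min(), lb.min()), min(la.max(), lb.max())
    ia, ib = P.polyint(ca), P.polyint(cb)
    avg_a = (P.polyval(hi, ia) - P.polyval(lo, ia)) / (hi - lo)
    avg_b = (P.polyval(hi, ib) - P.polyval(lo, ib)) / (hi - lo)
    return avg_b - avg_a

if __name__ == "__main__":
    rate_a = np.array([100, 200, 400, 800])        # kbps, anchor (made up)
    psnr_a = np.array([30.0, 33.0, 36.0, 38.5])
    rate_b = np.array([100, 200, 400, 800])        # kbps, tested (made up)
    psnr_b = np.array([31.2, 34.1, 37.0, 39.4])
    print(f"BD-PSNR: {bd_psnr(rate_a, psnr_a, rate_b, psnr_b):.2f} dB")
```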
arXiv Detail & Related papers (2024-10-23T12:32:21Z)
- Hierarchical Prior-based Super Resolution for Point Cloud Geometry Compression [39.052583172727324]
The Geometry-based Point Cloud Compression (G-PCC) has been developed by the Moving Picture Experts Group to compress point clouds.
This paper proposes a hierarchical prior-based super resolution method for point cloud geometry compression.
arXiv Detail & Related papers (2024-02-17T11:15:38Z)
- Geometric Prior Based Deep Human Point Cloud Geometry Compression [67.49785946369055]
We leverage the human geometric prior in geometry redundancy removal of point clouds.
We can envisage high-resolution human point clouds as a combination of geometric priors and structural deviations.
The proposed framework can operate in a plug-and-play fashion with existing learning-based point cloud compression methods.
arXiv Detail & Related papers (2023-05-02T10:35:20Z)
- GQE-Net: A Graph-based Quality Enhancement Network for Point Cloud Color Attribute [51.4803148196217]
We propose a graph-based quality enhancement network (GQE-Net) to reduce color distortion in point clouds.
GQE-Net uses geometry information as an auxiliary input and graph convolution blocks to extract local features efficiently.
Experimental results show that our method achieves state-of-the-art performance.
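As an illustration of graph convolution guided by geometry, the sketch below builds a kNN graph from point coordinates and aggregates color features with a shared MLP in an EdgeConv-like block; the block structure, channel sizes, and k are assumptions and much simpler than GQE-Net itself.

```python
# Hypothetical kNN graph feature block: geometry defines the graph, attributes
# (colors) are the features aggregated over each neighborhood.
import torch
import torch.nn as nn

class KNNGraphBlock(nn.Module):
    def __init__(self, in_ch=3, out_ch=32, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_ch, out_ch), nn.ReLU())

    def forward(self, xyz, feat):
        # xyz: (N, 3) geometry used only to define the graph
        # feat: (N, C) per-point attributes (e.g., colors)
        idx = torch.cdist(xyz, xyz).topk(self.k, largest=False).indices  # (N, k)
        center = feat.unsqueeze(1).expand(-1, self.k, -1)                # (N, k, C)
        neighbor = feat[idx]                                             # (N, k, C)
        edge = torch.cat([center, neighbor - center], dim=-1)            # (N, k, 2C)
        return self.mlp(edge).max(dim=1).values                          # (N, out_ch)

if __name__ == "__main__":
    xyz, rgb = torch.rand(1024, 3), torch.rand(1024, 3)
    print(KNNGraphBlock()(xyz, rgb).shape)   # torch.Size([1024, 32])
```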
arXiv Detail & Related papers (2023-03-24T02:33:45Z)
- Multiscale Point Cloud Geometry Compression [29.605320327889142]
We propose a multiscale end-to-end learning framework which hierarchically reconstructs the 3D Point Cloud Geometry.
The framework is developed on top of a sparse convolution based autoencoder for point cloud compression and reconstruction.
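To show the multiscale autoencoder structure in miniature, the following toy PyTorch model downsamples an occupancy grid with stride-2 convolutions and then upsamples it to predict occupancy probabilities; it uses dense Conv3d as a stand-in for the sparse convolutions and omits entropy coding, so it is only a structural sketch, not the paper's model.

```python
# Toy dense-voxel stand-in for a multiscale geometry autoencoder: stride-2
# downsampling to a coarse latent, transposed convolutions back to occupancy
# logits, trained with a binary cross-entropy reconstruction loss.
import torch
import torch.nn as nn

class ToyGeometryAutoencoder(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, occupancy):
        latent = self.encoder(occupancy)        # 4x downsampled latent
        return self.decoder(latent), latent     # occupancy logits, latent

if __name__ == "__main__":
    occ = (torch.rand(1, 1, 64, 64, 64) > 0.9).float()
    logits, latent = ToyGeometryAutoencoder()(occ)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, occ)
    print(latent.shape, logits.shape, loss.item())
```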
arXiv Detail & Related papers (2020-11-07T16:11:16Z)
- Improved Deep Point Cloud Geometry Compression [10.936043362876651]
We propose a set of contributions to improve deep point cloud compression.
An optimal combination of the proposed improvements achieves BD-PSNR gains over G-PCC trisoup and octree of 5.50 (6.48) dB and 6.84 (5.95) dB, respectively.
arXiv Detail & Related papers (2020-06-16T10:03:14Z)