4DAC: Learning Attribute Compression for Dynamic Point Clouds
- URL: http://arxiv.org/abs/2204.11723v1
- Date: Mon, 25 Apr 2022 15:30:06 GMT
- Title: 4DAC: Learning Attribute Compression for Dynamic Point Clouds
- Authors: Guangchi Fang, Qingyong Hu, Yiling Xu, Yulan Guo
- Abstract summary: We study the attribute (e.g., color) compression of dynamic point clouds and present a learning-based framework, termed 4DAC.
To reduce temporal redundancy within data, we first build the 3D motion estimation and motion compensation modules with deep neural networks.
In addition, we also propose a deep conditional entropy model to estimate the probability distribution of the transformed coefficients.
- Score: 37.447460254690135
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the development of 3D data acquisition facilities, the increasing
scale of acquired 3D point clouds poses a challenge to existing data
compression techniques. Although promising performance has been achieved in
static point cloud compression, it remains under-explored and challenging to
leverage temporal correlations within a point cloud sequence for effective
dynamic point cloud compression. In this paper, we study the attribute (e.g.,
color) compression of dynamic point clouds and present a learning-based
framework, termed 4DAC. To reduce temporal redundancy within data, we first
build the 3D motion estimation and motion compensation modules with deep neural
networks. Then, the attribute residuals produced by the motion compensation
component are encoded by the region adaptive hierarchical transform into
residual coefficients. In addition, we also propose a deep conditional entropy
model to estimate the probability distribution of the transformed coefficients,
by incorporating temporal context from consecutive point clouds and the motion
estimation/compensation modules. Finally, the data stream is losslessly entropy
coded with the predicted distribution. Extensive experiments on several public
datasets demonstrate the superior compression performance of the proposed
approach.
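As a rough sketch of the transform and entropy-coding stages described in the abstract, the snippet below applies one butterfly of the region adaptive hierarchical transform to the motion-compensated attribute residuals of two sibling octree nodes and estimates the bits needed to code the resulting high-pass coefficients under a Gaussian. In 4DAC the Gaussian parameters would be predicted by the deep conditional entropy model from temporal context; here the function names, toy residual values, and the fixed mean/scale are illustrative assumptions only, not the authors' implementation.

```python
import numpy as np
from math import erf, sqrt, log2

def raht_butterfly(a1, a2, w1, w2):
    """One step of the region adaptive hierarchical transform (RAHT):
    a weight-dependent Haar butterfly over the attributes of two sibling
    octree nodes, where w1 and w2 are the numbers of points they cover."""
    s = sqrt(w1 + w2)
    dc = (sqrt(w1) * a1 + sqrt(w2) * a2) / s   # low-pass, passed up to the parent node
    ac = (sqrt(w1) * a2 - sqrt(w2) * a1) / s   # high-pass, quantized and entropy coded
    return dc, ac, w1 + w2

def gaussian_bits(coeff, mu, sigma, q_step=1.0):
    """Approximate bits for one quantized coefficient under a Gaussian;
    in 4DAC, mu and sigma would come from the learned conditional entropy model."""
    cdf = lambda x: 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))
    p = max(cdf(coeff + q_step / 2) - cdf(coeff - q_step / 2), 1e-9)
    return -log2(p)

# Toy RGB attribute residuals of two sibling leaf nodes after motion compensation.
res_a = np.array([4.0, -2.0, 1.0])
res_b = np.array([3.0, -1.0, 0.0])
dc, ac, weight = raht_butterfly(res_a, res_b, w1=3, w2=1)
rate = sum(gaussian_bits(round(c), mu=0.0, sigma=2.0) for c in ac)
print(dc, ac, weight, f"{rate:.2f} bits")
```

The weight-dependent butterfly is what makes the transform region adaptive: nodes covering more points contribute more to the low-pass coefficient that propagates up the octree, while the high-pass residual coefficients go to the entropy coder.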
Related papers
- Dynamic 3D Point Cloud Sequences as 2D Videos [81.46246338686478]
3D point cloud sequences serve as one of the most common and practical representation modalities of real-world environments.
We propose a novel generic representation called Structured Point Cloud Videos (SPCVs).
SPCVs re-organize a point cloud sequence as a 2D video with spatial smoothness and temporal consistency, where the pixel values correspond to the 3D coordinates of points.
arXiv Detail & Related papers (2024-03-02T08:18:57Z) - Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z) - Learning Dynamic Point Cloud Compression via Hierarchical Inter-frame
Block Matching [35.80653765524654]
3D dynamic point cloud (DPC) compression relies on mining its temporal context.
This paper proposes a learning-based DPC compression framework via a hierarchical block-matching-based inter-prediction module.
arXiv Detail & Related papers (2023-05-09T11:44:13Z) - StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z) - D-DPCC: Deep Dynamic Point Cloud Compression via 3D Motion Prediction [18.897023700334458]
This paper proposes a novel 3D sparse convolution-based Deep Dynamic Point Cloud Compression network.
It compensates and compresses the DPC geometry with 3D motion estimation and motion compensation in the feature space.
The experimental result shows that the proposed D-DPCC framework achieves an average 76% BD-Rate (Bjontegaard Delta Rate) gain against the state-of-the-art Video-based Point Cloud Compression (V-PCC) v13 in inter mode (see the BD-Rate sketch after this list).
arXiv Detail & Related papers (2022-05-02T18:10:45Z) - 3DAC: Learning Attribute Compression for Point Clouds [35.78404985164711]
We study the problem of attribute compression for large-scale unstructured 3D point clouds.
We introduce a deep compression network, termed 3DAC, to explicitly compress the attributes of 3D point clouds.
arXiv Detail & Related papers (2022-03-17T09:42:36Z) - Variable Rate Compression for Raw 3D Point Clouds [5.107705550575662]
We propose a novel variable rate deep compression architecture that operates on raw 3D point cloud data.
Our network is capable of explicitly processing point clouds and generating a compressed description.
arXiv Detail & Related papers (2022-02-28T15:15:39Z) - A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud
Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z) - Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation
and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
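The D-DPCC entry above reports its gains in BD-Rate. As a point of reference, the snippet below is a minimal sketch of the standard Bjontegaard computation: fit a cubic of log-bitrate over PSNR for each codec, integrate both fits over the shared quality range, and convert the average gap into a percentage bitrate difference. The rate-distortion points are made up purely for illustration; a negative result means the test codec needs fewer bits than the anchor at the same quality.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard Delta Rate: average bitrate difference (%) between two
    rate-distortion curves over their overlapping PSNR range."""
    # Cubic fit of log-rate as a function of quality for each codec.
    p_anchor = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_test = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    # Integrate each fitted curve over the common quality interval.
    int_anchor = np.polyval(np.polyint(p_anchor), hi) - np.polyval(np.polyint(p_anchor), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_anchor) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0

# Hypothetical RD points (bitrate in bits per point, PSNR in dB), for illustration only.
anchor_rate, anchor_psnr = [0.10, 0.20, 0.40, 0.80], [30.0, 33.0, 36.0, 39.0]
test_rate, test_psnr = [0.07, 0.15, 0.30, 0.60], [30.0, 33.0, 36.0, 39.0]
print(f"BD-Rate: {bd_rate(anchor_rate, anchor_psnr, test_rate, test_psnr):.1f}%")
```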