Efficient and Generic Point Model for Lossless Point Cloud Attribute Compression
- URL: http://arxiv.org/abs/2404.06936v1
- Date: Wed, 10 Apr 2024 11:40:02 GMT
- Title: Efficient and Generic Point Model for Lossless Point Cloud Attribute Compression
- Authors: Kang You, Pan Gao, Zhan Ma
- Abstract summary: PoLoPCAC is an efficient and generic PCAC method that achieves high compression efficiency and strong generalizability simultaneously.
Our method can be instantly deployed once trained on a Synthetic 2k-ShapeNet dataset.
Experiments show that our method achieves consistent bit-rate reductions over the latest G-PCCv23 on various datasets.
- Score: 28.316347464011056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The past several years have witnessed the emergence of learned point cloud compression (PCC) techniques. However, current learning-based lossless point cloud attribute compression (PCAC) methods either suffer from high computational complexity or deteriorated compression performance. Moreover, the significant variations in point cloud scale and sparsity encountered in real-world applications make developing an all-in-one neural model a challenging task. In this paper, we propose PoLoPCAC, an efficient and generic lossless PCAC method that achieves high compression efficiency and strong generalizability simultaneously. We formulate lossless PCAC as the task of inferring explicit distributions of attributes from group-wise autoregressive priors. A progressive random grouping strategy is first devised to efficiently resolve the point cloud into groups, and then the attributes of each group are modeled sequentially from accumulated antecedents. A locality-aware attention mechanism is utilized to exploit prior knowledge from context windows in parallel. Since our method directly operates on points, it naturally avoids distortion caused by voxelization and can be executed on point clouds with arbitrary scale and density. Experiments show that our method can be instantly deployed once trained on a Synthetic 2k-ShapeNet dataset while enjoying continuous bit-rate reduction over the latest G-PCCv23 on various datasets (ShapeNet, ScanNet, MVUB, 8iVFB). Meanwhile, our method reports shorter coding time than G-PCCv23 on the majority of sequences with a lightweight model size (2.6MB), which is highly attractive for practical applications. Dataset, code and trained model are available at https://github.com/I2-Multimedia-Lab/PoLoPCAC.
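The grouping-plus-autoregression pipeline is easy to sketch. The toy Python below (the doubling group schedule, the 256-symbol attribute alphabet, and the `predict` stand-in are illustrative assumptions, not the authors' code) draws a progressive random grouping and totals the code length when each group is entropy-coded under distributions predicted from all previously coded points:

```python
import numpy as np

rng = np.random.default_rng(0)

def progressive_random_groups(num_points, num_groups=8):
    """Randomly permute point indices and split them into progressively
    larger groups (a doubling schedule, assumed here for illustration)."""
    perm = rng.permutation(num_points)
    raw = 2.0 ** np.arange(num_groups)
    sizes = np.floor(raw / raw.sum() * num_points).astype(int)
    sizes[-1] += num_points - sizes.sum()     # absorb rounding error
    return np.split(perm, np.cumsum(sizes)[:-1])

def groupwise_code_length(attrs, groups, predict):
    """Total bits when each group is entropy-coded with distributions
    predicted from all previously coded points ('accumulated antecedents').
    `predict(coded_idx, target_idx)` stands in for the learned network and
    returns a (len(target_idx), 256) row-stochastic matrix over symbols."""
    coded = groups[0]
    bits = 8.0 * len(coded)                   # seed group: coded raw
    for g in groups[1:]:
        probs = predict(coded, g)             # per-point pmf over 0..255
        bits += -np.log2(probs[np.arange(len(g)), attrs[g]]).sum()
        coded = np.concatenate([coded, g])
    return bits

N = 1000
attrs = rng.integers(0, 256, size=N)          # one 8-bit attribute channel
groups = progressive_random_groups(N)
uniform = lambda ctx, tgt: np.full((len(tgt), 256), 1 / 256)
print(groupwise_code_length(attrs, groups, uniform) / N, "bits per point")
```

With the uniform stand-in model this degenerates to 8 bits per point; the learned locality-aware attention model is what pushes the predicted distributions below that baseline.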
Related papers
- Att2CPC: Attention-Guided Lossy Attribute Compression of Point Clouds [18.244200436103156]
We propose an efficient attention-based method for lossy compression of point cloud attributes, leveraging an autoencoder architecture.
Experiments show that our method achieves average BD-PSNR improvements of 1.15 dB and 2.13 dB on the Y channel and YUV channels, respectively.
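As a sketch of the building block such attention-guided coding rests on, here is a generic scaled dot-product attention over a point's k nearest neighbors (a textbook kernel, not the paper's exact module):

```python
import numpy as np

def knn_attention(query, neighbors):
    """Scaled dot-product attention aggregating a point's k-NN features.
    query: (d,) feature of the target point; neighbors: (k, d)."""
    d = query.shape[-1]
    scores = neighbors @ query / np.sqrt(d)   # (k,) similarity logits
    w = np.exp(scores - scores.max())
    w /= w.sum()                              # softmax weights
    return w @ neighbors                      # attention-weighted feature

rng = np.random.default_rng(1)
out = knn_attention(rng.normal(size=16), rng.normal(size=(8, 16)))
print(out.shape)                              # (16,)
```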
arXiv Detail & Related papers (2024-10-23T12:32:21Z)
- Point Cloud Compression with Bits-back Coding [32.9521748764196]
This paper uses a deep learning-based probabilistic model to estimate the Shannon entropy of the point cloud information.
Once the entropy of the point cloud dataset is estimated, we use the learned CVAE model to compress the geometric attributes of the point clouds.
The novelty of our bits-back coding method lies in utilizing the learned latent variable model of the CVAE to compress the point cloud data.
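The bits-back accounting itself is compact: the encoder draws z from the posterior q(z|x) using previously transmitted bits, pays for z under the prior and for x under the likelihood, and the decoder recovers the borrowed bits, so the net rate equals the negative ELBO. A minimal sketch with hypothetical log-probabilities (in nats):

```python
import numpy as np

def bits_back_rate(log_p_x_given_z, log_p_z, log_q_z_given_x):
    """Net bits-back code length: pay -log p(z) - log p(x|z), then get
    back -log q(z|x) bits when the decoder re-derives the sample of z."""
    nats = -(log_p_x_given_z + log_p_z) + log_q_z_given_x
    return nats / np.log(2.0)

# hypothetical log-probabilities (nats) for one point cloud block
print(bits_back_rate(-120.0, -15.0, -9.0), "bits")  # = -ELBO in bits
```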
arXiv Detail & Related papers (2024-10-09T06:34:48Z)
- LoRC: Low-Rank Compression for LLMs KV Cache with a Progressive Compression Strategy [59.1298692559785]
The Key-Value (KV) cache is a crucial component in serving transformer-based autoregressive large language models (LLMs).
Existing approaches to mitigate its memory footprint include: (1) efficient attention variants integrated in upcycling stages; and (2) KV cache compression at test time.
We propose a low-rank approximation of KV weight matrices, allowing plug-in integration with existing transformer-based LLMs without model retraining.
Our method is designed to function without model tuning in upcycling stages or task-specific profiling in test stages.
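Since the summary describes a low-rank approximation of the KV weight matrices, the core operation is presumably a truncated SVD; the sketch below (shapes and rank are assumptions) factorizes one projection weight into thin factors, so cached keys/values can live in the small rank-dimensional space:

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Split a projection weight W (d_model x d_head) into thin factors
    A @ B that keep only the top `rank` singular directions."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]       # (d_model, rank)
    B = Vt[:rank, :]                 # (rank, d_head)
    return A, B

rng = np.random.default_rng(2)
W = rng.normal(size=(512, 64))       # a hypothetical key/value projection
A, B = low_rank_factorize(W, rank=16)
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative error
```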
arXiv Detail & Related papers (2024-10-04T03:10:53Z)
- SPAC: Sampling-based Progressive Attribute Compression for Dense Point Clouds [51.313922535437726]
We propose an end-to-end compression method for dense point clouds.
The proposed method combines a frequency sampling module, an adaptive scale feature extraction module with geometry assistance, and a global hyperprior entropy model.
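A hyperprior entropy model, in the usual sense, codes quantized latents under per-element Gaussian scales predicted from a small side channel. The toy rate estimate below assumes that setup (the scales are handed in directly rather than produced by a hyper-decoder):

```python
import numpy as np
from math import erf, sqrt

def gaussian_cdf(x, scale):
    """CDF of a zero-mean Gaussian with the given scale."""
    return 0.5 * (1.0 + erf(x / (scale * sqrt(2.0))))

def hyperprior_rate(latents, scales):
    """Estimated bits to code integer-rounded latents, each under a
    zero-mean Gaussian whose scale the hyper-decoder would predict."""
    bits = 0.0
    for y, s in zip(latents.ravel(), scales.ravel()):
        q = round(float(y))
        p = gaussian_cdf(q + 0.5, s) - gaussian_cdf(q - 0.5, s)
        bits -= np.log2(max(p, 1e-9))
    return bits

rng = np.random.default_rng(3)
y = rng.normal(scale=2.0, size=100)
print(hyperprior_rate(y, np.full(100, 2.0)) / y.size, "bits per latent")
```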
arXiv Detail & Related papers (2024-09-16T13:59:43Z)
- Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models [64.49254199311137]
We propose a novel Instance-aware Dynamic Prompt Tuning (IDPT) strategy for pre-trained point cloud models.
The essence of IDPT is to develop a dynamic prompt generation module to perceive semantic prior features of each point cloud instance.
In experiments, IDPT outperforms full fine-tuning in most tasks with a mere 7% of the trainable parameters.
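A minimal sketch of what a dynamic prompt generation module could look like, assuming a max-pooled instance summary fed through a small MLP that emits prompt tokens (shapes, pooling, and layer count are all guesses, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(4)

def dynamic_prompt(point_feats, W1, b1, W2, b2, num_tokens=4):
    """Instance-conditioned prompt tokens: a global max-pool summarizes
    the point cloud, and a 2-layer MLP emits num_tokens prompt vectors."""
    g = point_feats.max(axis=0)               # (d,) instance summary
    h = np.maximum(W1 @ g + b1, 0.0)          # hidden layer, ReLU
    return (W2 @ h + b2).reshape(num_tokens, -1)

d, hidden, T = 32, 64, 4
W1, b1 = 0.1 * rng.normal(size=(hidden, d)), np.zeros(hidden)
W2, b2 = 0.1 * rng.normal(size=(T * d, hidden)), np.zeros(T * d)
prompts = dynamic_prompt(rng.normal(size=(1024, d)), W1, b1, W2, b2, T)
print(prompts.shape)   # (4, 32), prepended to the frozen model's tokens
```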
arXiv Detail & Related papers (2023-04-14T16:03:09Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- Deep probabilistic model for lossless scalable point cloud attribute compression [2.2559617939136505]
We build an end-to-end point cloud attribute coding method (MNeT) that progressively projects the attributes onto multiscale latent spaces.
We validate our method on a set of point clouds from MVUB and MPEG and show that our method outperforms recently proposed methods and is on par with the latest G-PCC version 14.
arXiv Detail & Related papers (2023-03-11T23:39:30Z)
- ECM-OPCC: Efficient Context Model for Octree-based Point Cloud Compression [6.509720419113212]
We propose a sufficient yet efficient context model and design an efficient deep learning codec for point clouds.
Specifically, we first propose a window-constrained multi-group coding strategy to exploit the autoregressive context.
We also propose a dual transformer architecture to utilize the dependency of the current node on its ancestors and siblings.
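A toy schedule illustrates the window-constrained multi-group idea: nodes inside a window are split into groups, each group decodes in parallel, and later groups condition on earlier ones. The round-robin split below is an assumption for illustration, and the ancestor/sibling conditioning via the dual transformer is not modeled:

```python
import numpy as np

def window_multigroup_schedule(num_nodes, window=64, groups=4):
    """Assign each node index a (window, group) pair: all nodes in the
    same group of a window are decoded in parallel, conditioned on the
    earlier groups of that window."""
    idx = np.arange(num_nodes)
    win = idx // window
    grp = idx % groups                     # round-robin split (an assumption)
    return [(w, g, idx[(win == w) & (grp == g)])
            for w in np.unique(win) for g in range(groups)]

for w, g, nodes in window_multigroup_schedule(10, window=4, groups=2):
    print(f"window {w} group {g}: decode nodes {nodes.tolist()} in parallel")
```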
arXiv Detail & Related papers (2022-11-20T09:20:32Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed an RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
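The random-feature trick is standard: for the one-hidden-layer ReLU NNGP kernel K(x, x') = E_w[ReLU(w·x) ReLU(w·x')], D random ReLU projections yield explicit feature maps whose inner products estimate the kernel, so regression runs on finite features instead of an N×N kernel matrix. A minimal sketch (not the paper's full distillation pipeline):

```python
import numpy as np

rng = np.random.default_rng(5)

def relu_random_features(X, W):
    """phi(x) = ReLU(W x) / sqrt(D): inner products of these D-dimensional
    features are Monte Carlo estimates of the ReLU NNGP kernel."""
    return np.maximum(X @ W.T, 0.0) / np.sqrt(W.shape[0])

d, D = 10, 4096                      # input dim, number of random features
W = rng.normal(size=(D, d))          # w_i ~ N(0, I)
X = rng.normal(size=(5, d))
Phi = relu_random_features(X, W)
K_rfa = Phi @ Phi.T                  # approximate 5x5 kernel matrix
print(K_rfa[0, 0], X[0] @ X[0] / 2)  # diagonal should approach ||x||^2 / 2
```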
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- An Information Theory-inspired Strategy for Automatic Network Pruning [88.51235160841377]
Deep convolutional neural networks typically must be compressed for deployment on devices with resource constraints.
Most existing network pruning methods require laborious human effort and prohibitive computational resources.
We propose an information theory-inspired strategy for automatic model compression.
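One concrete stand-in for such a criterion (a toy illustration, not necessarily the paper's measure) scores each channel by the entropy of its activation histogram and keeps the most informative ones:

```python
import numpy as np

def channel_entropy(acts, bins=32):
    """Shannon entropy (bits) of one channel's activation histogram."""
    hist, _ = np.histogram(acts, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def prune_by_entropy(feature_maps, keep_ratio=0.5):
    """feature_maps: (C, N) activations per channel. Keep the channels
    whose activation distributions carry the most entropy."""
    scores = np.array([channel_entropy(c) for c in feature_maps])
    keep = np.argsort(scores)[::-1][: int(len(scores) * keep_ratio)]
    return np.sort(keep)

rng = np.random.default_rng(6)
fm = np.vstack([rng.normal(size=1000),           # informative channel
                np.zeros(1000),                  # dead channel
                rng.normal(scale=0.01, size=1000)])
print(prune_by_entropy(fm, keep_ratio=2 / 3))    # dead channel dropped
```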
arXiv Detail & Related papers (2021-08-19T07:03:22Z)
- Multiscale Point Cloud Geometry Compression [29.605320327889142]
We propose a multiscale end-to-end learning framework that hierarchically reconstructs the 3D point cloud geometry.
The framework is developed on top of a sparse convolution based autoencoder for point cloud compression and reconstruction.
arXiv Detail & Related papers (2020-11-07T16:11:16Z)