Multiscale Latent-Guided Entropy Model for LiDAR Point Cloud Compression
- URL: http://arxiv.org/abs/2209.12512v1
- Date: Mon, 26 Sep 2022 08:36:11 GMT
- Title: Multiscale Latent-Guided Entropy Model for LiDAR Point Cloud Compression
- Authors: Tingyu Fan, Linyao Gao, Yiling Xu, Dong Wang and Zhu Li
- Abstract summary: The non-uniform distribution and extremely sparse nature of the LiDAR point cloud (LPC) pose significant challenges to its efficient compression.
This paper proposes a novel end-to-end, fully-factorized deep framework that encodes the original LPC into an octree structure and hierarchically decomposes the octree entropy model into layers.
- Score: 18.897023700334458
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The non-uniform distribution and extremely sparse nature of the LiDAR point
cloud (LPC) pose significant challenges to its efficient compression.
This paper proposes a novel end-to-end, fully-factorized deep framework that
encodes the original LPC into an octree structure and hierarchically decomposes
the octree entropy model into layers. The proposed framework utilizes a
hierarchical latent variable as side information to encapsulate sibling and
ancestor dependencies, which provides sufficient context for modelling the
point cloud distribution while enabling the parallel encoding and decoding of
octree nodes within the same layer. In addition, we propose a residual coding
framework for the compression of the latent variable, which exploits the
spatial correlation of each layer by progressive downsampling and models the
corresponding residual with a fully-factorized entropy model. Furthermore, we
propose soft addition and subtraction operators for residual coding to improve
network flexibility. Comprehensive experiments on the LiDAR benchmark
SemanticKITTI and the MPEG-specified Ford dataset demonstrate that the proposed
framework achieves state-of-the-art performance among previous LPC compression
frameworks. Moreover, experiments show that our end-to-end, fully-factorized
framework is highly parallelizable and time-efficient, saving more than 99.8%
of the decoding time compared with previous state-of-the-art LPC compression
methods.
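To make the pipeline concrete, here is a minimal sketch of the octree decomposition the abstract describes: each layer of the tree yields one 8-bit occupancy symbol per occupied node, and all symbols in a layer can then be entropy-coded in parallel, conditioned on that layer's latent side information. The function below is illustrative only, not the paper's implementation; it assumes the input has already been voxelized to integer coordinates in [0, 2^depth).

```python
import numpy as np

def octree_layers(coords: np.ndarray, depth: int):
    """Decompose voxelized integer coordinates (N, 3) into per-layer
    occupancy symbols: one 8-bit child mask per occupied node."""
    nodes = np.unique(coords.astype(np.int64), axis=0)  # occupied leaf voxels
    layers = []
    for _ in range(depth):
        parents, child = np.divmod(nodes, 2)            # parent cell + octant bits
        octant = child[:, 0] * 4 + child[:, 1] * 2 + child[:, 2]
        parents, inv = np.unique(parents, axis=0, return_inverse=True)
        mask = np.zeros(len(parents), dtype=np.uint8)
        np.bitwise_or.at(mask, inv, np.uint8(1) << octant.astype(np.uint8))
        layers.append(mask)    # all symbols of this layer: codable in parallel
        nodes = parents
    return layers[::-1]        # root (coarsest) layer first
```

The abstract also mentions soft addition and subtraction for residual coding but does not define the operators; one plausible reading, purely an assumption, is a learned gate that scales the coarse prediction before the hard add/subtract. Computing the gate only from the upsampled coarse latent keeps the encoder and decoder in sync:

```python
import torch
import torch.nn as nn

class SoftResidual(nn.Module):
    """Hypothetical soft add/subtract for residual coding (an assumption,
    not the paper's definition): a learned gate scales the coarse latent.
    Latents are assumed to be shaped (batch, channels, num_nodes)."""
    def __init__(self, channels: int):
        super().__init__()
        # The gate depends only on the coarse latent, which both the encoder
        # and decoder possess, so the two operations exactly invert each other.
        self.gate = nn.Sequential(nn.Conv1d(channels, channels, 1), nn.Sigmoid())

    def soft_subtract(self, fine, coarse_up):
        # Encoder side: residual = fine - g * coarse
        return fine - self.gate(coarse_up) * coarse_up

    def soft_add(self, residual, coarse_up):
        # Decoder side: fine = residual + g * coarse
        return residual + self.gate(coarse_up) * coarse_up
```

Before quantization, `soft_add(soft_subtract(f, c), c)` recovers `f` exactly, while the gate lets the network learn how much of the coarse prediction to remove at each channel and position.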
Related papers
- Hierarchical Attention Networks for Lossless Point Cloud Attribute Compression [22.234604407822673]
We propose a deep hierarchical attention context model for attribute compression of point clouds.
A simple and effective Level of Detail (LoD) structure is introduced to yield a coarse-to-fine representation.
Points within the same refinement level are encoded in parallel, sharing a common context point group.
arXiv Detail & Related papers (2025-04-01T07:14:10Z)
- Contextual Compression Encoding for Large Language Models: A Novel Framework for Multi-Layered Parameter Space Pruning [0.0]
Contextual Compression Encoding (CCE) introduced a multi-stage encoding mechanism that dynamically restructured parameter distributions.
CCE retained linguistic expressivity and coherence, maintaining accuracy across a range of text generation and classification tasks.
arXiv Detail & Related papers (2025-02-12T11:44:19Z)
- Choose Your Model Size: Any Compression by a Single Gradient Descent [9.074689052563878]
We present Any Compression via Iterative Pruning (ACIP).
ACIP is an algorithmic approach to determine a compression-performance trade-off from a single gradient descent run.
We show that ACIP seamlessly complements common quantization-based compression techniques.
arXiv Detail & Related papers (2025-02-03T18:40:58Z)
- Decoupled Sparse Priors Guided Diffusion Compression Model for Point Clouds [26.32608616696905]
Lossy compression methods rely on an autoencoder to transform a point cloud into latent points for storage.
We propose a sparse priors guided method that achieves high reconstruction quality, especially at high compression ratios.
arXiv Detail & Related papers (2024-11-21T05:41:35Z)
- Generalized Nested Latent Variable Models for Lossy Coding applied to Wind Turbine Scenarios [14.48369551534582]
A learning-based approach seeks to optimize the trade-off between compression rate and reconstructed image quality.
A successful technique consists in introducing a deep hyperprior that operates within a 2-level nested latent variable model.
This paper extends this concept by designing a generalized L-level nested generative model with a Markov chain structure.
arXiv Detail & Related papers (2024-06-10T11:00:26Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Learning Dynamic Point Cloud Compression via Hierarchical Inter-frame Block Matching [35.80653765524654]
3D dynamic point cloud (DPC) compression relies on mining its temporal context.
This paper proposes a learning-based DPC compression framework with a hierarchical block-matching-based inter-prediction module.
arXiv Detail & Related papers (2023-05-09T11:44:13Z)
- Sparsity-guided Network Design for Frame Interpolation [39.828644638174225]
We present a compression-driven network design for frame interpolation algorithms.
We leverage model pruning through sparsity-inducing optimization to greatly reduce the model size.
We achieve a considerable performance gain with a quarter of the size of the original AdaCoF.
arXiv Detail & Related papers (2022-09-09T23:13:25Z)
- Point Cloud Compression with Sibling Context and Surface Priors [47.96018990521301]
We present a novel octree-based multi-level framework for large-scale point cloud compression.
In this framework, we propose a new entropy model that explores the hierarchical dependency in an octree.
We locally fit surfaces with a voxel-based geometry-aware module to provide geometric priors in entropy encoding (a minimal sketch of such voxel-level surface fitting follows at the end of this list).
arXiv Detail & Related papers (2022-05-02T09:13:26Z)
- CAFE: Learning to Condense Dataset by Aligning Features [72.99394941348757]
We propose a novel scheme to Condense dataset by Aligning FEatures (CAFE).
At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales.
We validate the proposed CAFE across various datasets, and demonstrate that it generally outperforms the state of the art.
arXiv Detail & Related papers (2022-03-03T05:58:49Z)
- Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially on resource-limited devices.
Previous unstructured or structured weight pruning methods can hardly deliver real inference acceleration.
We propose a generalized weight unification framework at a hardware-compatible micro-structured level to achieve both high compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
- MuSCLE: Multi Sweep Compression of LiDAR using Deep Entropy Models [78.93424358827528]
We present a novel compression algorithm for reducing the storage of LiDAR sensor data streams.
Our method significantly reduces the joint geometry and intensity bitrate over prior state-of-the-art LiDAR compression methods.
arXiv Detail & Related papers (2020-11-15T17:41:14Z)
- A Generic Network Compression Framework for Sequential Recommender Systems [71.81962915192022]
Sequential recommender systems (SRS) have become the key technology in capturing users' dynamic interests and generating high-quality recommendations.
We propose a compressed sequential recommendation framework, termed CpRec, where two generic model shrinking techniques are employed.
Extensive ablation studies demonstrate that the proposed CpRec achieves 4-8x compression rates on real-world SRS datasets.
arXiv Detail & Related papers (2020-04-21T08:40:55Z)
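As referenced in the "Point Cloud Compression with Sibling Context and Surface Priors" entry above, surface priors can guide entropy coding by locally fitting geometry inside a voxel. The sketch below is an assumed illustration of that general idea (function names and usage are hypothetical, not the paper's code): fit a plane to the points in a voxel via PCA, then use point-to-plane distances as a geometric prior, e.g. to bias occupancy probabilities of child cells near the fitted surface.

```python
import numpy as np

def fit_voxel_plane(points: np.ndarray):
    """Fit a plane to the points inside one voxel via PCA.
    Assumes at least 3 non-collinear points; returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The normal is the direction of least variance through the point set,
    # i.e. the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return centroid, normal

def point_plane_distance(p: np.ndarray, centroid: np.ndarray, normal: np.ndarray) -> float:
    """Signed distance from a query point to the fitted plane; small values
    indicate cells lying on the local surface, a useful entropy-coding prior."""
    return float(np.dot(p - centroid, normal))
```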