ELiC: Efficient LiDAR Geometry Compression via Cross-Bit-depth Feature Propagation and Bag-of-Encoders
- URL: http://arxiv.org/abs/2511.14070v1
- Date: Tue, 18 Nov 2025 02:58:16 GMT
- Title: ELiC: Efficient LiDAR Geometry Compression via Cross-Bit-depth Feature Propagation and Bag-of-Encoders
- Authors: Junsik Kim, Gun Bang, Soowoong Kim
- Abstract summary: LiDAR compression encodes voxel occupancies from low to high bit-depths. We present ELiC, a real-time framework that combines cross-bit-depth feature propagation, a Bag-of-Encoders selection scheme, and a Morton-order-preserving hierarchy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hierarchical LiDAR geometry compression encodes voxel occupancies from low to high bit-depths, yet prior methods treat each depth independently and re-estimate local context from coordinates at every level, limiting compression efficiency. We present ELiC, a real-time framework that combines cross-bit-depth feature propagation, a Bag-of-Encoders (BoE) selection scheme, and a Morton-order-preserving hierarchy. Cross-bit-depth propagation reuses features extracted at denser, lower depths to support prediction at sparser, higher depths. BoE selects, per depth, the most suitable coding network from a small pool, adapting capacity to observed occupancy statistics without training a separate model for each level. The Morton hierarchy maintains global Z-order across depth transitions, eliminating per-level sorting and reducing latency. Together these components improve entropy modeling and computation efficiency, yielding state-of-the-art compression at real-time throughput on Ford and SemanticKITTI. Code and models will be released upon publication.
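The Morton-order-preserving hierarchy rests on a property of Z-order codes: dropping the three lowest bits of a voxel's Morton code gives the code of its parent voxel at the next-lower bit-depth, so a list sorted at one depth stays sorted across the depth transition. A minimal sketch of this invariant (our own illustration, not the authors' released code):

```python
def morton3d(x, y, z, bits=16):
    """Interleave the bits of integer coords (x, y, z) into a Morton (Z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i + 2)  # x bit
        code |= ((y >> i) & 1) << (3 * i + 1)  # y bit
        code |= ((z >> i) & 1) << (3 * i)      # z bit
    return code

# Depth transition: the parent voxel (coords halved) has the child's code >> 3,
# so a Morton-sorted voxel list at depth d remains sorted at depth d-1
# without any per-level re-sorting.
assert morton3d(5, 6, 7) >> 3 == morton3d(5 >> 1, 6 >> 1, 7 >> 1, bits=15)
```

This prefix property is what lets the hierarchy skip per-level sorting and reduce latency.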
Related papers
- Rethinking Autoregressive Models for Lossless Image Compression via Hierarchical Parallelism and Progressive Adaptation [75.58269386927076]
Autoregressive (AR) models are often dismissed as impractical due to prohibitive computational cost. This work rethinks that paradigm, introducing a framework built on hierarchical parallelism and progressive adaptation. Experiments on diverse datasets (natural, satellite, medical) validate that our method achieves new state-of-the-art compression.
arXiv Detail & Related papers (2025-11-14T06:27:58Z)
- Re-Densification Meets Cross-Scale Propagation: Real-Time Neural Compression of LiDAR Point Clouds [83.39320394656855]
LiDAR point clouds are fundamental to various applications, yet high-precision scans incur substantial storage and transmission overhead. Existing methods typically convert unordered points into hierarchical octree or voxel structures for dense-to-sparse predictive coding. Our framework comprises two lightweight modules. First, the Geometry Re-Densification Module re-densifies encoded sparse geometry, extracts features at the denser scale, and then re-sparsifies the features for predictive coding.
arXiv Detail & Related papers (2025-08-28T06:36:10Z)
- A mini-batch training strategy for deep subspace clustering networks [6.517972913340111]
Mini-batch training is a cornerstone of modern deep learning, offering computational efficiency and scalability for training complex architectures. In this work, we introduce a mini-batch training strategy for deep subspace clustering by integrating a memory bank that preserves global feature representations. Our approach enables scalable training of deep architectures for subspace clustering with high-resolution images, overcoming previous limitations.
arXiv Detail & Related papers (2025-07-26T11:44:39Z)
- Hierarchical Attention Networks for Lossless Point Cloud Attribute Compression [22.234604407822673]
We propose a deep hierarchical attention context model for attribute compression of point clouds. A simple and effective Level of Detail (LoD) structure is introduced to yield a coarse-to-fine representation. Points within the same refinement level are encoded in parallel, sharing a common context point group.
arXiv Detail & Related papers (2025-04-01T07:14:10Z)
- DiffCom: Decoupled Sparse Priors Guided Diffusion Compression for Point Clouds [54.96190721255167]
Lossy compression relies on an autoencoder to transform a point cloud into latent points for storage. We propose a diffusion-based framework guided by sparse priors that achieves high reconstruction quality, especially at low bitrates.
arXiv Detail & Related papers (2024-11-21T05:41:35Z)
- Decoupling Fine Detail and Global Geometry for Compressed Depth Map Super-Resolution [55.9977636042469]
We propose a novel framework, termed geometry-decoupled network (GDNet), for compressed depth map super-resolution. It decouples the high-quality depth map reconstruction process by handling global and detailed geometric features separately. Our solution significantly outperforms current methods in terms of geometric consistency and detail recovery.
arXiv Detail & Related papers (2024-11-05T16:37:30Z)
- Fast Point Cloud Geometry Compression with Context-based Residual Coding and INR-based Refinement [19.575833741231953]
We use the KNN method to determine the neighborhoods of raw surface points. A conditional probability model is adaptive to local geometry, leading to significant rate reduction. We incorporate an implicit neural representation into the refinement layer, allowing the decoder to sample points on the underlying surface at arbitrary densities.
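The KNN neighborhood step described above can be sketched as follows (a brute-force NumPy illustration of our own, not the paper's implementation):

```python
import numpy as np

def knn_neighborhoods(points, k=8):
    """For each point, return the indices of its k nearest neighbors (brute force)."""
    # Pairwise squared distances via broadcasting: shape (N, N).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)  # a point is not its own neighbor
    return np.argsort(d2, axis=1)[:, :k]
```

A real coder would use a spatial index (e.g. a KD-tree) for large clouds; the conditional probability model then conditions each point's prediction on its neighborhood's local geometry.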
arXiv Detail & Related papers (2024-08-06T05:24:06Z)
- Multiscale Latent-Guided Entropy Model for LiDAR Point Cloud Compression [18.897023700334458]
The non-uniform distribution and extreme sparsity of the LiDAR point cloud (LPC) pose significant challenges to its efficient compression.
This paper proposes a novel end-to-end, fully-factorized deep framework that encodes the original LPC into an octree structure and hierarchically decomposes the octree entropy model in layers.
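The octree decomposition that such layered entropy models operate on can be sketched as follows (a toy encoder of our own, assuming integer voxel coordinates at the finest depth; the paper's entropy model then predicts these per-node masks layer by layer):

```python
def octree_occupancy(points, depth):
    """Encode voxel coords as one 8-bit child-occupancy mask per occupied node, per level."""
    levels = []
    for d in range(depth):
        shift = depth - 1 - d
        # Occupied child nodes at this level, obtained by truncating coordinates.
        children = {(x >> shift, y >> shift, z >> shift) for (x, y, z) in points}
        masks = {}
        for cx, cy, cz in children:
            parent = (cx >> 1, cy >> 1, cz >> 1)
            bit = ((cx & 1) << 2) | ((cy & 1) << 1) | (cz & 1)  # which octant
            masks[parent] = masks.get(parent, 0) | (1 << bit)
        levels.append(masks)  # one 8-bit symbol per occupied parent node
    return levels
```

Each level's masks are what the hierarchical entropy model assigns probabilities to, conditioned on the coarser levels already decoded.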
arXiv Detail & Related papers (2022-09-26T08:36:11Z)
- Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition [62.41259783906452]
We present a novel global compression framework for deep neural networks.
It automatically analyzes each layer to identify the optimal per-layer compression ratio.
Our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks.
arXiv Detail & Related papers (2021-07-23T20:01:30Z)
- PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning [62.440827696638664]
We introduce a simple algorithm that directly compresses the model differences between neighboring workers.
Inspired by the PowerSGD for centralized deep learning, this algorithm uses power steps to maximize the information transferred per bit.
arXiv Detail & Related papers (2020-08-04T09:14:52Z)
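The power-step idea behind such low-rank compression can be sketched as a rank-1 approximation of the parameter difference between neighboring workers (our own illustration; the actual algorithm warm-starts each round from the previous factors rather than a fresh random vector):

```python
import numpy as np

def rank1_power_step(delta, q):
    """One power iteration: compress matrix `delta` into vectors (p, q), delta ~ p q^T."""
    p = delta @ q
    p /= np.linalg.norm(p) + 1e-12   # normalize the left factor
    q = delta.T @ p                  # refit the right factor
    return p, q                      # only p and q are communicated, not delta
```

For an m-by-n difference this transmits m + n floats instead of m * n; repeated across gossip rounds, the power steps converge toward the dominant singular direction, maximizing the information carried per bit.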
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.