OctSqueeze: Octree-Structured Entropy Model for LiDAR Compression
- URL: http://arxiv.org/abs/2005.07178v2
- Date: Fri, 8 Jan 2021 22:27:07 GMT
- Title: OctSqueeze: Octree-Structured Entropy Model for LiDAR Compression
- Authors: Lila Huang, Shenlong Wang, Kelvin Wong, Jerry Liu, Raquel Urtasun
- Abstract summary: We present a novel deep compression algorithm to reduce the memory footprint of LiDAR point clouds.
Our method exploits the sparsity and structural redundancy between points to reduce the bitrate.
Our algorithm can be used to reduce the onboard and offboard storage of LiDAR points for applications such as self-driving cars.
- Score: 77.8842824702423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel deep compression algorithm to reduce the memory footprint
of LiDAR point clouds. Our method exploits the sparsity and structural
redundancy between points to reduce the bitrate. Towards this goal, we first
encode the LiDAR points into an octree, a data-efficient structure suitable for
sparse point clouds. We then design a tree-structured conditional entropy model
that models the probabilities of the octree symbols to encode the octree into a
compact bitstream. We validate the effectiveness of our method over two
large-scale datasets. The results demonstrate that our approach reduces the
bitrate by 10-20% at the same reconstruction quality, compared to the previous
state-of-the-art. Importantly, we also show that for the same bitrate, our
approach outperforms other compression algorithms when performing downstream 3D
segmentation and detection tasks using compressed representations. Our
algorithm can be used to reduce the onboard and offboard storage of LiDAR
points for applications such as self-driving cars, where a single vehicle
captures 84 billion points per day.
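The pipeline the abstract describes is concrete enough to sketch: quantize the point cloud into an octree, emit one 8-bit child-occupancy symbol per node, and let a learned conditional model drive an arithmetic coder. The Python sketch below is a minimal illustration under stated assumptions, not the authors' implementation; the function names, the breadth-first serialization order, and the stand-in `prob_model` interface are all hypothetical.

```python
import numpy as np

def octree_symbols(points, depth):
    """Serialize a point cloud as one 8-bit occupancy byte per octree node.

    Assumes points lie in the unit cube [0, 1); breadth-first order is one
    common choice, not necessarily the paper's.
    """
    vox = np.unique((points * (1 << depth)).astype(np.int64), axis=0)  # leaf voxels
    symbols = []
    parents = [(0, 0, 0)]                       # the root cell at level 0
    for level in range(1, depth + 1):
        shift = depth - level
        occupied = {tuple(v) for v in (vox >> shift)}  # cells occupied at this level
        next_parents = []
        for (x, y, z) in parents:
            byte = 0
            for i in range(8):                  # child i encodes (dx, dy, dz) in its bits
                child = (2 * x + (i & 1), 2 * y + ((i >> 1) & 1), 2 * z + ((i >> 2) & 1))
                if child in occupied:
                    byte |= 1 << i
                    next_parents.append(child)
            symbols.append(byte)
        parents = next_parents
    return symbols

def estimated_bits(symbols, prob_model):
    """Cross-entropy bitrate an arithmetic coder would approach, given a
    conditional model prob_model(context) -> length-256 distribution."""
    return sum(-np.log2(prob_model(symbols[:i])[s]) for i, s in enumerate(symbols))

pts = np.random.rand(1000, 3)                   # toy cloud in the unit cube
syms = octree_symbols(pts, depth=8)
uniform = lambda ctx: np.full(256, 1 / 256)     # no learning: 8 bits per symbol
print(len(syms), estimated_bits(syms, uniform))
```

Under the uniform stand-in every symbol costs exactly 8 bits; the paper's contribution is a learned, tree-structured `prob_model` whose sharper per-node predictions lower this cross-entropy, and with it the bitrate.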
Related papers
- Point Cloud Compression with Bits-back Coding [32.9521748764196]
This paper uses a deep learning-based probabilistic model to estimate the Shannon entropy of the point cloud data.
Once the entropy of the point cloud dataset is estimated, we use the learned CVAE model to compress the geometric attributes of the point clouds.
The novelty of our method lies in using bits-back coding to exploit the CVAE's learned latent variable model when compressing the point cloud data.
arXiv Detail & Related papers (2024-10-09T06:34:48Z) - STAT: Shrinking Transformers After Training [72.0726371426711]
We present STAT, a simple algorithm to prune transformer models without any fine-tuning.
STAT eliminates both attention heads and neurons from the network, while preserving accuracy by calculating a correction to the weights of the next layer.
Our entire algorithm takes minutes to compress BERT, and less than three hours to compress models with 7B parameters using a single GPU.
arXiv Detail & Related papers (2024-05-29T22:59:11Z) - Compression of Structured Data with Autoencoders: Provable Benefit of
Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z) - Improving Dual-Encoder Training through Dynamic Indexes for Negative
Mining [61.09807522366773]
We introduce an algorithm that approximates the softmax with provable bounds and that dynamically maintains the tree index.
In our study on datasets with over twenty million targets, our approach halves the error relative to oracle brute-force negative mining.
arXiv Detail & Related papers (2023-03-27T15:18:32Z) - ECM-OPCC: Efficient Context Model for Octree-based Point Cloud
Compression [6.509720419113212]
We propose a sufficient yet efficient context model and design an efficient deep learning framework for point cloud compression.
Specifically, we first propose a window-constrained multi-group coding strategy to exploit the autoregressive context.
We also propose a dual transformer architecture to exploit the dependency of the current node on its ancestors and siblings (a sketch of this ancestor-conditioned idea appears after this list).
arXiv Detail & Related papers (2022-11-20T09:20:32Z) - Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
arXiv Detail & Related papers (2022-02-09T18:48:02Z) - DeepCompress: Efficient Point Cloud Geometry Compression [1.808877001896346]
We propose a more efficient deep learning-based encoder architecture for point cloud compression.
We show that incorporating the learned activation function from Computationally Efficient Neural Image Compression (CENIC) yields dramatic gains in efficiency and performance.
Our proposed modifications outperform the baseline approaches by a small margin in terms of Bjontegaard delta rate and PSNR values.
arXiv Detail & Related papers (2021-06-02T23:18:11Z) - MuSCLE: Multi Sweep Compression of LiDAR using Deep Entropy Models [78.93424358827528]
We present a novel compression algorithm for reducing the memory footprint of LiDAR sensor data streams.
Our method significantly reduces the joint geometry and intensity bitrate over prior state-of-the-art LiDAR compression methods.
arXiv Detail & Related papers (2020-11-15T17:41:14Z)
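A thread running through this list (OctSqueeze itself, ECM-OPCC, MuSCLE) is to predict each octree node's 256-way symbol distribution from tree context, such as its ancestors, and entropy-code against that prediction. The PyTorch sketch below illustrates the ancestor-conditioned idea only; the class name `AncestorContextModel`, the feature choices, and the layer sizes are assumptions, not any of these papers' actual architectures.

```python
import torch
import torch.nn as nn

class AncestorContextModel(nn.Module):
    """Toy ancestor-conditioned entropy model: predict a node's 256-way
    occupancy-symbol distribution from its ancestors' symbols and its depth."""

    def __init__(self, num_ancestors=3, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(256, 32)      # one embedding per occupancy byte
        self.mlp = nn.Sequential(
            nn.Linear(num_ancestors * 32 + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 256),             # logits over the 256 symbols
        )

    def forward(self, ancestor_syms, depth):
        # ancestor_syms: (batch, num_ancestors) int64; depth: (batch, 1) float
        ctx = self.embed(ancestor_syms).flatten(1)
        return self.mlp(torch.cat([ctx, depth], dim=1))

model = AncestorContextModel()
syms = torch.randint(0, 256, (16,))             # toy ground-truth node symbols
anc = torch.randint(0, 256, (16, 3))            # their ancestors' symbols
d = torch.rand(16, 1)                           # normalized node depths
loss = nn.functional.cross_entropy(model(anc, d), syms)
```

Minimizing this cross-entropy directly minimizes the expected code length, since an arithmetic coder pays about -log2 p(symbol) bits per node under the model's prediction.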
This list is automatically generated from the titles and abstracts of the papers on this site.