Sparse Tensor-based Multiscale Representation for Point Cloud Geometry
Compression
- URL: http://arxiv.org/abs/2111.10633v1
- Date: Sat, 20 Nov 2021 17:02:45 GMT
- Title: Sparse Tensor-based Multiscale Representation for Point Cloud Geometry
Compression
- Authors: Jianqiang Wang, Dandan Ding, Zhu Li, Xiaoxing Feng, Chuntong Cao, Zhan
Ma
- Abstract summary: We develop a unified Point Cloud Geometry (PCG) compression method through Sparse Tensor Processing (STP) based multiscale representation of voxelized PCG.
Applying the STP reduces the complexity significantly because it only performs convolutions centered at Most-Probable Positively-Occupied Voxels (MP-POVs).
The proposed method has lightweight complexity due to point-wise computation and a small storage footprint because of model sharing across all scales.
- Score: 18.24902526033056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study develops a unified Point Cloud Geometry (PCG) compression method
through Sparse Tensor Processing (STP) based multiscale representation of
voxelized PCG, dubbed SparsePCGC. Applying the STP significantly reduces the
complexity because convolutions are performed only at the Most-Probable
Positively-Occupied Voxels (MP-POVs), and the multiscale representation lets us
compress the scale-wise MP-POVs progressively. The
overall compression efficiency highly depends on the approximation accuracy of
occupancy probability of each MP-POV. Thus, we design the Sparse Convolution
based Neural Networks (SparseCNN) consisting of sparse convolutions and voxel
re-sampling to extensively exploit priors. We then develop the SparseCNN based
Occupancy Probability Approximation (SOPA) model to estimate the occupancy
probability either in a single-stage manner using only the cross-scale prior, or in
a multi-stage manner that additionally exploits autoregressive neighbors step by step. We also
suggest the SparseCNN based Local Neighborhood Embedding (SLNE) to characterize
the local spatial variations as the feature attribute to improve the SOPA. Our
unified approach shows state-of-the-art performance in both lossless and lossy
compression modes across a variety of datasets including the dense PCGs (8iVFB,
Owlii) and the sparse LiDAR PCGs (KITTI, Ford) when compared with the MPEG
G-PCC and other popular learning-based compression schemes. Furthermore, the
proposed method has lightweight complexity thanks to point-wise computation
and a small storage footprint because of model sharing across all scales. We make all
materials publicly accessible at https://github.com/NJUVISION/SparsePCGC for
reproducible research.
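The core of the pipeline is simple to state: voxel coordinates are downscaled dyadically to build the multiscale representation, and the bits spent at each scale depend on how well the occupancy probability of every candidate voxel is predicted. The sketch below is not the authors' code; in SparsePCGC the probabilities come from the SparseCNN-based SOPA model, while here `prob` is just a placeholder tensor used to show the ideal rate estimate.

```python
import torch

def downscale(coords: torch.Tensor) -> torch.Tensor:
    """One multiscale step: child voxel coords (N, 3) -> deduplicated parent voxels."""
    return torch.unique(torch.div(coords, 2, rounding_mode="floor"), dim=0)

def scale_bits(occupancy: torch.Tensor, predicted_prob: torch.Tensor) -> torch.Tensor:
    """Ideal entropy-coding cost (in bits) of the binary occupancy of the
    candidate voxels at one scale, given per-voxel occupancy probabilities."""
    p = predicted_prob.clamp(1e-6, 1 - 1e-6)
    return -(occupancy * p.log2() + (1 - occupancy) * (1 - p).log2()).sum()

coords = torch.tensor([[0, 0, 0], [1, 0, 1], [2, 3, 1]])   # toy occupied voxels
parents = downscale(coords)                                # coarser-scale geometry
occ = torch.tensor([1.0, 0.0, 1.0, 1.0])                   # ground-truth occupancy of candidate voxels
prob = torch.tensor([0.9, 0.2, 0.7, 0.6])                  # stand-in for the SOPA output
print(parents)
print(scale_bits(occ, prob).item())
```

The better the predicted probabilities match the true occupancy, the smaller `scale_bits` becomes, which is why the abstract ties overall compression efficiency to the approximation accuracy of SOPA.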
Related papers
- LoRC: Low-Rank Compression for LLMs KV Cache with a Progressive Compression Strategy [59.1298692559785]
The Key-Value (KV) cache is a crucial component in serving transformer-based autoregressive large language models (LLMs).
Existing approaches to mitigate this issue include: (1) efficient attention variants integrated in upcycling stages, and (2) KV cache compression at test time.
We propose a low-rank approximation of KV weight matrices, allowing plug-in integration with existing transformer-based LLMs without model retraining.
Our method is designed to function without model tuning in upcycling stages or task-specific profiling in test stages.
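A hedged sketch of the generic building block this summary names, a truncated-SVD low-rank factorization of a KV projection weight; the rank value and the way LoRC allocates ranks across layers are illustrative assumptions.

```python
import torch

def low_rank_factorize(W: torch.Tensor, rank: int):
    """Approximate W (out, in) as A @ B with A (out, r) and B (r, in)."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]        # absorb singular values into the left factor
    B = Vh[:rank, :]
    return A, B

W = torch.randn(4096, 4096)           # e.g. a key-projection weight matrix
A, B = low_rank_factorize(W, rank=256)
x = torch.randn(8, 4096)
approx = (x @ B.T) @ A.T              # plug-in replacement for x @ W.T, no retraining
```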
arXiv Detail & Related papers (2024-10-04T03:10:53Z) - ECNR: Efficient Compressive Neural Representation of Time-Varying
Volumetric Datasets [6.3492793442257085]
Compressive neural representation has emerged as a promising alternative to traditional compression methods for managing massive datasets.
This paper presents an efficient compressive neural representation (ECNR) solution for time-varying data compression.
We show the effectiveness of ECNR with multiple datasets and compare it with state-of-the-art compression methods.
arXiv Detail & Related papers (2023-10-02T06:06:32Z) - Binarized Spectral Compressive Imaging [59.18636040850608]
Existing deep learning models for hyperspectral image (HSI) reconstruction achieve good performance but require powerful hardware with enormous memory and computational resources.
We propose a novel method, Binarized Spectral-Redistribution Network (BiSRNet)
BiSRNet is derived by using the proposed techniques to binarize the base model.
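"Binarizing the base model" follows the standard 1-bit quantization recipe; below is a minimal, generic sketch of weight binarization with a straight-through estimator, not BiSRNet's actual Binarized Spectral-Redistribution blocks.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Binarize to {-1, +1} in the forward pass; pass clipped gradients
    through in the backward pass (straight-through estimator)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()

w = torch.randn(16, 3, 3, 3, requires_grad=True)   # full-precision conv weight
w_bin = BinarizeSTE.apply(w)                        # {-1, +1} weight used at inference
```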
arXiv Detail & Related papers (2023-05-17T15:36:08Z) - CARNet:Compression Artifact Reduction for Point Cloud Attribute [37.78660069355263]
A learning-based adaptive loop filter is developed for the Geometry-based Point Cloud Compression (G-PCC) standard to reduce compression artifacts.
The proposed method first generates multiple Most-Probable Sample Offsets (MPSOs) as potential compression distortion approximations, and then linearly weights them for artifact mitigation.
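The weighting step described above reduces to a learned combination of candidate offsets added back to the decoded attributes; a toy sketch in which the shapes and weights are illustrative and the network that predicts the MPSOs and their weights is omitted.

```python
import torch

decoded_attr = torch.rand(1024, 3)              # decoded RGB attributes (with artifacts)
mpso = torch.randn(4, 1024, 3) * 0.01           # 4 candidate per-point offsets
weights = torch.softmax(torch.randn(4), dim=0)  # mixing weights (learned in the paper)
restored = decoded_attr + (weights[:, None, None] * mpso).sum(dim=0)
```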
arXiv Detail & Related papers (2022-09-17T08:05:35Z) - Efficient LiDAR Point Cloud Geometry Compression Through Neighborhood
Point Attention [25.054578678654796]
This work suggests neighborhood point attention (NPA) to tackle these issues.
We first use k nearest neighbors (kNN) to construct an adaptive local neighborhood.
We then leverage the self-attention mechanism to dynamically aggregate information within this neighborhood.
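A compact sketch of the "kNN neighborhood plus self-attention aggregation" idea described above, written in plain PyTorch; the single-head dot-product attention and the feature sizes are assumptions, not the NPA design.

```python
import torch

def knn_attention(points: torch.Tensor, feats: torch.Tensor, k: int = 8) -> torch.Tensor:
    """points: (N, 3), feats: (N, C) -> attention-aggregated features (N, C)."""
    dist = torch.cdist(points, points)                  # (N, N) pairwise distances
    knn_idx = dist.topk(k, largest=False).indices       # (N, k) nearest neighbors
    neigh = feats[knn_idx]                              # (N, k, C) neighbor features
    q = feats.unsqueeze(1)                              # (N, 1, C) query = center point
    attn = torch.softmax((q * neigh).sum(-1) / feats.shape[-1] ** 0.5, dim=-1)  # (N, k)
    return (attn.unsqueeze(-1) * neigh).sum(dim=1)      # weighted aggregation

pts, f = torch.randn(100, 3), torch.randn(100, 32)
out = knn_attention(pts, f)
```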
arXiv Detail & Related papers (2022-08-26T10:44:30Z) - SDQ: Stochastic Differentiable Quantization with Mixed Precision [46.232003346732064]
We present a novel Stochastic Differentiable Quantization (SDQ) method that can automatically learn the mixed-precision quantization (MPQ) strategy.
After the optimal MPQ strategy is acquired, we train our network with entropy-aware bin regularization and knowledge distillation.
SDQ outperforms all state-of-the-art mixed or single precision quantization methods with a lower bitwidth.
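A hedged sketch of the generic idea behind learning an MPQ strategy differentiably: per-layer logits over candidate bitwidths softly mix the correspondingly quantized weights, so gradients can update the bitwidth choice. SDQ's actual stochastic formulation, entropy-aware bin regularization, and distillation losses are not reproduced here.

```python
import torch

def uniform_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric uniform quantization of w to the given bitwidth."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax + 1e-12
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale

bitwidths = [2, 4, 8]
logits = torch.zeros(len(bitwidths), requires_grad=True)  # per-layer bitwidth logits
w = torch.randn(64, 64)
probs = torch.softmax(logits, dim=0)
# soft mixture of differently quantized weights; gradients flow into `logits`
w_q = sum(p * uniform_quantize(w, b) for p, b in zip(probs, bitwidths))
```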
arXiv Detail & Related papers (2022-06-09T12:38:18Z) - OPQ: Compressing Deep Neural Networks with One-shot Pruning-Quantization [32.60139548889592]
We propose a novel One-shot Pruning-Quantization (OPQ) method in this paper.
OPQ analytically solves the compression allocation with pre-trained weight parameters only.
We propose a unified channel-wise quantization method that enforces all channels of each layer to share a common codebook.
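A sketch of channel-wise quantization in which all channels of a layer share one codebook, as the summary describes; the uniform codebook here is an illustrative stand-in for the codebook OPQ derives analytically from the pre-trained weights.

```python
import torch

def quantize_layer(w: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Map every weight of a layer to its nearest entry of one shared codebook."""
    flat = w.reshape(-1, 1)
    idx = (flat - codebook.reshape(1, -1)).abs().argmin(dim=1)
    return codebook[idx].reshape(w.shape)

w = torch.randn(32, 16, 3, 3)                   # conv weight (out, in, kH, kW)
codebook = torch.linspace(-1.0, 1.0, steps=16)  # 16-entry codebook shared by all channels
w_q = quantize_layer(w, codebook)
```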
arXiv Detail & Related papers (2022-05-23T09:05:25Z) - An Adaptive Device-Edge Co-Inference Framework Based on Soft
Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially on Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a framework can adapt well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
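As a toy illustration of a latency- and accuracy-aware reward of the kind mentioned above; the exit-point and bit actions, the weighting `lam`, and the target latency are hypothetical, not the paper's reward design.

```python
def reward(accuracy: float, latency_ms: float, target_ms: float = 50.0, lam: float = 1.0) -> float:
    """Reward rises with task accuracy and is penalized when end-to-end
    latency exceeds the target; weights and normalization are illustrative."""
    return accuracy - lam * max(0.0, latency_ms - target_ms) / target_ms
```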
arXiv Detail & Related papers (2022-01-09T09:31:50Z) - Communication-Efficient Federated Learning via Quantized Compressed
Sensing [82.10695943017907]
The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server.
Thanks to gradient sparsification and quantization, our strategy can achieve a higher compression ratio than one-bit gradient compression.
We demonstrate that the framework achieves almost identical performance with the case that performs no compression.
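A minimal sketch of the "sparsify then quantize" gradient pipeline the summary refers to: top-k sparsification followed by low-bit uniform quantization of the surviving entries. The compressed-sensing projection and the server-side reconstruction used in the paper are not reproduced.

```python
import torch

def compress_gradient(g: torch.Tensor, k: int, bits: int = 2):
    flat = g.flatten()
    idx = flat.abs().topk(k).indices                 # keep the k largest-magnitude entries
    vals = flat[idx]
    scale = vals.abs().max() / (2 ** (bits - 1) - 1) + 1e-12
    q = (vals / scale).round().clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return idx, q.to(torch.int8), scale

def reconstruct(idx, q, scale, numel, shape):
    flat = torch.zeros(numel)
    flat[idx] = q.float() * scale                    # server-side rebuild of the sparse gradient
    return flat.reshape(shape)

g = torch.randn(1024)
idx, q, scale = compress_gradient(g, k=64)
g_hat = reconstruct(idx, q, scale, g.numel(), g.shape)
```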
arXiv Detail & Related papers (2021-11-30T02:13:54Z) - Compact representations of convolutional neural networks via weight
pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as well as the baseline.
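A sketch of the kind of pruned-and-quantized representation the summary describes: small weights are dropped, survivors are replaced by indices into a short codebook, and only positions, codes, and the codebook need storing. The paper's actual source (entropy) coding of these symbols is omitted, and the keep ratio and codebook size are illustrative.

```python
import torch

def prune_and_quantize(w: torch.Tensor, keep_ratio: float = 0.1, levels: int = 16):
    flat = w.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    idx = flat.abs().topk(k).indices                       # surviving positions
    vals = flat[idx]
    codebook = torch.linspace(vals.min().item(), vals.max().item(), levels)
    codes = (vals.reshape(-1, 1) - codebook).abs().argmin(dim=1)
    return idx, codes.to(torch.uint8), codebook            # compact representation

idx, codes, codebook = prune_and_quantize(torch.randn(256, 256))
```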
arXiv Detail & Related papers (2021-08-28T20:39:54Z) - Permute, Quantize, and Fine-tune: Efficient Compression of Neural
Networks [70.0243910593064]
Key to the success of vector quantization is deciding which parameter groups should be compressed together.
In this paper we make the observation that the weights of two adjacent layers can be permuted while expressing the same function.
We then establish a connection to rate-distortion theory and search for permutations that result in networks that are easier to compress.
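The observation highlighted above, that adjacent layers can be permuted without changing the function, is easy to verify directly; a small PyTorch check follows (the rate-distortion-guided search for a good permutation is not shown).

```python
import torch

lin1, lin2 = torch.nn.Linear(8, 16), torch.nn.Linear(16, 4)
x = torch.randn(2, 8)
y_ref = lin2(torch.relu(lin1(x)))

perm = torch.randperm(16)                    # permute the hidden units
with torch.no_grad():
    lin1.weight.copy_(lin1.weight[perm])     # reorder rows of the first layer
    lin1.bias.copy_(lin1.bias[perm])
    lin2.weight.copy_(lin2.weight[:, perm])  # reorder columns of the next layer

y_perm = lin2(torch.relu(lin1(x)))
assert torch.allclose(y_ref, y_perm, atol=1e-6)  # same function, different weight layout
```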
arXiv Detail & Related papers (2020-10-29T15:47:26Z)