LVQAC: Lattice Vector Quantization Coupled with Spatially Adaptive
Companding for Efficient Learned Image Compression
- URL: http://arxiv.org/abs/2304.12319v1
- Date: Sat, 25 Mar 2023 23:34:15 GMT
- Title: LVQAC: Lattice Vector Quantization Coupled with Spatially Adaptive
Companding for Efficient Learned Image Compression
- Authors: Xi Zhang and Xiaolin Wu
- Abstract summary: We present a novel Lattice Vector Quantization scheme coupled with a spatially Adaptive Companding (LVQAC) mapping.
For any end-to-end CNN image compression model, replacing the uniform quantizer with LVQAC achieves better rate-distortion performance without significantly increasing the model complexity.
- Score: 24.812267280543693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, numerous end-to-end optimized image compression neural networks
have been developed and proved themselves as leaders in rate-distortion
performance. The main strength of these learnt compression methods is in
powerful nonlinear analysis and synthesis transforms that can be facilitated by
deep neural networks. However, out of operational expediency, most of these
end-to-end methods adopt uniform scalar quantizers rather than vector
quantizers, which are information-theoretically optimal. In this paper, we
present a novel Lattice Vector Quantization scheme coupled with a spatially
Adaptive Companding (LVQAC) mapping. LVQ can better exploit the inter-feature
dependencies than scalar uniform quantization while being computationally
almost as simple as the latter. Moreover, to improve the adaptability of LVQ to
source statistics, we couple a spatially adaptive companding (AC) mapping with
LVQ. The resulting LVQAC design can be easily embedded into any end-to-end
optimized image compression system. Extensive experiments demonstrate that for
any end-to-end CNN image compression model, replacing the uniform quantizer with
LVQAC achieves better rate-distortion performance without significantly
increasing the model complexity.
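As a rough illustration of the two ingredients named in the abstract, the sketch below (not the authors' code) pairs an element-wise companding map with nearest-point quantization on a lattice, here the D4 lattice with the classic Conway-Sloane rounding rule. The mu-law compander, the grouping of channels into blocks of four, and the fixed step size are assumptions made only for this sketch; in LVQAC the companding is learned end-to-end and spatially adaptive.

```python
import numpy as np

def mu_law_compand(x, mu=255.0):
    # Element-wise compander, a fixed stand-in for the learned,
    # spatially adaptive companding mapping of LVQAC (assumption).
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    # Inverse of mu_law_compand (the "expanding" half of companding).
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def nearest_d4_point(v):
    # Nearest point of the D4 lattice (integer vectors with an even
    # coordinate sum), via the standard Conway-Sloane rounding rule.
    r = np.rint(v)
    if int(r.sum()) % 2 != 0:
        # Re-round the worst coordinate the other way to restore even parity.
        i = int(np.argmax(np.abs(v - r)))
        r[i] += 1.0 if v[i] > r[i] else -1.0
    return r

def lvqac_quantize(latent, step=0.25, mu=255.0):
    # latent: (C, H, W) array with C divisible by 4 (assumption).
    # 1) compand, 2) lattice-quantize 4-channel blocks, 3) expand back.
    c, h, w = latent.shape
    y = mu_law_compand(latent, mu).reshape(c // 4, 4, h * w)
    q = np.empty_like(y)
    for b in range(q.shape[0]):
        for n in range(q.shape[2]):
            q[b, :, n] = step * nearest_d4_point(y[b, :, n] / step)
    return mu_law_expand(q.reshape(c, h, w), mu)

# Tiny usage example on a random latent tensor.
latent = 0.5 * np.random.randn(8, 4, 4)
recon = lvqac_quantize(latent)
print("max abs quantization error:", np.max(np.abs(recon - latent)))
```

In an actual codec the lattice indices would be entropy coded; the function above only returns the de-quantized latent to keep the sketch short.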
Related papers
Approaching Rate-Distortion Limits in Neural Compression with Lattice Transform Coding [33.377272636443344]
A typical neural compression design transforms the source to a latent vector, which is then rounded to integers and entropy coded.
We show that this scheme is highly sub-optimal on i.i.d. sequences, and in fact always recovers scalar quantization of the original source sequence.
By employing lattice quantization instead of scalar quantization in the latent space, we demonstrate that Lattice Transform Coding (LTC) is able to recover optimal vector quantization at various dimensions.
arXiv Detail & Related papers (2024-03-12T05:09:25Z)
Soft Convex Quantization: Revisiting Vector Quantization with Convex Optimization [40.1651740183975]
We propose Soft Convex Quantization (SCQ) as a direct substitute for Vector Quantization (VQ).
SCQ works like a differentiable convex optimization (DCO) layer.
We demonstrate its efficacy on the CIFAR-10, GTSRB and LSUN datasets.
arXiv Detail & Related papers (2023-10-04T17:45:14Z) - Joint Hierarchical Priors and Adaptive Spatial Resolution for Efficient
Neural Image Compression [11.25130799452367]
We propose an absolute image compression transformer (ICT) for neural image compression (NIC).
ICT captures both global and local contexts from the latent representations and better parameterizes the distribution of the quantized latents.
Our framework significantly improves the trade-off between coding efficiency and decoder complexity over the Versatile Video Coding (VVC) reference encoder (VTM-18.0) and the neural SwinT-ChARM.
arXiv Detail & Related papers (2023-07-05T13:17:14Z)
Weight Re-Mapping for Variational Quantum Algorithms [54.854986762287126]
We introduce the concept of weight re-mapping for variational quantum circuits (VQCs).
We employ seven distinct weight re-mapping functions to assess their impact on eight classification datasets.
Our results indicate that weight re-mapping can enhance the convergence speed of the VQC.
arXiv Detail & Related papers (2023-06-09T09:42:21Z)
NVTC: Nonlinear Vector Transform Coding [35.10187626615328]
In theory, vector quantization (VQ) is always better than scalar quantization (SQ) in terms of rate-distortion (R-D) performance.
Recent state-of-the-art methods for neural image compression are mainly based on nonlinear transform coding (NTC) with uniform scalar quantization.
We propose a novel framework for neural image compression named Nonlinear Vector Transform Coding (NVTC).
arXiv Detail & Related papers (2023-05-25T13:06:38Z)
Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
This work proposes a simple and effective invertible arbitrary rescaling network (IARN) that achieves arbitrary image rescaling by training only one model.
It is shown to achieve state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression [151.3826781154146]
Modeling latent variables with priors and hyperpriors is an essential problem in variational image compression.
We find that inter-correlations and intra-correlations exist when latent variables are observed from a vectorized perspective.
Our model achieves better rate-distortion performance and an impressive $3.18\times$ compression speed-up.
arXiv Detail & Related papers (2022-03-21T11:44:17Z)
Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms target tradeoffs by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
arXiv Detail & Related papers (2021-12-14T07:35:22Z)
Substitutional Neural Image Compression [48.20906717052056]
Substitutional Neural Image Compression (SNIC) is a general approach for enhancing any neural image compression model.
It boosts compression performance toward a flexible distortion metric and enables bit-rate control using a single model instance.
arXiv Detail & Related papers (2021-05-16T20:53:31Z)
An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems [77.88178159830905]
Sparsity-Inducing Distribution-based Compression (SIDCo) is a threshold-based sparsification scheme that enjoys similar threshold estimation quality to deep gradient compression (DGC).
Our evaluation shows SIDCo speeds up training by up to 41.7%, 7.6%, and 1.9% compared to the no-compression baseline, Top-k, and DGC compressors, respectively.
arXiv Detail & Related papers (2021-01-26T13:06:00Z)
End-to-End Facial Deep Learning Feature Compression with Teacher-Student Enhancement [57.18801093608717]
We propose a novel end-to-end feature compression scheme by leveraging the representation and learning capability of deep neural networks.
In particular, the extracted features are compactly coded in an end-to-end manner by optimizing the rate-distortion cost.
We verify the effectiveness of the proposed model with the facial feature, and experimental results reveal better compression performance in terms of rate-accuracy.
arXiv Detail & Related papers (2020-02-10T10:08:44Z)