NVTC: Nonlinear Vector Transform Coding
- URL: http://arxiv.org/abs/2305.16025v1
- Date: Thu, 25 May 2023 13:06:38 GMT
- Title: NVTC: Nonlinear Vector Transform Coding
- Authors: Runsen Feng, Zongyu Guo, Weiping Li, Zhibo Chen
- Abstract summary: In theory, vector quantization (VQ) is always better than scalar quantization (SQ) in terms of rate-distortion (R-D) performance.
Recent state-of-the-art methods for neural image compression are mainly based on nonlinear transform coding (NTC) with uniform scalar quantization.
We propose a novel framework for neural image compression named Nonlinear Vector Transform Coding (NVTC).
- Score: 35.10187626615328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In theory, vector quantization (VQ) is always better than scalar quantization
(SQ) in terms of rate-distortion (R-D) performance. Recent state-of-the-art
methods for neural image compression are mainly based on nonlinear transform
coding (NTC) with uniform scalar quantization, overlooking the benefits of VQ
due to its exponentially increased complexity. In this paper, we first
investigate some toy sources, demonstrating that even if modern neural
networks considerably enhance the compression performance of SQ with nonlinear
transforms, there is still an insurmountable chasm between SQ and VQ. Therefore,
revolving around VQ, we propose a novel framework for neural image compression
named Nonlinear Vector Transform Coding (NVTC). NVTC solves the critical
complexity issue of VQ through (1) a multi-stage quantization strategy and (2)
nonlinear vector transforms. In addition, we apply entropy-constrained VQ in
latent space to adaptively determine the quantization boundaries for joint
rate-distortion optimization, which improves the performance both theoretically
and experimentally. Compared to previous NTC approaches, NVTC demonstrates
superior rate-distortion performance, faster decoding speed, and smaller model
size. Our code is available at https://github.com/USTC-IMCL/NVTC
Related papers
- Optimal depth and a novel approach to variational quantum process tomography [11.496254312838659]
We present two new methods for Variational Quantum Circuit (VQC) process tomography on $n$-qubit systems: PT_VQC and U-VQSVD.
Compared to the state of the art, PT_VQC halves the number of qubits required per run for process tomography.
U-VQSVD outperforms an uninformed attack (using randomly generated input states) by a factor of 2 to 5, depending on the qubit dimension.
arXiv Detail & Related papers (2024-04-25T11:58:06Z) - Approaching Rate-Distortion Limits in Neural Compression with Lattice
Transform Coding [33.377272636443344]
The standard neural compression design involves transforming the source to a latent vector, which is then rounded to integers and entropy coded.
We show that this design is highly sub-optimal on i.i.d. sequences and in fact always recovers scalar quantization of the original source sequence.
By employing lattice quantization instead of scalar quantization in the latent space, we demonstrate that Lattice Transform Coding (LTC) is able to recover optimal vector quantization at various dimensions.
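To see how lattice quantization differs from rounding each latent coordinate independently, here is a minimal NumPy sketch of the classic nearest-point rule for the D_n lattice (integer vectors with even coordinate sum). It is a generic textbook routine for illustration, not the LTC scheme itself:

```python
import numpy as np

def quantize_Dn(x):
    """Nearest point of the D_n lattice (integer vectors whose
    coordinates sum to an even number). Compared with plain
    per-coordinate rounding (scalar quantization), the D_n cell
    packs space more efficiently, which is the gain lattice
    quantization brings to the latent space."""
    f = np.round(x)
    if int(f.sum()) % 2 == 0:
        return f
    # Flip the coordinate with the largest rounding error to its
    # second-nearest integer so the coordinate sum becomes even.
    k = int(np.argmax(np.abs(x - f)))
    f[k] += 1.0 if x[k] > f[k] else -1.0
    return f

# Example: scalar rounding of (0.9, 0.4) gives (1, 0), which is not
# in D_2; the lattice quantizer returns (1, 1) instead.
print(quantize_Dn(np.array([0.9, 0.4])))
```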
arXiv Detail & Related papers (2024-03-12T05:09:25Z) - Soft Convex Quantization: Revisiting Vector Quantization with Convex
Optimization [40.1651740183975]
We propose Soft Convex Quantization (SCQ) as a direct substitute for Vector Quantization (VQ).
SCQ works like a differentiable convex optimization (DCO) layer.
We demonstrate its efficacy on the CIFAR-10, GTSRB and LSUN datasets.
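As a rough illustration of what a convex-optimization quantization layer does, the sketch below approximates a latent vector by its projection onto the convex hull of the codewords, i.e. it minimizes ||codebook.T @ w - z||^2 over the probability simplex, here with plain projected gradient descent. This is a toy stand-in under my own assumptions, not the differentiable DCO layer used by SCQ:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def soft_convex_quantize(z, codebook, steps=300, lr=0.05):
    """Approximate z by a convex combination of the rows of `codebook`
    (shape [k, d]): minimize ||codebook.T @ w - z||^2 subject to w
    lying on the simplex, via projected gradient descent."""
    k = codebook.shape[0]
    w = np.full(k, 1.0 / k)
    for _ in range(steps):
        grad = codebook @ (codebook.T @ w - z)
        w = project_simplex(w - lr * grad)
    return codebook.T @ w, w
```

Because the weights are the solution of a small convex program, gradients can flow through the quantization step; SCQ itself uses a DCO layer rather than an unrolled loop like this one.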
arXiv Detail & Related papers (2023-10-04T17:45:14Z) - LVQAC: Lattice Vector Quantization Coupled with Spatially Adaptive
Companding for Efficient Learned Image Compression [24.812267280543693]
We present a novel Lattice Vector Quantization scheme coupled with a spatially Adaptive Companding (LVQAC) mapping.
For any end-to-end CNN image compression model, replacing the uniform quantizer with LVQAC achieves better rate-distortion performance without significantly increasing model complexity.
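Companding itself is a classical trick: push the signal through a fixed nonlinear map, apply a simple quantizer, and invert the map at the decoder. The textbook mu-law version below is only a baseline for intuition; LVQAC replaces the fixed curve with a learned, spatially adaptive mapping and pairs it with a lattice vector quantizer rather than the uniform one used here:

```python
import numpy as np

def mu_law_compand(x, mu=255.0):
    """Compress the dynamic range of x in [-1, 1] before quantization."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    """Exact inverse of mu_law_compand."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def compand_quantize(x, step=0.1, mu=255.0):
    """Compand, quantize uniformly with the given step, then expand.
    Values near zero receive finer effective quantization than large
    ones, mimicking what an adaptive companding map learns to do."""
    y = step * np.round(mu_law_compand(x, mu) / step)
    return mu_law_expand(y, mu)
```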
arXiv Detail & Related papers (2023-03-25T23:34:15Z) - Towards Neural Variational Monte Carlo That Scales Linearly with System
Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductivity.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z) - Alternating Layered Variational Quantum Circuits Can Be Classically
Optimized Efficiently Using Classical Shadows [4.680722019621822]
Variational quantum algorithms (VQAs) are the quantum analog of classical neural networks (NNs).
We introduce a training algorithm with an exponential reduction in the training cost of such VQAs.
arXiv Detail & Related papers (2022-08-24T15:47:44Z) - Theoretical Error Performance Analysis for Variational Quantum Circuit
Based Functional Regression [83.79664725059877]
In this work, we put forth an end-to-end quantum neural network, namely, TTN-VQC, for dimensionality reduction and functional regression.
We also characterize the optimization properties of TTN-VQC by leveraging the Polyak-Lojasiewicz (PL) condition.
arXiv Detail & Related papers (2022-06-08T06:54:07Z) - Characterizing the loss landscape of variational quantum circuits [77.34726150561087]
We introduce a way to compute the Hessian of the loss function of VQCs.
We show how this information can be interpreted and compared to classical neural networks.
arXiv Detail & Related papers (2020-08-06T17:48:12Z) - Kernel Quantization for Efficient Network Compression [59.55192551370948]
Kernel Quantization (KQ) aims to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version without significant performance loss.
Inspired by the evolution from weight pruning to filter pruning, we propose to quantize at both the kernel and weight levels.
Experiments on the ImageNet classification task prove that KQ needs 1.05 and 1.62 bits on average in VGG and ResNet18, respectively, to represent each parameter in the convolution layer.
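A rough sketch of the kernel-level half of the idea: cluster the 2-D kernels of a convolution weight tensor into a small codebook and store one index per kernel. The helper names and the plain k-means below are my own simplifications; KQ additionally quantizes at the weight level and fine-tunes, which is not shown:

```python
import numpy as np

def kmeans_codebook(vectors, k, iters=20, seed=0):
    """Plain k-means over flattened kernels; returns centroids and assignments."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
    for _ in range(iters):
        assign = np.argmin(((vectors[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = vectors[assign == j].mean(axis=0)
    return centers, assign

def quantize_conv_kernels(weight, k=64):
    """Kernel-level quantization: every (kh, kw) kernel of a conv weight
    with shape (out_ch, in_ch, kh, kw) is replaced by its nearest codeword,
    so the layer stores k codewords plus one small index per kernel."""
    oc, ic, kh, kw = weight.shape
    flat = weight.reshape(oc * ic, kh * kw)
    centers, assign = kmeans_codebook(flat, k)
    return centers[assign].reshape(weight.shape), assign.reshape(oc, ic)
```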
arXiv Detail & Related papers (2020-03-11T08:00:04Z) - Optimal Gradient Quantization Condition for Communication-Efficient
Distributed Training [99.42912552638168]
Communication of gradients is costly for training deep neural networks with multiple devices in computer vision applications.
In this work, we deduce the optimal condition for both binary and multi-level gradient quantization for any gradient distribution.
Based on the optimal condition, we develop two novel quantization schemes: biased BinGrad and unbiased ORQ for binary and multi-level gradient quantization, respectively.
arXiv Detail & Related papers (2020-02-25T18:28:39Z)
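For background on what an unbiased multi-level gradient quantizer looks like, here is a generic QSGD-style sketch with uniform levels and stochastic rounding. The paper's BinGrad and ORQ schemes instead place the levels according to the derived optimal condition, which is not reproduced here:

```python
import numpy as np

def quantize_gradient(g, levels=4, rng=None):
    """Unbiased stochastic gradient quantization with uniform levels:
    each magnitude is stochastically rounded to one of `levels` steps
    of the per-tensor scale, so E[quantized] == g. With levels=1 this
    degenerates to a scaled binary (sign-magnitude) quantizer."""
    rng = rng or np.random.default_rng()
    s = np.abs(g).max()
    if s == 0.0:
        return np.zeros_like(g, dtype=float)
    r = np.abs(g) / s * levels                    # position in [0, levels]
    low = np.floor(r)
    q = low + (rng.random(g.shape) < (r - low))   # stochastic rounding
    return np.sign(g) * q * s / levels
```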
This list is automatically generated from the titles and abstracts of the papers on this site.