NVTC: Nonlinear Vector Transform Coding
- URL: http://arxiv.org/abs/2305.16025v1
- Date: Thu, 25 May 2023 13:06:38 GMT
- Title: NVTC: Nonlinear Vector Transform Coding
- Authors: Runsen Feng, Zongyu Guo, Weiping Li, Zhibo Chen
- Abstract summary: In theory, vector quantization (VQ) is always better than scalar quantization (SQ) in terms of rate-distortion (R-D) performance.
Recent state-of-the-art methods for neural image compression are mainly based on nonlinear transform coding (NTC) with uniform scalar quantization.
We propose a novel framework for neural image compression named Nonlinear Vector Transform Coding (NVTC).
- Score: 35.10187626615328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In theory, vector quantization (VQ) is always better than scalar quantization
(SQ) in terms of rate-distortion (R-D) performance. Recent state-of-the-art
methods for neural image compression are mainly based on nonlinear transform
coding (NTC) with uniform scalar quantization, overlooking the benefits of VQ
due to its exponentially increased complexity. In this paper, we first
investigate some toy sources, demonstrating that even if modern neural
networks considerably enhance the compression performance of SQ with nonlinear
transform, there is still an insurmountable chasm between SQ and VQ. Therefore,
revolving around VQ, we propose a novel framework for neural image compression
named Nonlinear Vector Transform Coding (NVTC). NVTC solves the critical
complexity issue of VQ through (1) a multi-stage quantization strategy and (2)
nonlinear vector transforms. In addition, we apply entropy-constrained VQ in
latent space to adaptively determine the quantization boundaries for joint
rate-distortion optimization, which improves the performance both theoretically
and experimentally. Compared to previous NTC approaches, NVTC demonstrates
superior rate-distortion performance, faster decoding speed, and smaller model
size. Our code is available at https://github.com/USTC-IMCL/NVTC
Related papers
- Learning Optimal Lattice Vector Quantizers for End-to-end Neural Image Compression [16.892815659154053]
Lattice vector quantization (LVQ) presents a compelling alternative to uniform scalar quantization, as it can exploit inter-feature dependencies more effectively.
Traditional LVQ structures are designed/optimized for uniform source distributions.
We propose a novel learning method to overcome this weakness by designing the rate-distortion optimal lattice vector quantization codebooks.
arXiv Detail & Related papers (2024-11-25T06:05:08Z)
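For reference on the lattice vector quantization entry above: a lattice quantizer stores no explicit codebook; it rounds the input in the basis of a generator matrix. The sketch below uses Babai-style rounding, which matches exact nearest-lattice-point search only for simple (e.g., rectangular) generators, and the 2-D generator matrix here is an illustrative assumption rather than a learned codebook from the paper.
```python
# Illustrative lattice vector quantization (not the paper's learned lattices):
# quantize x to the lattice {B @ k : k integer} by rounding in the lattice basis.
import numpy as np

def lattice_quantize(x, B):
    k = np.rint(np.linalg.solve(B, x))   # integer coordinates in the lattice basis
    return B @ k, k                      # reconstruction and indices to entropy-code

B = np.array([[1.0, 0.5],                # assumed 2-D generator (hexagonal-like)
              [0.0, np.sqrt(3) / 2]])
x = np.array([0.9, 1.3])
x_hat, k = lattice_quantize(x, B)
print(k, x_hat, float(np.sum((x - x_hat) ** 2)))
```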
- Variable-size Symmetry-based Graph Fourier Transforms for image compression [65.7352685872625]
We propose a new family of Symmetry-based Graph Fourier Transforms (SBGFTs) of variable sizes and integrate them into a coding framework.
Our proposed algorithm generates symmetric graphs on the grid by adding specific symmetrical connections between nodes.
Experiments show that SBGFTs outperform the primary transforms integrated in the explicit Multiple Transform Selection.
arXiv Detail & Related papers (2024-11-24T13:00:44Z)
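For context on the graph Fourier transform entry above: a GFT is simply the eigenbasis of a graph Laplacian applied to a signal on the nodes; the paper's symmetry-based construction is not reproduced here. The sketch below builds a small path graph with one extra symmetric connection (an assumed toy example) and transforms a signal with the resulting Laplacian eigenvectors.
```python
# Generic graph Fourier transform sketch (not the SBGFT construction itself):
# eigenvectors of the graph Laplacian serve as the transform basis.
import numpy as np

n = 4
W = np.zeros((n, n))
for i in range(n - 1):                    # 4-node path graph
    W[i, i + 1] = W[i + 1, i] = 1.0
W[0, n - 1] = W[n - 1, 0] = 1.0           # assumed extra symmetric connection
L = np.diag(W.sum(axis=1)) - W            # combinatorial Laplacian
freqs, U = np.linalg.eigh(L)              # basis ordered by graph frequency

x = np.array([1.0, 2.0, 2.0, 1.0])        # toy signal on the nodes
coeffs = U.T @ x                          # forward GFT
x_rec = U @ coeffs                        # inverse GFT
print(np.round(freqs, 3), np.round(coeffs, 3), bool(np.allclose(x, x_rec)))
```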
- Optimal depth and a novel approach to variational quantum process tomography [11.496254312838659]
We present two new methods for Variational Quantum Circuit (VQC) process tomography on $n$-qubit systems: PT_VQC and U-VQSVD.
Compared to the state of the art, PT_VQC halves the number of qubits required per run for process tomography.
U-VQSVD outperforms an uninformed attack (using randomly generated input states) by a factor of 2 to 5, depending on the qubit dimension.
arXiv Detail & Related papers (2024-04-25T11:58:06Z)
- Approaching Rate-Distortion Limits in Neural Compression with Lattice Transform Coding [33.377272636443344]
Typical neural compression design involves transforming the source to a latent vector, which is then rounded to integers and entropy coded.
We show that this integer-rounding approach is highly sub-optimal on i.i.d. sequences and in fact always recovers scalar quantization of the original source sequence.
By employing lattice quantization instead of scalar quantization in the latent space, we demonstrate that Lattice Transform Coding (LTC) is able to recover optimal vector quantization at various dimensions.
arXiv Detail & Related papers (2024-03-12T05:09:25Z)
- Soft Convex Quantization: Revisiting Vector Quantization with Convex Optimization [40.1651740183975]
We propose Soft Convex Quantization (SCQ) as a direct substitute for Vector Quantization (VQ).
SCQ works like a differentiable convex optimization (DCO) layer.
We demonstrate its efficacy on the CIFAR-10, GTSRB and LSUN datasets.
arXiv Detail & Related papers (2023-10-04T17:45:14Z)
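To make the soft convex quantization entry above concrete: the core idea is to represent the encoder output as a convex combination of codebook vectors rather than a single hard assignment. The sketch below is a plain projected-gradient stand-in, not the paper's differentiable convex optimization layer; the codebook, dimensions, and solver choice are illustrative assumptions.
```python
# Sketch of the soft convex quantization idea (a stand-in, not the paper's DCO layer):
# solve a small simplex-constrained least-squares problem with projected gradient.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def soft_convex_quantize(z, C, steps=500):
    """Minimize ||C.T @ w - z||^2 over w in the simplex; return weights and C.T @ w."""
    w = np.full(C.shape[0], 1.0 / C.shape[0])
    lr = 1.0 / (2.0 * np.linalg.norm(C, ord=2) ** 2)   # safe step size
    for _ in range(steps):
        grad = 2.0 * C @ (C.T @ w - z)
        w = project_simplex(w - lr * grad)
    return w, C.T @ w

rng = np.random.default_rng(1)
C = rng.normal(size=(8, 4))                # assumed codebook: 8 codewords of dim 4
z = rng.normal(size=4)
w, z_soft = soft_convex_quantize(z, C)
print(np.round(w, 3), float(np.sum((z - z_soft) ** 2)))
```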
- Pre-training Tensor-Train Networks Facilitates Machine Learning with Variational Quantum Circuits [70.97518416003358]
Variational quantum circuits (VQCs) hold promise for quantum machine learning on noisy intermediate-scale quantum (NISQ) devices.
While tensor-train networks (TTNs) can enhance VQC representation and generalization, the resulting hybrid model, TTN-VQC, faces optimization challenges due to the Polyak-Lojasiewicz (PL) condition.
To mitigate this challenge, we introduce Pre+TTN-VQC, a pre-trained TTN model combined with a VQC.
arXiv Detail & Related papers (2023-05-18T03:08:18Z)
- LVQAC: Lattice Vector Quantization Coupled with Spatially Adaptive Companding for Efficient Learned Image Compression [24.812267280543693]
We present a novel Lattice Vector Quantization scheme coupled with a spatially Adaptive Companding (LVQAC) mapping.
For any end-to-end CNN image compression model, replacing the uniform quantizer with LVQAC achieves better rate-distortion performance without significantly increasing model complexity.
arXiv Detail & Related papers (2023-03-25T23:34:15Z)
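For the companding part of the LVQAC entry above: companding applies an element-wise nonlinearity before a simple quantizer and its inverse after, so the effective quantization bins become finer where signal values are small and dense. The sketch below uses a classic mu-law style compander around a uniform quantizer; LVQAC's learned, spatially adaptive mapping and its lattice quantizer are not reproduced here.
```python
# Companded uniform quantization sketch (mu-law style), illustrating the
# companding idea only; LVQAC's learned, spatially adaptive mapping differs.
import numpy as np

def compand(x, mu=255.0):
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def expand(y, mu=255.0):
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def companded_quantize(x, step=0.1, mu=255.0):
    y = compand(x, mu)                     # warp the signal
    y_hat = step * np.round(y / step)      # uniform quantization in the warped domain
    return expand(y_hat, mu)               # map back to the original domain

x = np.array([-0.8, -0.05, 0.01, 0.3, 0.9])
print(np.round(companded_quantize(x), 4))
```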
- Theoretical Error Performance Analysis for Variational Quantum Circuit Based Functional Regression [83.79664725059877]
In this work, we put forth an end-to-end quantum neural network, namely, TTN-VQC, for dimensionality reduction and functional regression.
We also characterize the optimization properties of TTN-VQC by leveraging the Polyak-Lojasiewicz (PL) condition.
arXiv Detail & Related papers (2022-06-08T06:54:07Z)
- Characterizing the loss landscape of variational quantum circuits [77.34726150561087]
We introduce a way to compute the Hessian of the loss function of VQCs.
We show how this information can be interpreted and compared to classical neural networks.
arXiv Detail & Related papers (2020-08-06T17:48:12Z)
- Kernel Quantization for Efficient Network Compression [59.55192551370948]
Kernel Quantization (KQ) aims to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version without significant performance loss.
Inspired by the evolution from weight pruning to filter pruning, we propose to quantize at both the kernel and weight levels.
Experiments on the ImageNet classification task show that KQ needs, on average, 1.05 and 1.62 bits to represent each convolution-layer parameter in VGG and ResNet18, respectively.
arXiv Detail & Related papers (2020-03-11T08:00:04Z)
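To make the kernel-level part of the kernel quantization entry above concrete, the sketch below clusters flattened 3x3 convolution kernels with a few Lloyd iterations and replaces each kernel by its nearest centroid, so only centroid indices plus a small codebook need to be stored. The layer shape, codebook size, and plain k-means are illustrative assumptions; the paper's KQ scheme also quantizes at the weight level and reports its own bit allocations.
```python
# Kernel-level quantization sketch (illustrative, not the paper's full KQ scheme):
# cluster flattened 3x3 kernels and store centroid indices plus a small codebook.
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

conv_w = np.random.default_rng(2).normal(size=(32, 16, 3, 3))   # assumed conv layer
kernels = conv_w.reshape(-1, 9)                                 # one row per 3x3 kernel
codebook, idx = kmeans(kernels, k=64)
w_quant = codebook[idx].reshape(conv_w.shape)                   # rebuild the layer
bits_per_kernel = np.log2(len(codebook))                        # index cost (6 bits here)
print(w_quant.shape, bits_per_kernel, float(np.mean((conv_w - w_quant) ** 2)))
```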