Soft Convex Quantization: Revisiting Vector Quantization with Convex Optimization
- URL: http://arxiv.org/abs/2310.03004v1
- Date: Wed, 4 Oct 2023 17:45:14 GMT
- Title: Soft Convex Quantization: Revisiting Vector Quantization with Convex Optimization
- Authors: Tanmay Gautam, Reid Pryzant, Ziyi Yang, Chenguang Zhu, Somayeh Sojoudi
- Abstract summary: We propose Soft Convex Quantization (SCQ) as a direct substitute for Vector Quantization (VQ).
SCQ works like a differentiable convex optimization (DCO) layer.
We demonstrate its efficacy on the CIFAR-10, GTSRB and LSUN datasets.
- Score: 40.1651740183975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vector Quantization (VQ) is a well-known technique in deep learning for
extracting informative discrete latent representations. VQ-embedded models have
shown impressive results in a range of applications including image and speech
generation. VQ operates as a parametric K-means algorithm that quantizes inputs
using a single codebook vector in the forward pass. While powerful, this
technique faces practical challenges including codebook collapse,
non-differentiability and lossy compression. To mitigate the aforementioned
issues, we propose Soft Convex Quantization (SCQ) as a direct substitute for
VQ. SCQ works like a differentiable convex optimization (DCO) layer: in the
forward pass, we solve for the optimal convex combination of codebook vectors
that quantize the inputs. In the backward pass, we leverage differentiability
through the optimality conditions of the forward solution. We then introduce a
scalable relaxation of the SCQ optimization and demonstrate its efficacy on the
CIFAR-10, GTSRB and LSUN datasets. We train powerful SCQ autoencoder models
that significantly outperform matched VQ-based architectures, observing an
order of magnitude better image reconstruction and codebook usage with
comparable quantization runtime.
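To make the contrast concrete, here is a minimal NumPy sketch of the two forward passes: hard VQ snaps each input to its single nearest codevector, while a soft convex quantizer solves for simplex-constrained combination weights over the whole codebook. The projected-gradient solver below is a toy stand-in for the paper's differentiable convex optimization layer and its scalable relaxation; all function names are illustrative.

```python
import numpy as np

def hard_vq(z, codebook):
    """Standard VQ forward pass: snap each input to its nearest codevector."""
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, K)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def soft_convex_quantize(z, codebook, steps=500, lr=0.01):
    """Toy SCQ-style forward pass: for each input z_i, approximately solve
        min_phi ||phi @ codebook - z_i||^2  s.t.  phi >= 0, sum(phi) = 1
    by projected gradient descent. The paper instead solves this as a
    differentiable convex optimization (DCO) layer and backpropagates
    through its optimality conditions."""
    n, K = z.shape[0], codebook.shape[0]
    phi = np.full((n, K), 1.0 / K)                   # start at simplex center
    for _ in range(steps):
        grad = (phi @ codebook - z) @ codebook.T     # least-squares gradient
        phi = np.apply_along_axis(project_simplex, 1, phi - lr * grad)
    return phi @ codebook, phi

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))                  # K=16 codes, d=8
z = rng.normal(size=(4, 8))
zq_hard, _ = hard_vq(z, codebook)
zq_soft, phi = soft_convex_quantize(z, codebook)
print("hard VQ error: ", np.linalg.norm(z - zq_hard))
print("soft SCQ error:", np.linalg.norm(z - zq_soft))  # convex hull fits better
```

Because the convex hull of the codebook contains every individual codevector, the soft combination can never quantize worse than the hard assignment, which is one way to see why reconstruction improves.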
Related papers
- HyperVQ: MLR-based Vector Quantization in Hyperbolic Space [56.4245885674567]
We study the use of hyperbolic spaces for vector quantization (HyperVQ).
We show that HyperVQ performs comparably in reconstruction and generative tasks while outperforming VQ in discriminative tasks and learning a highly disentangled latent space.
arXiv Detail & Related papers (2024-03-18T03:17:08Z)
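The core change can be sketched by swapping VQ's Euclidean nearest-neighbour search for a hyperbolic metric. Below is a minimal NumPy version using the Poincare-ball distance as an illustrative assumption; the actual HyperVQ formulation is built on hyperbolic multinomial logistic regression, which this toy version omits.

```python
import numpy as np

def poincare_dist(u, v, eps=1e-9):
    """Geodesic distance in the Poincare ball model (points with norm < 1)."""
    duv = np.sum((u - v) ** 2, axis=-1)
    denom = (1.0 - np.sum(u * u, axis=-1)) * (1.0 - np.sum(v * v, axis=-1))
    return np.arccosh(1.0 + 2.0 * duv / (denom + eps))

def hyperbolic_vq(z, codebook):
    """VQ assignment with the Euclidean metric swapped for the hyperbolic one:
    each latent goes to its geodesically nearest codevector."""
    d = poincare_dist(z[:, None, :], codebook[None, :, :])  # (n, K)
    idx = d.argmin(axis=1)
    return codebook[idx], idx

rng = np.random.default_rng(3)
codebook = rng.uniform(-0.3, 0.3, size=(16, 2))  # keep codes inside unit ball
z = rng.uniform(-0.5, 0.5, size=(4, 2))
zq, idx = hyperbolic_vq(z, codebook)
```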
- Finite Scalar Quantization: VQ-VAE Made Simple [26.351016719675766]
We propose to replace vector quantization (VQ) in the latent representation of VQ-VAEs with a simple scheme termed finite scalar quantization (FSQ).
By appropriately choosing the number of dimensions and values each dimension can take, we obtain the same codebook size as in VQ.
We employ FSQ with MaskGIT for image generation, and with UViM for depth estimation, colorization, and panoptic segmentation.
arXiv Detail & Related papers (2023-09-27T09:13:40Z)
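The mechanism is simple enough to sketch in a few lines: each latent dimension is bounded and rounded to a small fixed set of values, and the implicit codebook is the product grid of those values. This is a minimal sketch under stated simplifications, not the paper's exact implementation.

```python
import numpy as np

def fsq_quantize(z, levels=(7, 5, 3)):
    """Finite scalar quantization sketch: bound each latent dimension with
    tanh, then round it to one of `levels[i]` uniformly spaced values. The
    implicit codebook is the product grid, here 7*5*3 = 105 codes. This toy
    version supports odd level counts only (even counts need a half-step
    offset) and omits the straight-through gradient used in training."""
    half = (np.asarray(levels) - 1) / 2.0       # e.g. [3, 2, 1]
    bounded = np.tanh(z) * half                 # per-dim range [-half, half]
    return np.round(bounded) / half             # snap to grid, rescale to [-1, 1]

z = np.random.default_rng(1).normal(size=(4, 3))
print(fsq_quantize(z))  # column i takes at most levels[i] distinct values
```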
- Online Clustered Codebook [100.1650001618827]
We present a simple alternative method for online codebook learning, Clustering VQ-VAE (CVQ-VAE).
Our approach selects encoded features as anchors to update the "dead" codevectors, while optimising the codebooks which are alive via the original loss.
Our CVQ-VAE can be easily integrated into existing models with just a few lines of code.
arXiv Detail & Related papers (2023-07-27T18:31:04Z)
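A minimal sketch of the idea, assuming an exponential moving average of per-code usage: codes whose usage decays below a threshold are treated as dead and re-anchored to encoded features from the current batch. The uniform anchor sampling and the threshold here are placeholders; the paper's anchor selection is more refined.

```python
import numpy as np

def update_usage(usage_ema, assignments, K, decay=0.99):
    """Exponential moving average of per-code assignment frequency."""
    counts = np.bincount(assignments, minlength=K).astype(float)
    return decay * usage_ema + (1 - decay) * counts / counts.sum()

def revive_dead_codes(codebook, usage_ema, batch_feats, thresh=1e-3, rng=None):
    """Codes whose running usage falls below `thresh` are treated as dead
    and re-anchored to encoded features sampled from the current batch."""
    rng = rng or np.random.default_rng()
    dead = usage_ema < thresh
    if dead.any():
        anchors = batch_feats[rng.integers(0, len(batch_feats), dead.sum())]
        codebook[dead] = anchors                # live codes keep training as usual
    return codebook

K, d = 8, 4
rng = np.random.default_rng(4)
codebook = rng.normal(size=(K, d))
usage = np.full(K, 1.0 / K)
usage[5:] = 0.0                                 # pretend codes 5..7 went unused
codebook = revive_dead_codes(codebook, usage, rng.normal(size=(64, d)), rng=rng)
```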
- LVQAC: Lattice Vector Quantization Coupled with Spatially Adaptive Companding for Efficient Learned Image Compression [24.812267280543693]
We present a novel Lattice Vector Quantization scheme coupled with a spatially Adaptive Companding (LVQAC) mapping.
For end-to-end CNN image compression models, replacing the uniform quantizer with LVQAC achieves better rate-distortion performance without significantly increasing model complexity.
arXiv Detail & Related papers (2023-03-25T23:34:15Z)
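As a generic illustration of coupling a lattice quantizer with companding (not the paper's learned, spatially adaptive scheme), the sketch below composes mu-law companding with rounding to a scaled cubic lattice; the constants and lattice choice are assumptions for the example.

```python
import numpy as np

MU = 255.0  # mu-law constant (placeholder; LVQAC learns its companding)

def compand(x, mu=MU):
    """Compress dynamic range before quantization (mu-law)."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def expand(y, mu=MU):
    """Inverse mu-law, applied after dequantization."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def lattice_quantize(x, step=0.1):
    """Round each coordinate to a scaled cubic lattice; practical lattice VQ
    would use a denser lattice (e.g. E8) for better packing."""
    return np.round(x / step) * step

x = np.linspace(-1.0, 1.0, 9)
xq = expand(lattice_quantize(compand(x)))       # compand -> quantize -> expand
print(np.round(xq, 3))
```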
- Vector Quantized Wasserstein Auto-Encoder [57.29764749855623]
We study learning deep discrete representations from the generative viewpoint.
We endow discrete distributions over sequences of codewords and learn a deterministic decoder that transports the distribution over codeword sequences to the data distribution.
We develop further theory connecting this with the clustering viewpoint of the Wasserstein (WS) distance, allowing a better and more controllable clustering solution.
arXiv Detail & Related papers (2023-02-12T13:51:36Z)
- VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder [83.63843671885716]
We propose a VQ-based face restoration method -- VQFR.
VQFR takes advantage of high-quality low-level feature banks extracted from high-quality faces.
To further fuse low-level features from inputs while not "contaminating" the realistic details generated from the VQ codebook, we propose a parallel decoder.
arXiv Detail & Related papers (2022-05-13T17:54:40Z)
- Autoregressive Image Generation using Residual Quantization [40.04085054791994]
We propose a two-stage framework to generate high-resolution images.
The framework consists of Residual-Quantized VAE (RQ-VAE) and RQ-Transformer.
Our approach samples significantly faster than previous AR models while generating high-quality images.
arXiv Detail & Related papers (2022-03-03T11:44:46Z)
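The residual stack is easy to sketch: quantize, subtract, and re-quantize the remainder, so a depth-D stack of codes from one shared codebook approximates each vector far more precisely than a single code. A minimal NumPy version follows; RQ-VAE additionally trains the codebook and pairs this with the RQ-Transformer prior.

```python
import numpy as np

def residual_quantize(z, codebook, depth=4):
    """Residual quantization sketch: quantize, subtract, and re-quantize the
    residual, so `depth` stacked codes from one shared codebook approximate
    each vector far better than a single code."""
    residual, approx, codes = z.copy(), np.zeros_like(z), []
    for _ in range(depth):
        d2 = ((residual[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)                 # nearest code for this residual
        codes.append(idx)
        approx += codebook[idx]
        residual -= codebook[idx]
    return np.stack(codes, axis=1), approx      # (n, depth) code stack

rng = np.random.default_rng(2)
codebook = rng.normal(size=(32, 8))
z = rng.normal(size=(4, 8))
codes, approx = residual_quantize(z, codebook)
print("stacked-code reconstruction error:", np.linalg.norm(z - approx))
```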
- Scaling Quantum Approximate Optimization on Near-term Hardware [49.94954584453379]
We quantify scaling of the expected resource requirements by optimized circuits for hardware architectures with varying levels of connectivity.
We show that the number of measurements, and hence the total time to solution, grows exponentially with problem size and problem graph degree.
These problems may be alleviated by increasing hardware connectivity or by recently proposed modifications to the QAOA that achieve higher performance with fewer circuit layers.
arXiv Detail & Related papers (2022-01-06T21:02:30Z)
- Classically optimal variational quantum algorithms [0.0]
Hybrid quantum-classical algorithms, such as variational quantum algorithms (VQA), are suitable for implementation on NISQ computers.
In this Letter, we expand on an implicit step of VQAs: the classical pre-computation subroutine, which can non-trivially use classical algorithms to simplify, transform, or specify problem-instance-specific variational quantum circuits.
arXiv Detail & Related papers (2021-03-31T13:33:38Z)
- Layer VQE: A Variational Approach for Combinatorial Optimization on Noisy Quantum Computers [5.644434841659249]
We propose an iterative Layer VQE (L-VQE) approach, inspired by the Variational Quantum Eigensolver (VQE).
We show that L-VQE is more robust to finite sampling errors and has a higher chance of finding the solution as compared with standard VQE approaches.
Our simulation results show that L-VQE performs well under realistic hardware noise.
arXiv Detail & Related papers (2021-02-10T16:53:22Z)