Learned transform compression with optimized entropy encoding
- URL: http://arxiv.org/abs/2104.03305v1
- Date: Wed, 7 Apr 2021 17:58:01 GMT
- Title: Learned transform compression with optimized entropy encoding
- Authors: Magda Gregorová, Marc Desaules, Alexandros Kalousis
- Abstract summary: We consider the problem of learned transform compression, where we learn both the transform and the probability distribution over the discrete codes.
We employ a soft relaxation of the quantization operation to allow back-propagation of gradients, and we use vector (rather than scalar) quantization of the latent codes.
- Score: 72.20409648915398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of learned transform compression, where we learn both the transform and the probability distribution over the discrete codes. We utilize a soft relaxation of the quantization operation to allow for back-propagation of gradients and employ vector (rather than scalar) quantization of the latent codes. Furthermore, we apply a similar relaxation in the code probability assignments, enabling direct optimization of the code entropy. To the best of our knowledge, this approach is completely novel. We conduct a set of proof-of-concept experiments confirming the potency of our approaches.
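To make the abstract's recipe concrete, here is a minimal PyTorch sketch of a soft (differentiable) vector quantizer with an entropy penalty on code usage, in the spirit of the soft relaxations described above. The function names, the temperature, and the weight beta are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_vector_quantize(z, codebook, temperature=1.0):
    """Softly assign latent vectors to codebook entries.

    z:        (batch, d)  latent vectors produced by the encoder
    codebook: (K, d)      learnable vector-quantization centers
    Returns soft-quantized latents and the soft assignment probabilities.
    """
    # Squared Euclidean distance between each latent and each codebook entry.
    dists = torch.cdist(z, codebook) ** 2            # (batch, K)
    # Soft relaxation of the arg-min assignment: differentiable w.r.t. z and codebook.
    probs = F.softmax(-dists / temperature, dim=-1)  # (batch, K)
    z_soft = probs @ codebook                        # (batch, d)
    return z_soft, probs

def rate_distortion_loss(x, x_hat, probs, beta=0.1):
    """Distortion plus a differentiable estimate of the code entropy (rate)."""
    distortion = F.mse_loss(x_hat, x)
    # Soft code-usage frequencies over the batch stand in for the code distribution.
    p = probs.mean(dim=0).clamp_min(1e-9)            # (K,)
    entropy = -(p * p.log()).sum()                   # in nats
    return distortion + beta * entropy
```

At inference time the soft assignment would typically be replaced by hard nearest-codeword quantization, with the learned code distribution driving an arithmetic or range coder.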
Related papers
- Flattened one-bit stochastic gradient descent: compressed distributed optimization with controlled variance [55.01966743652196]
We propose a novel algorithm for distributed stochastic gradient descent (SGD) with compressed gradient communication in the parameter-server framework.
Our gradient compression technique, named flattened one-bit stochastic gradient descent (FO-SGD), relies on two simple algorithmic ideas.
arXiv Detail & Related papers (2024-05-17T21:17:27Z)
- Projective squeezing for translation symmetric bosonic codes [0.16777183511743468]
We introduce the projective squeezing (PS) method for computing outcomes for a higher squeezing level.
We numerically verify our analytical arguments and show that our protocol can mitigate the effect of photon loss.
arXiv Detail & Related papers (2024-03-21T08:19:47Z)
- Approaching Rate-Distortion Limits in Neural Compression with Lattice Transform Coding [33.377272636443344]
Typical neural compression designs transform the source to a latent vector, which is then rounded to integers and entropy coded.
We show that this design is highly sub-optimal on i.i.d. sequences and in fact always recovers scalar quantization of the original source sequence.
By employing lattice quantization instead of scalar quantization in the latent space, we demonstrate that Lattice Transform Coding (LTC) is able to recover optimal vector quantization at various dimensions (see the lattice-quantization sketch after this list).
arXiv Detail & Related papers (2024-03-12T05:09:25Z)
- Quick Dense Retrievers Consume KALE: Post Training Kullback Leibler Alignment of Embeddings for Asymmetrical dual encoders [89.29256833403169]
We introduce Kullback Leibler Alignment of Embeddings (KALE), an efficient and accurate method for increasing the inference efficiency of dense retrieval methods.
KALE extends traditional Knowledge Distillation after bi-encoder training, allowing for effective query encoder compression without full retraining or index generation.
Using KALE and asymmetric training, we can generate models which exceed the performance of DistilBERT despite having 3x faster inference.
arXiv Detail & Related papers (2023-03-31T15:44:13Z)
- LVQAC: Lattice Vector Quantization Coupled with Spatially Adaptive Companding for Efficient Learned Image Compression [24.812267280543693]
We present a novel lattice vector quantization scheme coupled with a spatially adaptive companding (LVQAC) mapping.
For any end-to-end CNN image compression model, replacing the uniform quantizer with LVQAC achieves better rate-distortion performance without significantly increasing model complexity.
arXiv Detail & Related papers (2023-03-25T23:34:15Z)
- Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression [151.3826781154146]
Modeling latent variables with priors and hyperpriors is an essential problem in variational image compression.
We find that inter-correlations and intra-correlations exist when latent variables are observed from a vectorized perspective.
Our model has better rate-distortion performance and an impressive $3.18\times$ compression speed-up.
arXiv Detail & Related papers (2022-03-21T11:44:17Z)
- End-to-end optimized image compression with competition of prior distributions [29.585370305561582]
We propose a compression scheme that uses a single convolutional autoencoder and multiple learned prior distributions.
Our method offers rate-distortion performance comparable to that obtained with a predicted parametrized prior.
arXiv Detail & Related papers (2021-11-17T15:04:01Z)
- Sigma-Delta and Distributed Noise-Shaping Quantization Methods for Random Fourier Features [73.25551965751603]
We prove that our quantized RFFs allow a high accuracy approximation of the underlying kernels.
We show that the quantized RFFs can be further compressed, yielding an excellent trade-off between memory use and accuracy.
By testing our methods on several machine learning tasks, we empirically show that they compare favorably to other state-of-the-art quantization methods in this context.
arXiv Detail & Related papers (2021-06-04T17:24:47Z)
- Unfolding Neural Networks for Compressive Multichannel Blind Deconvolution [71.29848468762789]
We propose a learned-structured unfolding neural network for the problem of compressive sparse multichannel blind-deconvolution.
In this problem, each channel's measurements are given as the convolution of a common source signal with a sparse filter.
We demonstrate that our method is superior to classical structured compressive sparse multichannel blind-deconvolution methods in terms of accuracy and speed of sparse filter recovery.
arXiv Detail & Related papers (2020-10-22T02:34:33Z)
- Continual Learning from the Perspective of Compression [28.90542302130312]
Connectionist models such as neural networks suffer from catastrophic forgetting.
We propose a new continual learning method that combines the maximum likelihood (ML) plug-in and Bayesian mixture codes.
arXiv Detail & Related papers (2020-06-26T16:15:49Z)
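Both the LTC and LVQAC entries above hinge on replacing per-dimension scalar rounding with lattice quantization of the latent space. The standalone NumPy sketch below illustrates the underlying gain on a toy example that is not taken from either paper: at equal codeword density, a hexagonal (A2) lattice quantizer in 2-D achieves roughly 4% lower mean squared error than square-lattice (scalar) rounding. The choice of lattice, the uniform test data, and the brute-force local search are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-50.0, 50.0, size=(200_000, 2))     # 2-D points standing in for latents

# Square lattice: scalar quantization applied per dimension, unit cell area 1.
x_sq = np.round(x)

# Hexagonal (A2) lattice, rescaled so its cell area is also 1 (equal codeword density).
G = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3.0) / 2.0]])            # rows are lattice basis vectors
G *= 1.0 / np.sqrt(np.linalg.det(G))                 # scale determinant to 1

def nearest_hex_point(x, G):
    """Nearest point of the lattice with (row) basis G, found by a small local search."""
    u = x @ np.linalg.inv(G)                         # coordinates in the lattice basis
    u0 = np.round(u)
    best = None
    best_d = np.full(len(x), np.inf)
    for di in (-1, 0, 1):                            # for this reduced basis the nearest
        for dj in (-1, 0, 1):                        # point is among these 9 candidates
            cand = (u0 + np.array([di, dj])) @ G
            d = np.sum((x - cand) ** 2, axis=1)
            take = d < best_d
            best_d = np.where(take, d, best_d)
            best = cand if best is None else np.where(take[:, None], cand, best)
    return best

x_hex = nearest_hex_point(x, G)

mse_sq = np.mean(np.sum((x - x_sq) ** 2, axis=1)) / 2    # per-dimension MSE
mse_hex = np.mean(np.sum((x - x_hex) ** 2, axis=1)) / 2
print(f"square lattice MSE/dim:    {mse_sq:.4f}")        # ~0.0833 (= 1/12)
print(f"hexagonal lattice MSE/dim: {mse_hex:.4f}")       # ~0.0802, about 4% lower
```

The gap widens in higher dimensions, which is what motivates lattice or vector quantization of learned latents in the first place.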
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.