End-to-end optimized image compression with competition of prior distributions
- URL: http://arxiv.org/abs/2111.09172v1
- Date: Wed, 17 Nov 2021 15:04:01 GMT
- Title: End-to-end optimized image compression with competition of prior distributions
- Authors: Benoit Brummer and Christophe De Vleeschouwer
- Abstract summary: We propose a compression scheme that uses a single convolutional autoencoder and multiple learned prior distributions.
Our method offers rate-distortion performance comparable to that obtained with a predicted parametrized prior.
- Score: 29.585370305561582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional autoencoders are now at the forefront of image compression research. To improve their entropy coding, encoder output is typically analyzed with a second autoencoder to generate per-variable parametrized prior probability distributions. We instead propose a compression scheme that uses a single convolutional autoencoder and multiple learned prior distributions working as a competition of experts. Trained prior distributions are stored in a static table of cumulative distribution functions. During inference, this table is used by an entropy coder as a look-up table to determine the best prior for each spatial location. Our method offers rate-distortion performance comparable to that obtained with a predicted parametrized prior, with only a fraction of its entropy coding and decoding complexity.
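As a rough illustration of this competition of experts, here is a minimal NumPy sketch under assumed shapes and names (pmf, K, S, and the latent sizes are illustrative, not the authors' code): each of K static priors prices the latent vector at every spatial location, and the entropy coder keeps the cheapest prior plus the index that identifies it.

```python
# Minimal sketch of prior competition: pick, per spatial location, the stored
# prior that codes the latent vector there in the fewest bits. All shapes and
# names are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

K, C, S = 8, 192, 64           # number of priors, latent channels, symbols
H, W = 16, 16                  # spatial size of the quantized latent tensor

# Stand-in for trained priors: pmf[k, c, s] = P(symbol s | channel c, prior k).
pmf = rng.dirichlet(np.ones(S), size=(K, C))

# Quantized latents, already mapped to symbol indices in [0, S).
z = rng.integers(0, S, size=(C, H, W))

# Code length in bits of each spatial location under each prior:
# the sum over channels of -log2 P(z[c, h, w] | c, k).
bits = np.empty((K, H, W))
chan = np.arange(C)[:, None, None]
for k in range(K):
    bits[k] = -np.log2(pmf[k, chan, z]).sum(axis=0)

best = bits.argmin(axis=0)                # winning prior index per location
rate = bits.min(axis=0).sum() + H * W * np.log2(K)  # plus bits for the indices

print(f"estimated rate: {rate / (H * W):.1f} bits per spatial location")
```

In the actual scheme the table stores cumulative distribution functions for direct use by the range coder; PMFs are used above only to keep the rate computation short.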
Related papers
- Learned Compression of Encoding Distributions [1.4732811715354455]
The entropy bottleneck is a common component of many learned compression models.
We propose a method that adapts the encoding distribution to match the latent data distribution for a specific input; a schematic sketch follows this entry.
Our method achieves a Bjontegaard-Delta (BD)-rate gain of -7.10% on the Kodak test dataset.
arXiv Detail & Related papers (2024-06-18T21:05:51Z)
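The summary above is terse, so the following sketch only illustrates the general principle as I read it (the histogram fit and Laplace smoothing are my assumptions, not the paper's method): a distribution adapted to one input's latent statistics codes that input in fewer bits than a fixed global one.

```python
# Schematic comparison: coding cost under a fixed global distribution versus
# a distribution adapted to this input's empirical statistics. Assumed setup.
import numpy as np

rng = np.random.default_rng(1)
S = 32                                    # symbol alphabet size

global_pmf = rng.dirichlet(np.ones(S))    # distribution fixed at training time
true_pmf = rng.dirichlet(np.ones(S))      # this input's actual latent statistics
z = rng.choice(S, size=10_000, p=true_pmf)

counts = np.bincount(z, minlength=S)
adapted_pmf = (counts + 1) / (counts.sum() + S)   # smoothed empirical fit

def rate(pmf, symbols):
    """Average ideal code length in bits per symbol."""
    return -np.log2(pmf[symbols]).mean()

print(f"fixed global distribution:  {rate(global_pmf, z):.3f} bits/symbol")
print(f"input-adapted distribution: {rate(adapted_pmf, z):.3f} bits/symbol")
```

A real codec must also signal the adapted distribution to the decoder, which is part of the trade-off such a method has to win.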
- Correcting Diffusion-Based Perceptual Image Compression with Privileged End-to-End Decoder [49.01721042973929]
This paper presents a diffusion-based image compression method that employs a privileged end-to-end decoder model as a correction.
Experiments demonstrate the superiority of our method in both distortion and perception compared with previous perceptual compression methods.
arXiv Detail & Related papers (2024-04-07T10:57:54Z)
- Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z)
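A hedged sketch of the claim in the entry above (the setup is assumed, not the authors' construction): a purely linear shallow autoencoder reconstructs a sparse vector with dense error, while a simple denoising nonlinearity such as soft-thresholding exploits the sparsity.

```python
# Compare a linear decoder with the same decoder followed by a denoising
# nonlinearity on sparse inputs. Dimensions and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 256, 64, 5                     # ambient dim, code dim, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random linear encoder
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

code = A @ x                             # compressed code (m << n)
x_lin = A.T @ code                       # linear "transpose" decoder

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x_den = soft_threshold(x_lin, 0.3)       # denoising function on the output

print(f"linear decoder error:   {np.linalg.norm(x - x_lin):.3f}")
print(f"with soft-thresholding: {np.linalg.norm(x - x_den):.3f}")
```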
- Entroformer: A Transformer-based Entropy Model for Learned Image Compression [17.51693464943102]
We propose a novel transformer-based entropy model, termed Entroformer, to capture long-range dependencies in probability distribution estimation.
The experiments show that the Entroformer achieves state-of-the-art performance on image compression while being time-efficient.
arXiv Detail & Related papers (2022-02-11T08:03:31Z)
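As a rough picture of a transformer-based entropy model (an assumed miniature, not the Entroformer architecture): a causally masked transformer predicts a Gaussian mean and scale for each latent from the previously decoded ones, and the rate is the negative log-probability of the rounding bin.

```python
# Toy autoregressive transformer entropy model. Architecture choices here are
# placeholders; only the overall pattern follows the entry above.
import torch
import torch.nn as nn

class TransformerEntropyModel(nn.Module):
    def __init__(self, d_model=128, n_tokens=256):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        self.pos = nn.Parameter(torch.zeros(n_tokens, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)        # per-token (mean, log-scale)

    def forward(self, z):                        # z: (B, N) quantized latents
        B, N = z.shape
        prev = torch.cat([torch.zeros(B, 1), z[:, :-1]], dim=1)  # shift right
        h = self.embed(prev.unsqueeze(-1)) + self.pos[:N]
        mask = torch.triu(torch.full((N, N), float("-inf")), diagonal=1)
        h = self.encoder(h, mask=mask)           # token i conditions on z[<i]
        mean, log_scale = self.head(h).unbind(-1)
        return mean, log_scale.exp()

def rate_bits(z, mean, scale):
    """Bits to code z under N(mean, scale), integrated over the rounding bin."""
    d = torch.distributions.Normal(mean, scale)
    p = d.cdf(z + 0.5) - d.cdf(z - 0.5)
    return (-torch.log2(p.clamp_min(1e-9))).sum()

model = TransformerEntropyModel()
z = torch.round(torch.randn(1, 256) * 3.0)       # stand-in quantized latents
mean, scale = model(z)
print(f"{rate_bits(z, mean, scale).item():.0f} bits for {z.numel()} latents")
```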
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
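A minimal sketch of the INR pipeline idea under assumed details (network size, quantization step, and the synthetic image are placeholders): overfit a coordinate MLP to one image and quantize its weights, which become the code; the quantization-aware retraining and entropy coding stages the entry mentions are omitted here.

```python
# Fit a small coordinate->RGB MLP to a single image, then quantize its
# weights. In a full pipeline the quantized weights would be entropy coded.
import torch
import torch.nn as nn

ys, xs = torch.meshgrid(torch.linspace(-1, 1, 32),
                        torch.linspace(-1, 1, 32), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
# Smooth synthetic stand-in image in [0, 1]; a real pipeline loads a photo.
img = torch.stack([0.5 + 0.5 * xs, 0.5 + 0.5 * ys, 0.5 + 0.5 * xs * ys], dim=-1)
target = img.reshape(-1, 3)

inr = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 3), nn.Sigmoid())
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for _ in range(500):                      # overfit the single image
    opt.zero_grad()
    loss = nn.functional.mse_loss(inr(coords), target)
    loss.backward()
    opt.step()

with torch.no_grad():                     # ~8-bit uniform weight quantization
    for p in inr.parameters():
        p.copy_(torch.round(p * 128) / 128)
    psnr = -10 * torch.log10(nn.functional.mse_loss(inr(coords), target))
print(f"PSNR after weight quantization: {psnr.item():.1f} dB")
```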
- End-to-End Image Compression with Probabilistic Decoding [31.38636002751645]
We propose a learned image compression framework to support probabilistic decoding.
The proposed framework relies on a revertible (invertible) neural network-based transform to convert pixels into coefficients.
arXiv Detail & Related papers (2021-09-30T04:07:09Z)
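To illustrate what a revertible transform buys, here is an additive coupling layer as a stand-in (my assumption, not the paper's network): the mapping from pixels to coefficients can be inverted exactly, so the decoder recovers pixels from decoded coefficients without a separately trained inverse.

```python
# An additive coupling layer: y1 = x1, y2 = x2 + f(x1). It is exactly
# invertible for any function f, which is the property the entry relies on.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, x1, x2):            # pixels -> coefficients
        return x1, x2 + self.f(x1)

    def inverse(self, y1, y2):            # coefficients -> pixels
        return y1, y2 - self.f(y1)

layer = AdditiveCoupling(8)
x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
y1, y2 = layer(x1, x2)
r1, r2 = layer.inverse(y1, y2)
print(torch.allclose(x1, r1), torch.allclose(x2, r2))
```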
- Sequential Encryption of Sparse Neural Networks Toward Optimum Representation of Irregular Sparsity [9.062897838978955]
We study fixed-to-fixed encryption architecture/algorithm to support fine-grained pruning methods.
We demonstrate that our proposed compression scheme achieves almost the maximum compression ratio for the Transformer and ResNet-50.
arXiv Detail & Related papers (2021-05-05T05:14:50Z)
- Unfolding Neural Networks for Compressive Multichannel Blind Deconvolution [71.29848468762789]
We propose a learned-structured unfolding neural network for the problem of compressive sparse multichannel blind-deconvolution.
In this problem, each channel's measurements are given as the convolution of a common source signal and a sparse filter.
We demonstrate that our method is superior to classical structured compressive sparse multichannel blind-deconvolution methods in terms of accuracy and speed of sparse filter recovery.
arXiv Detail & Related papers (2020-10-22T02:34:33Z)
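The measurement model admits a short sketch (assumed details; the source is treated as known here to isolate the sparse-filter update, whereas the paper's setting is blind and compressive): each channel observes the circular convolution of a common source with a sparse filter, and one unrolled ISTA-style step, whose step size and threshold an unfolded network would learn, already concentrates energy on the filter support.

```python
# Measurement model y_i = s * h_i and one unrolled ISTA-style update for the
# sparse filters. Step size and threshold are fixed here; unfolding learns them.
import numpy as np

rng = np.random.default_rng(3)
n, L, k = 64, 4, 3                       # signal length, channels, sparsity

s = rng.standard_normal(n)               # common source signal
S = np.fft.fft(s)
H = np.zeros((L, n))                     # per-channel sparse filters
for i in range(L):
    H[i, rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Circular convolution in the Fourier domain: y_i = s * h_i.
Y = np.real(np.fft.ifft(S * np.fft.fft(H, axis=1), axis=1))

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Gradient step on ||s * h - y||^2 followed by soft-thresholding.
step = 1.0 / np.abs(S).max() ** 2
H_hat = np.zeros((L, n))
resid = S * np.fft.fft(H_hat, axis=1) - np.fft.fft(Y, axis=1)
grad = np.real(np.fft.ifft(np.conj(S) * resid, axis=1))
H_hat = soft(H_hat - step * grad, 0.05)

print("nonzeros per channel after one step:",
      [int((np.abs(H_hat[i]) > 0).sum()) for i in range(L)])
```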
- Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)
- Learning Better Lossless Compression Using Lossy Compression [100.50156325096611]
We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system.
We model the distribution of the residual with a convolutional neural network-based probabilistic model that is conditioned on the BPG reconstruction.
Finally, the image is stored using the concatenation of the bitstreams produced by BPG and the learned residual coder.
arXiv Detail & Related papers (2020-03-23T11:21:52Z)
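The pipeline in this entry can be outlined directly (file names are placeholders, the learned residual coder is left abstract, and bpgenc/bpgdec are libbpg's command-line tools): compress lossily with BPG, form the residual against the BPG reconstruction, and store the BPG bitstream together with an entropy-coded residual.

```python
# Outline of the BPG-based lossless pipeline. "input.png" and the temporary
# files are placeholders; the learned residual coder is only described in
# comments because its details are not given in the summary above.
import subprocess
import numpy as np
from PIL import Image

def bpg_roundtrip(png_path: str, quality: int = 28) -> np.ndarray:
    """Lossy-compress with BPG and return the decoded reconstruction."""
    subprocess.run(["bpgenc", "-q", str(quality), "-o", "tmp.bpg", png_path],
                   check=True)
    subprocess.run(["bpgdec", "-o", "tmp_rec.png", "tmp.bpg"], check=True)
    return np.asarray(Image.open("tmp_rec.png"), dtype=np.int16)

x = np.asarray(Image.open("input.png"), dtype=np.int16)
x_lossy = bpg_roundtrip("input.png")
residual = x - x_lossy          # what must be stored to make the code lossless

# A CNN conditioned on x_lossy predicts a probability model for `residual`;
# an arithmetic coder writes it, and the final file is the concatenation of
# the BPG bitstream and the residual bitstream. Decoding reverses the steps.
print(f"residual range: [{residual.min()}, {residual.max()}]")
```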
This list is automatically generated from the titles and abstracts of the papers on this site.