Lossless Compression with Latent Variable Models
- URL: http://arxiv.org/abs/2104.10544v2
- Date: Thu, 22 Apr 2021 09:28:41 GMT
- Title: Lossless Compression with Latent Variable Models
- Authors: James Townsend
- Abstract summary: We develop a method for lossless compression using latent variable models, which we call 'bits back with asymmetric numeral systems' (BB-ANS).
The method involves interleaving encode and decode steps, and achieves an optimal rate when compressing batches of data.
We describe 'Craystack', a modular software framework which we have developed for rapid prototyping of compression using deep generative models.
- Score: 4.289574109162585
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We develop a simple and elegant method for lossless compression using latent
variable models, which we call 'bits back with asymmetric numeral systems'
(BB-ANS). The method involves interleaving encode and decode steps, and
achieves an optimal rate when compressing batches of data. We demonstrate it
firstly on the MNIST test set, showing that state-of-the-art lossless
compression is possible using a small variational autoencoder (VAE) model. We
then make use of a novel empirical insight, that fully convolutional generative
models, trained on small images, are able to generalize to images of arbitrary
size, and extend BB-ANS to hierarchical latent variable models, enabling
state-of-the-art lossless compression of full-size colour images from the
ImageNet dataset. We describe 'Craystack', a modular software framework which
we have developed for rapid prototyping of compression using deep generative
models.
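To make the interleaved encode and decode steps concrete, below is a minimal Python sketch of BB-ANS on a toy, fully discrete model. This is an illustration under stated assumptions, not the paper's VAE or Craystack's API: the quantized distributions (prior, lik, post) are made-up numbers, and the rANS state is a Python bignum so stream renormalization is omitted for clarity.

```python
import bisect

PREC = 12               # probabilities quantized to integer counts summing to 2**PREC
TOTAL = 1 << PREC

def codec(counts):
    """Build (push, pop) rANS steps for a distribution given as quantized counts."""
    assert sum(counts) == TOTAL
    cum = [0]
    for c in counts:
        cum.append(cum[-1] + c)

    def push(state, sym):
        # Encode sym onto the state (state grows by ~log2(TOTAL/counts[sym]) bits).
        return (state // counts[sym]) * TOTAL + (state % counts[sym]) + cum[sym]

    def pop(state):
        # Decode (pop) the most recently pushed symbol; exact inverse of push.
        slot = state % TOTAL
        sym = bisect.bisect_right(cum, slot) - 1
        return counts[sym] * (state // TOTAL) + slot - cum[sym], sym

    return push, pop

# Toy discrete model (illustrative numbers, not the paper's VAE):
# latent z in {0, 1}, observation x in {0, 1, 2, 3}.
prior_push, prior_pop = codec([TOTAL // 2, TOTAL // 2])            # p(z)
lik = [codec([TOTAL // 2, TOTAL // 4, TOTAL // 8, TOTAL // 8]),    # p(x | z=0)
       codec([TOTAL // 8, TOTAL // 8, TOTAL // 4, TOTAL // 2])]    # p(x | z=1)
post = [codec([3 * TOTAL // 4, TOTAL // 4]) for _ in range(4)]     # q(z | x), one per x

def bb_ans_encode(state, x):
    state, z = post[x][1](state)    # 1. decode z from the state with q(z|x): 'bits back'
    state = lik[z][0](state, x)     # 2. encode x with p(x|z)
    return prior_push(state, z)     # 3. encode z with p(z)

def bb_ans_decode(state):
    state, z = prior_pop(state)     # 1. decode z with p(z)
    state, x = lik[z][1](state)     # 2. decode x with p(x|z)
    return post[x][0](state, z), x  # 3. re-encode z with q(z|x), returning the bits

# Round trip: ANS is stack-like, so symbols come back in reverse order.
state0 = 1 << 64                    # initial bits on the stack
state = state0
msg = [3, 0, 2, 1]
for x in msg:
    state = bb_ans_encode(state, x)
decoded = []
for _ in msg:
    state, x = bb_ans_decode(state)
    decoded.append(x)
assert decoded[::-1] == msg and state == state0
```

Because ANS behaves as a stack, decoding returns symbols in reverse order, and the initial bits consumed by the 'bits back' step of the first encode are recovered exactly at the end of decoding; that fixed overhead is what gets amortized away when compressing batches of data, giving the optimal rate the abstract refers to.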
Related papers
- Lossless and Near-Lossless Compression for Foundation Models [11.307357041746865]
We investigate the sources of model compressibility, introduce compression variants tailored for models, and categorize models into compressibility groups.
We estimate that these methods could save over an ExaByte per month of network traffic downloaded from a large model hub like HuggingFace.
arXiv Detail & Related papers (2024-04-05T16:52:55Z)
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt image classification systems, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- Progressive Learning with Visual Prompt Tuning for Variable-Rate Image Compression [60.689646881479064]
We propose a progressive learning paradigm for transformer-based variable-rate image compression.
Inspired by visual prompt tuning, we use LPM to extract prompts for input images and hidden features at the encoder side and decoder side, respectively.
Our model outperforms all current variable-rate image compression methods in rate-distortion performance and approaches state-of-the-art fixed-rate image compression methods trained from scratch.
arXiv Detail & Related papers (2023-11-23T08:29:32Z)
- Multiscale Augmented Normalizing Flows for Image Compression [17.441496966834933]
We present a novel concept that adapts the hierarchical latent space to augmented normalizing flows, an invertible latent variable model.
Our best performing model achieved average rate savings of more than 7% over comparable single-scale models.
arXiv Detail & Related papers (2023-05-09T13:42:43Z)
- Lossy Image Compression with Quantized Hierarchical VAEs [33.173021636656465]
ResNet VAEs were originally designed for modeling data (image) distributions.
We present a powerful and efficient model that outperforms previous methods on natural image lossy compression.
Our model compresses images in a coarse-to-fine fashion and supports parallel encoding and decoding.
arXiv Detail & Related papers (2022-08-27T17:15:38Z)
- Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide a Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.
arXiv Detail & Related papers (2022-04-26T01:35:02Z)
- Split Hierarchical Variational Compression [21.474095984110622]
Variational autoencoders (VAEs) have seen great success in compressing image datasets.
SHVC introduces an efficient autoregressive sub-pixel convolution that generalises between per-pixel autoregression and fully factorised probability models.
arXiv Detail & Related papers (2022-04-05T09:13:38Z)
- Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)
- Quantization Guided JPEG Artifact Correction [69.04777875711646]
We develop a novel architecture for artifact correction using the JPEG file's quantization matrix.
This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.
arXiv Detail & Related papers (2020-04-17T00:10:08Z)
- Learning End-to-End Lossy Image Compression: A Benchmark [90.35363142246806]
We first conduct a comprehensive literature survey of learned image compression methods.
We describe milestones in cutting-edge learned image compression methods, review a broad range of existing works, and provide insights into their historical development routes.
By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, we achieve improved rate-distortion performance.
arXiv Detail & Related papers (2020-02-10T13:13:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.