Lossy Image Compression with Quantized Hierarchical VAEs
- URL: http://arxiv.org/abs/2208.13056v2
- Date: Sat, 25 Mar 2023 15:52:29 GMT
- Title: Lossy Image Compression with Quantized Hierarchical VAEs
- Authors: Zhihao Duan, Ming Lu, Zhan Ma, Fengqing Zhu
- Abstract summary: ResNet VAEs were originally designed for data (image) distribution modeling.
We present a powerful and efficient model that outperforms previous methods on natural image lossy compression.
Our model compresses images in a coarse-to-fine fashion and supports parallel encoding and decoding.
- Score: 33.173021636656465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has shown a strong theoretical connection between variational
autoencoders (VAEs) and the rate-distortion theory. Motivated by this, we
consider the problem of lossy image compression from the perspective of
generative modeling. Starting with ResNet VAEs, which were originally designed
for data (image) distribution modeling, we redesign their latent variable model
using a quantization-aware posterior and prior, enabling easy quantization and
entropy coding at test time. Along with improved neural network architecture,
we present a powerful and efficient model that outperforms previous methods on
natural image lossy compression. Our model compresses images in a
coarse-to-fine fashion and supports parallel encoding and decoding, leading to
fast execution on GPUs. Code is available at
https://github.com/duanzhiihao/lossy-vae.
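To make the quantization-aware training idea concrete, here is a minimal sketch of the train/test latent handling common to such models (additive uniform noise during training, hard rounding at test time); the paper's exact posterior and prior design differs.

```python
import torch

def latent_for_coding(z, training):
    # A minimal sketch of quantization-aware latent handling, not the
    # paper's exact posterior/prior: additive uniform noise approximates
    # rounding during training, so gradients can flow through the model.
    if training:
        return z + torch.rand_like(z) - 0.5
    # At test time the latent is hard-rounded; the integer symbols are
    # then entropy-coded under the (quantization-aware) prior.
    return torch.round(z)
```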
Related papers
- Multiscale Augmented Normalizing Flows for Image Compression [17.441496966834933]
We present a novel concept that adapts the hierarchical latent space to augmented normalizing flows, an invertible latent variable model.
Our best performing model achieved average rate savings of more than 7% over comparable single-scale models.
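As background for the invertible latent variable model mentioned above, a generic affine coupling layer (the standard building block of normalizing flows) might look like the sketch below; the paper's multiscale, augmented architecture is more elaborate.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible affine coupling step (a generic sketch, not the
    paper's architecture). `dim` is assumed to be even."""
    def __init__(self, dim):
        super().__init__()
        # small network predicting per-element scale and shift from x1
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(log_s) + t       # invertible given x1
        return torch.cat([x1, y2], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        log_s, t = self.net(y1).chunk(2, dim=-1)
        x2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, x2], dim=-1)
```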
arXiv Detail & Related papers (2023-05-09T13:42:43Z)
- Image Compression with Product Quantized Masked Image Modeling [44.15706119017024]
Recent neural compression methods have been based on the popular hyperprior framework.
It relies on Scalar Quantization and offers very strong compression performance.
This contrasts with recent advances in image generation and representation learning, where Vector Quantization is more commonly employed.
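The distinction reads concretely as follows; this toy NumPy snippet (with a made-up two-entry codebook) contrasts the two quantizers.

```python
import numpy as np

# Scalar quantization: round each latent element independently.
z = np.array([0.7, -1.2, 2.4])
z_sq = np.round(z)                       # -> [ 1., -1.,  2.]

# Vector quantization: snap the whole vector to its nearest codebook entry.
codebook = np.array([[1.0, -1.0, 2.0],
                     [0.0,  0.0, 0.0]])
idx = np.argmin(np.linalg.norm(codebook - z, axis=1))
z_vq = codebook[idx]                     # only the index is transmitted
```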
arXiv Detail & Related papers (2022-12-14T17:50:39Z)
- Lossy Image Compression with Conditional Diffusion Models [25.158390422252097]
This paper outlines an end-to-end optimized lossy image compression framework using diffusion generative models.
In contrast to VAE-based neural compression, where the (mean) decoder is a deterministic neural network, our decoder is a conditional diffusion model.
Our approach yields better FID scores than the GAN-based model, while also achieving competitive performance with VAE-based models on several distortion metrics.
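For intuition, a generic DDPM-style reverse process conditioned on a transmitted latent could look like the sketch below; `eps_model` is an assumed noise-prediction network, and the paper's sampler and conditioning details differ.

```python
import torch

def decode(eps_model, y, betas, shape):
    """Generic conditional DDPM sampling loop (a sketch). `y` is the
    transmitted latent the diffusion decoder is conditioned on."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                        # start from pure noise
    for t in reversed(range(len(betas))):
        eps = eps_model(x, t, y)                  # predicted noise
        x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) \
            / torch.sqrt(alphas[t])
        if t > 0:                                 # add noise except at t=0
            x = x + torch.sqrt(betas[t]) * torch.randn(shape)
    return x                                      # reconstructed image
```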
arXiv Detail & Related papers (2022-09-14T21:53:27Z)
- Video Coding Using Learned Latent GAN Compression [1.6058099298620423]
We leverage the generative capacity of GANs such as StyleGAN to represent and compress a video.
Each frame is inverted in the latent space of StyleGAN, from which the optimal compression is learned.
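GAN inversion of a frame is typically done by optimization; a minimal sketch (with `generator` standing in for a pretrained StyleGAN-like mapping from latents to images) follows.

```python
import torch

def invert_frame(generator, frame, steps=500, lr=0.01):
    """Project one video frame into a generator's latent space by
    optimization (a generic GAN-inversion sketch, not the paper's
    exact procedure)."""
    w = torch.zeros(1, 512, requires_grad=True)   # latent being optimized
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(w), frame)
        loss.backward()
        opt.step()
    return w.detach()                             # compress/transmit this
```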
arXiv Detail & Related papers (2022-07-09T19:07:43Z)
- Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.
arXiv Detail & Related papers (2022-04-26T01:35:02Z)
- The Devil Is in the Details: Window-based Attention for Image Compression [58.1577742463617]
Most existing learned image compression models are based on Convolutional Neural Networks (CNNs).
In this paper, we study the effects of multiple kinds of attention mechanisms for local feature learning, then introduce a more straightforward yet effective window-based local attention block.
The proposed window-based attention is very flexible and can work as a plug-and-play component to enhance CNN and Transformer models.
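A window-based local attention block can be sketched in a few lines of PyTorch; this is a generic implementation of the idea, not the paper's exact block.

```python
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Self-attention restricted to non-overlapping spatial windows."""
    def __init__(self, dim, window_size, num_heads=4):
        super().__init__()
        self.w = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):              # x: (B, H, W, C); H, W divisible by w
        B, H, W, C = x.shape
        w = self.w
        # partition the feature map into (B * num_windows, w*w, C) tokens
        x = x.view(B, H // w, w, W // w, w, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, w * w, C)
        out, _ = self.attn(x, x, x)    # attention within each window only
        # undo the window partition
        out = out.view(B, H // w, W // w, w, w, C)
        out = out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        return out
```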
arXiv Detail & Related papers (2022-03-16T07:55:49Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
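The quantization stage of such a pipeline can be illustrated with a simple uniform weight quantizer (a sketch only; quantization-aware retraining and entropy coding of the integer symbols would follow).

```python
import numpy as np

def quantize_weights(weights, bits=8):
    """Uniformly quantize INR weights to a fixed bit width (a minimal
    illustration, not the paper's exact scheme)."""
    lo = weights.min()
    step = (weights.max() - lo) / (2 ** bits - 1)
    symbols = np.round((weights - lo) / step).astype(np.int32)
    dequantized = symbols * step + lo   # what the decoder reconstructs
    return symbols, dequantized
```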
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Lossless Compression with Latent Variable Models [4.289574109162585]
We compress with latent variable models using a method we call 'bits back with asymmetric numeral systems' (BB-ANS).
The method involves interleaving encode and decode steps, and achieves an optimal rate when compressing batches of data.
We describe 'Craystack', a modular software framework which we have developed for rapid prototyping of compression using deep generative models.
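The rate argument behind bits-back coding can be checked with toy numbers: the bits spent encoding x and z are partially reclaimed by the bits used to "decode" z from the stream, so the expected net rate equals the negative ELBO. The distributions below are made up for illustration.

```python
import numpy as np

# Toy discrete model: prior p(z), likelihood p(x|z), posterior q(z|x).
p_z = np.array([0.5, 0.5])
p_x_given_z = np.array([[0.9, 0.1],     # row: z, column: x
                        [0.2, 0.8]])
q_z_given_x = np.array([[0.8, 0.2],     # row: x, column: z
                        [0.1, 0.9]])

x, z = 0, 0  # one observed symbol and the latent chosen for it
net_bits = (-np.log2(p_x_given_z[z, x])    # encode x with p(x|z)
            - np.log2(p_z[z])              # encode z with p(z)
            + np.log2(q_z_given_x[x, z]))  # bits "gotten back" via q(z|x)
print(net_bits)  # averaged over q, this is the negative ELBO in bits
```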
arXiv Detail & Related papers (2021-04-21T14:03:05Z)
- Conditional Entropy Coding for Efficient Video Compression [82.35389813794372]
We propose a very simple and efficient video compression framework that only focuses on modeling the conditional entropy between frames.
We first show that a simple architecture modeling the entropy between the image latent codes is as competitive as other neural video compression works and video codecs.
We then propose a novel internal learning extension on top of this architecture that brings an additional 10% savings without trading off decoding speed.
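The core idea, an entropy model for the current frame's latent conditioned on the previous frame's latent, can be sketched as follows (a continuous Gaussian stands in for the discretized distribution a real coder would use).

```python
import torch
import torch.nn as nn

class ConditionalEntropyModel(nn.Module):
    """Predicts a distribution over the current frame's latent from the
    previous frame's latent (a sketch, not the paper's exact network)."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Conv2d(channels, 2 * channels, 3, padding=1)

    def bits(self, z_t, z_prev):
        mean, log_scale = self.net(z_prev).chunk(2, dim=1)
        dist = torch.distributions.Normal(mean, log_scale.exp())
        # rate estimate: negative log-likelihood of z_t, in bits
        return -dist.log_prob(z_t).sum() / torch.log(torch.tensor(2.0))
```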
arXiv Detail & Related papers (2020-08-20T20:01:59Z)
- Quantization Guided JPEG Artifact Correction [69.04777875711646]
We develop a novel architecture for artifact correction using the JPEG file's quantization matrix.
This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.
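The quantization matrix the model conditions on is stored in the JPEG file itself and is easy to read out, for example with Pillow ("photo.jpg" is a placeholder path):

```python
from PIL import Image

# Pillow exposes the quantization tables stored in a JPEG file; a
# correction network can take these as conditioning input so one model
# covers all quality settings.
img = Image.open("photo.jpg")
tables = img.quantization        # dict: table id -> 64 coefficients
print(tables[0])                 # luminance quantization table
```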
arXiv Detail & Related papers (2020-04-17T00:10:08Z)
- Learning End-to-End Lossy Image Compression: A Benchmark [90.35363142246806]
We first conduct a comprehensive literature survey of learned image compression methods.
We describe milestones in cutting-edge learned image-compression methods, review a broad range of existing works, and provide insights into their historical development routes.
By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, we achieve improved rate-distortion performance.
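For reference, a single-level hyperprior, the idea the paper extends coarse-to-fine, can be sketched as below: a side latent predicts the scale used to estimate the main latent's rate. This is a generic illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class Hyperprior(nn.Module):
    """Minimal single-level hyperprior: a side latent h predicts the
    scale of the main latent z for entropy estimation (a sketch)."""
    def __init__(self, ch):
        super().__init__()
        self.hyper_enc = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.hyper_dec = nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1)

    def rate(self, z):
        h = self.hyper_enc(z)                      # side information
        scale = torch.nn.functional.softplus(self.hyper_dec(h))
        dist = torch.distributions.Normal(torch.zeros_like(scale), scale)
        return -dist.log_prob(z).sum() / torch.log(torch.tensor(2.0))
```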
arXiv Detail & Related papers (2020-02-10T13:13:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.