Asymmetric Gained Deep Image Compression With Continuous Rate Adaptation
- URL: http://arxiv.org/abs/2003.02012v3
- Date: Tue, 2 Aug 2022 11:40:48 GMT
- Title: Asymmetric Gained Deep Image Compression With Continuous Rate Adaptation
- Authors: Ze Cui, Jing Wang, Shangyin Gao, Bo Bai, Tiansheng Guo and Yihui Feng
- Abstract summary: We propose a continuously rate-adjustable learned image compression framework, the Asymmetric Gained Variational Autoencoder (AG-VAE).
AG-VAE utilizes a pair of gain units to achieve discrete rate adaptation in one single model with negligible additional computation.
Our method achieves quantitative performance comparable to SOTA learned image compression methods and better qualitative performance than classical image codecs.
- Score: 12.009880944927069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of deep learning techniques, the combination of deep
learning with image compression has drawn considerable attention. Recently,
learned image compression methods have exceeded their classical counterparts
in terms of rate-distortion performance. However, continuous rate adaptation
remains an open question. Some learned image compression methods use multiple
networks for multiple rates, while others use one single model at the expense
of increased computational complexity and degraded performance. In this paper,
we propose a continuously rate-adjustable learned image compression framework,
the Asymmetric Gained Variational Autoencoder (AG-VAE). AG-VAE utilizes a pair
of gain units to achieve discrete rate adaptation in one single model with
negligible additional computation. Then, by using exponential interpolation,
continuous rate adaptation is achieved without compromising performance.
Besides, we propose the asymmetric Gaussian entropy model for more accurate
entropy estimation. Extensive experiments show that our method achieves
quantitative performance comparable to SOTA learned image compression methods
and better qualitative performance than classical image codecs. In the
ablation study, we confirm the usefulness and superiority of the gain units
and the asymmetric Gaussian entropy model.
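A minimal sketch of how a pair of gain units plus exponential interpolation can realize continuous rate adaptation. The shapes, the random stand-in values, and the helper name `interpolate_gain` are illustrative; in the actual model the gain vectors are learned jointly with the network.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W, n_rates = 8, 4, 4, 4

# Learned in the real model; random stand-ins here.
gains = rng.uniform(0.5, 2.0, size=(n_rates, C))   # encoder-side gain vectors
inv_gains = 1.0 / gains                            # decoder-side inverse gains

def interpolate_gain(table, i, l):
    """Exponential interpolation between adjacent gain vectors:
    m = table[i]**l * table[i+1]**(1 - l), with l in [0, 1]."""
    return table[i] ** l * table[i + 1] ** (1.0 - l)

y = rng.normal(size=(C, H, W))                     # encoder output (latent)
m = interpolate_gain(gains, i=1, l=0.3)            # a rate between points 1 and 2
y_hat = np.round(y * m[:, None, None])             # channel-wise gain, then quantize
m_inv = interpolate_gain(inv_gains, i=1, l=0.3)
y_dec = y_hat * m_inv[:, None, None]               # inverse gain before the decoder
```

The asymmetric Gaussian entropy model can likewise be sketched as a Gaussian with separate left/right standard deviations. The construction below (halves weighted so the density stays continuous at the mean, bin probabilities from CDF differences) is one standard form, assumed here rather than taken verbatim from the paper.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def asym_gauss_cdf(x, mu, sig_l, sig_r):
    """CDF of an asymmetric Gaussian; the left half carries
    mass sig_l / (sig_l + sig_r) so the density is continuous at mu."""
    w_l = sig_l / (sig_l + sig_r)
    if x < mu:
        return 2.0 * w_l * phi((x - mu) / sig_l)
    return w_l + (1.0 - w_l) * (2.0 * phi((x - mu) / sig_r) - 1.0)

def symbol_likelihood(y_hat, mu, sig_l, sig_r):
    """Probability mass of the integer bin [y_hat - 0.5, y_hat + 0.5]."""
    return (asym_gauss_cdf(y_hat + 0.5, mu, sig_l, sig_r)
            - asym_gauss_cdf(y_hat - 0.5, mu, sig_l, sig_r))

print(symbol_likelihood(1.0, mu=0.2, sig_l=0.8, sig_r=1.5))
```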
Related papers
- Progressive Learning with Visual Prompt Tuning for Variable-Rate Image Compression [60.689646881479064]
We propose a progressive learning paradigm for transformer-based variable-rate image compression.
Inspired by visual prompt tuning, we use an LPM to extract prompts for input images at the encoder side and for hidden features at the decoder side.
Our model outperforms all current variable-rate image compression methods in terms of rate-distortion performance and approaches state-of-the-art fixed-rate image compression models trained from scratch.
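A hypothetical sketch of the prompt-tuning idea in this setting: a table of learned prompt tokens, indexed by the target rate, is prepended to the patch tokens a transformer stage consumes. All names and shapes are illustrative stand-ins, not the paper's LPM.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_patches, n_prompts, n_rates = 32, 16, 4, 8

# Learned per-rate prompt tokens in practice; random stand-ins here.
prompt_table = rng.normal(size=(n_rates, n_prompts, d))

patch_tokens = rng.normal(size=(n_patches, d))     # tokens from the image
rate_idx = 3                                       # target rate selects the prompts
tokens = np.concatenate([prompt_table[rate_idx], patch_tokens], axis=0)

# `tokens` (n_prompts + n_patches, d) would be fed to the transformer layers;
# when adapting to a new rate, only the prompt table needs updating.
print(tokens.shape)
```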
arXiv Detail & Related papers (2023-11-23T08:29:32Z)
- High-Fidelity Variable-Rate Image Compression via Invertible Activation Transformation [24.379052026260034]
We propose the Invertible Activation Transformation (IAT) module to tackle the issue of high-fidelity, fine-grained variable-rate image compression.
IAT and QLevel together give the image compression model the ability of fine variable-rate control while better maintaining the image fidelity.
Our method outperforms the state-of-the-art variable-rate image compression method by a large margin, especially after multiple re-encodings.
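A hedged sketch of what an invertible, quality-conditioned activation transform can look like: a per-channel affine map generated from a quality level and exactly inverted at decoding. The mapping `scale_bias` is a hypothetical stand-in for the learned module.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4

def scale_bias(q, C):
    """Hypothetical smooth map from a quality level q in [0, 1] to strictly
    positive per-channel scales and biases; a real model learns this."""
    s = np.exp(np.linspace(-1.0, 1.0, C) * q)
    b = np.zeros(C)
    return s, b

y = rng.normal(size=(C, H, W))
s, b = scale_bias(q=0.7, C=C)
y_fwd = y * s[:, None, None] + b[:, None, None]         # forward (encoder side)
y_inv = (y_fwd - b[:, None, None]) / s[:, None, None]   # exact inverse (decoder)
assert np.allclose(y, y_inv)                             # invertibility preserves fidelity
```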
arXiv Detail & Related papers (2022-09-12T07:14:07Z)
- Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide a Bjontegaard-Delta rate (BD-rate) improvement of about 10% over leading perceptual quality engines.
arXiv Detail & Related papers (2022-04-26T01:35:02Z)
- Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression [151.3826781154146]
Modeling latent variables with priors and hyperpriors is an essential problem in variational image compression.
We find that both inter-correlations and intra-correlations exist when latent variables are observed from a vectorized perspective.
Our model has better rate-distortion performance and an impressive $3.18\times$ compression speed-up.
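A small sketch of the kind of multivariate Gaussian mixture likelihood such a model evaluates over vectorized latents; the weights, means, and covariances are random stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 4, 3                                  # latent dimension, mixture components

w = np.full(K, 1.0 / K)                      # mixture weights (learned in practice)
mu = rng.normal(size=(K, D))                 # per-component means
Sigma = np.stack([np.eye(D) * s for s in (0.5, 1.0, 2.0)])  # covariances

def log_prob(y):
    """log p(y) under p(y) = sum_k w_k N(y; mu_k, Sigma_k)."""
    comps = []
    for k in range(K):
        diff = y - mu[k]
        _, logdet = np.linalg.slogdet(Sigma[k])
        quad = diff @ np.linalg.solve(Sigma[k], diff)
        comps.append(np.log(w[k]) - 0.5 * (D * np.log(2 * np.pi) + logdet + quad))
    return np.logaddexp.reduce(np.array(comps))

y = rng.normal(size=D)
print(-log_prob(y) / np.log(2))              # -log2 p(y): the rate in bits
```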
arXiv Detail & Related papers (2022-03-21T11:44:17Z)
- Post-Training Quantization for Cross-Platform Learned Image Compression [15.67527732099067]
Learned image compression has been shown to outperform conventional image coding techniques.
One of the most critical issues that needs to be considered is non-deterministic calculation.
We propose to solve this problem by introducing well-developed post-training quantization.
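A minimal illustration of the post-training quantization idea: map weights and activations to int8 with per-tensor scales so the arithmetic that matters is integer and therefore bit-exact across platforms. This is a sketch of the principle, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(16, 16)).astype(np.float32)
x = rng.normal(size=16).astype(np.float32)

# Symmetric per-tensor scales chosen after training (hence "post-training").
w_scale = np.abs(w).max() / 127.0
x_scale = np.abs(x).max() / 127.0
w_q = np.clip(np.round(w / w_scale), -127, 127).astype(np.int8)
x_q = np.clip(np.round(x / x_scale), -127, 127).astype(np.int8)

# The integer matmul is deterministic on any platform; floats reappear only
# once, through the combined scale, so encoder and decoder stay in sync.
acc = w_q.astype(np.int32) @ x_q.astype(np.int32)
y = acc.astype(np.float32) * (w_scale * x_scale)
```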
arXiv Detail & Related papers (2022-02-15T15:41:12Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
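A toy end-to-end sketch of INR-style compression: fit a coordinate-to-intensity model to an image, quantize its parameters (the ints are what would be entropy-coded), and measure the distortion of the decoded result. A closed-form random-Fourier-feature regression stands in for the MLP the paper trains; quantization-aware retraining and entropy coding are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 16
img = rng.random((H, W))                             # stand-in "image"

ys, xs = np.mgrid[0:H, 0:W]
coords = np.stack([ys.ravel() / H, xs.ravel() / W], axis=1)

B = rng.normal(scale=4.0, size=(2, 64))              # random Fourier features
feats = np.concatenate([np.sin(coords @ B), np.cos(coords @ B)], axis=1)

theta, *_ = np.linalg.lstsq(feats, img.ravel(), rcond=None)  # "train" the INR

scale = np.abs(theta).max() / 127.0                  # quantize the parameters
theta_q = np.clip(np.round(theta / scale), -127, 127).astype(np.int8)

recon = (feats @ (theta_q * scale)).reshape(H, W)    # decode from quantized INR
print(float(np.mean((img - recon) ** 2)))            # distortion of the decode
```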
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Substitutional Neural Image Compression [48.20906717052056]
Substitutional Neural Image Compression (SNIC) is a general approach for enhancing any neural image compression model.
It boosts compression performance toward a flexible distortion metric and enables bit-rate control using a single model instance.
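A hedged toy of the substitutional idea: keep the codec frozen and gradient-optimize a substitute input so that coding it gives a better rate-distortion trade-off for the original. The orthonormal-transform "codec", the straight-through gradients, and the rate proxy are all illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
D, delta, lam, lr = 32, 0.5, 0.05, 0.1

A = np.linalg.qr(rng.normal(size=(D, D)))[0]   # orthonormal analysis transform
x = rng.normal(size=D)                         # original signal
x_sub = x.copy()                               # substitute, initialized at x

def codec(v):
    q = np.round(A @ v / delta)                # encode: quantized symbols
    return A.T @ (q * delta), np.abs(q).sum()  # decode, crude "rate" proxy

for _ in range(50):
    recon, _ = codec(x_sub)
    # Straight-through gradients: distortion pulls the reconstruction toward
    # the ORIGINAL x; the rate proxy shrinks coefficients toward cheap symbols.
    g_dist = -2.0 * (x - recon)
    g_rate = lam * (A.T @ np.sign(A @ x_sub)) / delta
    x_sub -= lr * (g_dist + g_rate)

recon_plain, rate_plain = codec(x)
recon_sub, rate_sub = codec(x_sub)
print(np.sum((x - recon_plain) ** 2), rate_plain)   # R-D of coding x directly
print(np.sum((x - recon_sub) ** 2), rate_sub)       # R-D of coding the substitute
```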
arXiv Detail & Related papers (2021-05-16T20:53:31Z)
- Variational Bayesian Quantization [31.999462074510305]
We propose a novel algorithm for quantizing continuous latent representations in trained models.
Unlike current end-to-end neural compression methods, which tailor the model to a fixed quantization scheme, our algorithm separates model design and training from quantization.
Our algorithm can be seen as a novel extension of arithmetic coding to the continuous domain.
arXiv Detail & Related papers (2020-02-18T00:15:37Z)
- Learning End-to-End Lossy Image Compression: A Benchmark [90.35363142246806]
We first conduct a comprehensive literature survey of learned image compression methods.
We describe milestones in cutting-edge learned image compression methods, review a broad range of existing works, and provide insights into their historical development routes.
By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, we achieve improved rate-distortion performance.
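A minimal sketch of the hyperprior-style entropy estimation such models build on (and that a coarse-to-fine model stacks in stages): a predicted per-element scale sets a zero-mean Gaussian over each latent, and the rate is the probability mass of each integer quantization bin. The constant `sigma` here stands in for the hyper-decoder's output.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

y = rng.normal(scale=2.0, size=64)        # main latent
y_hat = np.round(y)                       # quantized symbols
sigma = np.full_like(y, 2.0)              # per-element scales; from the
                                          # hyper-decoder in a real model

# Probability mass of each bin [y_hat - 0.5, y_hat + 0.5] under N(0, sigma^2).
p = np.array([phi((v + 0.5) / s) - phi((v - 0.5) / s)
              for v, s in zip(y_hat, sigma)])
rate_bits = -np.log2(np.maximum(p, 1e-9)).sum()
print(rate_bits / y.size, "bits per latent element")
```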
arXiv Detail & Related papers (2020-02-10T13:13:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.