Estimating the Resize Parameter in End-to-end Learned Image Compression
- URL: http://arxiv.org/abs/2204.12022v1
- Date: Tue, 26 Apr 2022 01:35:02 GMT
- Title: Estimating the Resize Parameter in End-to-end Learned Image Compression
- Authors: Li-Heng Chen, Christos G. Bampis, Zhi Li, Lukáš Krasula, and Alan C. Bovik
- Abstract summary: We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.
- Score: 50.20567320015102
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We describe a search-free resizing framework that can further improve the
rate-distortion tradeoff of recent learned image compression models. Our
approach is simple: compose a pair of differentiable downsampling/upsampling
layers that sandwich a neural compression model. To determine resize factors
for different inputs, we utilize another neural network jointly trained with
the compression model, with the end goal of minimizing the rate-distortion
objective. Our results suggest that "compression friendly" downsampled
representations can be quickly determined during encoding by using an auxiliary
network and differentiable image warping. By conducting extensive experimental
tests on existing deep image compression models, we show that our new
resizing parameter estimation framework can provide Bjøntegaard-delta rate
(BD-rate) improvement of about 10% against leading perceptual quality engines.
We also carried out a subjective quality study, the results of which show that
our new approach yields favorable compressed images. To facilitate reproducible
research in this direction, the implementation used in this paper is being made
freely available online at: https://github.com/treammm/ResizeCompression.
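The mechanism behind this approach — an image warp whose scale factor is a continuous, differentiable parameter, so the resize factor itself can receive gradients from the rate-distortion loss — can be illustrated with a minimal pure-Python sketch. This is hypothetical illustration code, not the authors' implementation (see the repository above for that); `bilinear_sample` and `resize` are names chosen here for clarity:

```python
def bilinear_sample(img, y, x):
    """Sample a 2-D list `img` at continuous coordinates (y, x)
    using bilinear interpolation with edge clamping."""
    h, w = len(img), len(img[0])
    y0 = min(max(int(y), 0), h - 1)
    x0 = min(max(int(x), 0), w - 1)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    fy, fx = y - y0, x - x0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def resize(img, scale, out_h, out_w):
    """Warp `img` onto an out_h x out_w grid by inverse mapping;
    `scale` is a continuous (not just integer) resize factor."""
    return [[bilinear_sample(img, i / scale, j / scale)
             for j in range(out_w)] for i in range(out_h)]

# Each output pixel is a smooth function of `scale`, so its derivative
# with respect to the resize factor exists and can be estimated here by
# finite differences (an autodiff framework would compute it exactly):
img = [[0.0, 1.0], [2.0, 3.0]]
eps = 1e-5
grad = (resize(img, 1.5 + eps, 3, 3)[1][1]
        - resize(img, 1.5 - eps, 3, 3)[1][1]) / (2 * eps)
# grad is approximately -4/3 for this toy image: nonzero, so a jointly
# trained auxiliary network can be optimized through the warp.
```

In the paper's framework such a warp sandwiches the compression model (downsample before encoding, upsample after decoding), and the auxiliary network predicts the scale per input instead of searching over candidate factors.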
Related papers
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt image classification systems, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- Machine Perception-Driven Image Compression: A Layered Generative Approach [32.23554195427311]
A layered generative image compression model is proposed to achieve high reconstructed image quality oriented toward human vision.
A task-agnostic, learning-based compression model is proposed that effectively supports various compressed-domain analytical tasks.
A joint optimization schedule is adopted to find the best balance among compression ratio, reconstructed image quality, and downstream perception performance.
arXiv Detail & Related papers (2023-04-14T02:12:38Z)
- High-Fidelity Variable-Rate Image Compression via Invertible Activation Transformation [24.379052026260034]
We propose the Invertible Activation Transformation (IAT) module to achieve high-fidelity, fine-grained variable-rate image compression.
IAT and QLevel together give the image compression model fine variable-rate control while better maintaining image fidelity.
Our method outperforms the state-of-the-art variable-rate image compression method by a large margin, especially after multiple re-encodings.
arXiv Detail & Related papers (2022-09-12T07:14:07Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform [58.60004238261117]
We propose a versatile deep image compression network based on Spatial Feature Transform (SFT, arXiv:1804.02815).
Our model covers a wide range of compression rates using a single model, which is controlled by arbitrary pixel-wise quality maps.
The proposed framework allows us to perform task-aware image compressions for various tasks.
arXiv Detail & Related papers (2021-08-21T17:30:06Z)
- Substitutional Neural Image Compression [48.20906717052056]
Substitutional Neural Image Compression (SNIC) is a general approach for enhancing any neural image compression model.
It boosts compression performance toward a flexible distortion metric and enables bit-rate control using a single model instance.
arXiv Detail & Related papers (2021-05-16T20:53:31Z)
- Lossless Compression with Latent Variable Models [4.289574109162585]
We use latent variable models with a method we call 'bits back with asymmetric numeral systems' (BB-ANS).
The method involves interleaving encode and decode steps, and achieves an optimal rate when compressing batches of data.
We describe 'Craystack', a modular software framework which we have developed for rapid prototyping of compression using deep generative models.
arXiv Detail & Related papers (2021-04-21T14:03:05Z)
- Saliency Driven Perceptual Image Compression [6.201592931432016]
The paper demonstrates that widely used evaluation metrics such as MS-SSIM and PSNR are inadequate for judging the performance of image compression techniques.
A new metric is proposed, which is learned on perceptual similarity data specific to image compression.
The model not only generates images which are visually better but also gives superior performance for subsequent computer vision tasks.
arXiv Detail & Related papers (2020-02-12T13:43:17Z)
- Learning End-to-End Lossy Image Compression: A Benchmark [90.35363142246806]
We first conduct a comprehensive literature survey of learned image compression methods.
We describe milestones in cutting-edge learned image-compression methods, review a broad range of existing works, and provide insights into their historical development routes.
By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, we achieve improved rate-distortion performance.
arXiv Detail & Related papers (2020-02-10T13:13:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.