Learning Scalable $\ell_\infty$-constrained Near-lossless Image
Compression via Joint Lossy Image and Residual Compression
- URL: http://arxiv.org/abs/2103.17015v1
- Date: Wed, 31 Mar 2021 11:53:36 GMT
- Title: Learning Scalable $\ell_\infty$-constrained Near-lossless Image
Compression via Joint Lossy Image and Residual Compression
- Authors: Yuanchao Bai, Xianming Liu, Wangmeng Zuo, Yaowei Wang, Xiangyang Ji
- Abstract summary: We propose a novel framework for learning $\ell_\infty$-constrained near-lossless image compression.
We derive the probability model of the quantized residual by quantizing the learned probability model of the original residual.
- Score: 118.89112502350177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel joint lossy image and residual compression framework for
learning $\ell_\infty$-constrained near-lossless image compression.
Specifically, we obtain a lossy reconstruction of the raw image through lossy
image compression and uniformly quantize the corresponding residual to satisfy
a given tight $\ell_\infty$ error bound. When the error bound is zero, i.e.,
in the lossless case, we formulate the joint optimization problem
of compressing both the lossy image and the original residual in terms of
variational auto-encoders and solve it with end-to-end training. To achieve
scalable compression with the error bound larger than zero, we derive the
probability model of the quantized residual by quantizing the learned
probability model of the original residual, instead of training multiple
networks. We further correct the bias of the derived probability model caused
by the context mismatch between training and inference. Finally, the quantized
residual is encoded according to the bias-corrected probability model and is
concatenated with the bitstream of the compressed lossy image. Experimental
results demonstrate that our near-lossless codec achieves the state-of-the-art
performance for lossless and near-lossless image compression, and achieves
competitive PSNR with a much smaller $\ell_\infty$ error compared with lossy
image codecs at high bit rates.
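The two core mechanisms of the abstract can be illustrated concretely: uniformly quantizing the residual so that the reconstruction error never exceeds a bound $\tau$, and deriving the probability model of the quantized residual by summing the learned probability mass of the original residual over each quantization bin. The sketch below is a minimal illustration of that scheme (the bin layout follows the classic CALIC/JPEG-LS near-lossless quantizer), not the authors' implementation; `quantize_pmf` stands in for quantizing a learned network output.

```python
# Sketch of l_inf-bounded uniform residual quantization and of deriving
# the quantized-residual PMF from the original-residual PMF. Illustrative
# only; the paper's learned probability model is replaced by a plain dict.

def quantize_residual(r: int, tau: int) -> int:
    """Map residual r to a bin index; bins have width 2*tau + 1."""
    if r >= 0:
        return (r + tau) // (2 * tau + 1)
    return -((-r + tau) // (2 * tau + 1))

def dequantize_residual(q: int, tau: int) -> int:
    """Reconstruct the residual from its bin index (bin center)."""
    return q * (2 * tau + 1)

def quantize_pmf(pmf: dict, tau: int) -> dict:
    """Derive the PMF of the quantized residual by summing the PMF of
    the original residual over each quantization bin, instead of
    training a separate model per error bound."""
    out = {}
    for r, p in pmf.items():
        q = quantize_residual(r, tau)
        out[q] = out.get(q, 0.0) + p
    return out

# The defining property: reconstruction error is at most tau everywhere.
tau = 2
assert all(
    abs(r - dequantize_residual(quantize_residual(r, tau), tau)) <= tau
    for r in range(-255, 256)
)
```

Setting `tau = 0` makes every bin a single residual value, which recovers the lossless case described in the abstract.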
Related papers
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z) - Lossy and Lossless (L$^2$) Post-training Model Size Compression [12.926354646945397]
We propose a post-training model size compression method that combines lossy and lossless compression in a unified way.
Our method can achieve a stable $10\times$ compression ratio without sacrificing accuracy and a $20\times$ compression ratio with minor accuracy loss in a short time.
arXiv Detail & Related papers (2023-08-08T14:10:16Z) - Extreme Image Compression using Fine-tuned VQGANs [43.43014096929809]
We introduce vector quantization (VQ)-based generative models into the image compression domain.
The codebook learned by the VQGAN model yields a strong expressive capacity.
The proposed framework outperforms state-of-the-art codecs in terms of perceptual quality-oriented metrics.
arXiv Detail & Related papers (2023-07-17T06:14:19Z) - Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image
Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem in the approach of VAEs.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
arXiv Detail & Related papers (2022-09-11T12:11:56Z) - Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.
arXiv Detail & Related papers (2022-04-26T01:35:02Z) - Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms navigate rate-distortion tradeoffs by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
arXiv Detail & Related papers (2021-12-14T07:35:22Z) - Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z) - Learning Better Lossless Compression Using Lossy Compression [100.50156325096611]
We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system.
We model the distribution of the residual with a convolutional neural network-based probabilistic model that is conditioned on the BPG reconstruction.
Finally, the image is stored using the concatenation of the bitstreams produced by BPG and the learned residual coder.
arXiv Detail & Related papers (2020-03-23T11:21:52Z)
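The lossy-plus-residual idea shared by the last entry and the main paper can be sketched in a few lines: store a lossy reconstruction, then entropy-code the residual between the raw image and that reconstruction, so the decoder recovers the image exactly. The toy below uses coarse uniform quantization as a stand-in for a real lossy codec such as BPG, and an idealized empirical-entropy estimate in place of a learned, conditioned probability model.

```python
import math
from collections import Counter

# Toy sketch of lossless coding via a lossy base layer plus a residual
# layer. The "codec" here is coarse quantization, not BPG, and the bit
# cost is the empirical entropy of the residual, an idealized rate.

def lossy_reconstruct(pixels, step=16):
    """Stand-in lossy codec: snap each pixel to the nearest multiple of
    `step`, clipped to the 8-bit range."""
    return [min(255, step * round(p / step)) for p in pixels]

def residual_bits(pixels, recon):
    """Idealized bit cost of the residual under its empirical PMF."""
    residuals = [p - r for p, r in zip(pixels, recon)]
    counts = Counter(residuals)
    n = len(residuals)
    return -sum(c * math.log2(c / n) for c in counts.values())

pixels = [10, 12, 200, 201, 45, 46, 90, 91]
recon = lossy_reconstruct(pixels)
# Decoding is exact: lossy reconstruction + residual == raw pixels.
assert [r + (p - r) for p, r in zip(pixels, recon)] == pixels
```

A real system conditions the residual model on the lossy reconstruction (as the BPG-based paper does with a convolutional probabilistic model), which shrinks the residual entropy well below this unconditioned estimate.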
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.