Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image
Compression
- URL: http://arxiv.org/abs/2209.04847v2
- Date: Thu, 11 Jan 2024 04:31:31 GMT
- Title: Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image
Compression
- Authors: Yuanchao Bai, Xianming Liu, Kai Wang, Xiangyang Ji, Xiaolin Wu, Wen
Gao
- Abstract summary: We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem within the framework of variational auto-encoders (VAEs).
- In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
- Score: 85.93207826513192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lossless and near-lossless image compression is of paramount importance to
professional users in many technical fields, such as medicine, remote sensing,
precision engineering and scientific research. But despite rapidly growing
research interests in learning-based image compression, no published method
offers both lossless and near-lossless modes. In this paper, we propose a
unified and powerful deep lossy plus residual (DLPR) coding framework for both
lossless and near-lossless image compression. In the lossless mode, the DLPR
coding system first performs lossy compression and then lossless coding of
residuals. We solve the joint lossy and residual compression problem within the
framework of variational auto-encoders (VAEs), and add autoregressive context modeling of the residuals to
enhance lossless compression performance. In the near-lossless mode, we
quantize the original residuals to satisfy a given $\ell_\infty$ error bound,
and propose a scalable near-lossless compression scheme that works for variable
$\ell_\infty$ bounds instead of training multiple networks. To expedite the
DLPR coding, we increase the degree of algorithm parallelization by a novel
design of coding context, and accelerate the entropy coding with adaptive
residual interval. Experimental results demonstrate that the DLPR coding system
achieves both the state-of-the-art lossless and near-lossless image compression
performance with competitive coding speed.
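The near-lossless residual quantization described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' code: it uses the standard uniform quantizer with bin width $2\tau+1$, which rounds each integer residual to the nearest bin center and thereby guarantees a per-pixel error of at most $\tau$ (with $\tau = 0$ reducing to the lossless mode).

```python
import numpy as np

def quantize_residual(residual: np.ndarray, tau: int) -> np.ndarray:
    """Quantize integer residuals so that |residual - r_hat| <= tau everywhere.

    With tau = 0 the residual is kept exactly (lossless mode); larger tau
    widens the bins to 2*tau + 1, shrinking the symbol entropy to be coded.
    """
    step = 2 * tau + 1
    # Round to the nearest multiple of step. Since step is odd and residuals
    # are integers, exact ties cannot occur, so the error is bounded by tau.
    return (np.round(residual / step) * step).astype(np.int64)

# Toy residual image between a lossy reconstruction and the original.
rng = np.random.default_rng(0)
residual = rng.integers(-30, 31, size=(8, 8))
for tau in (0, 1, 2, 4):
    r_hat = quantize_residual(residual, tau)
    assert np.abs(residual - r_hat).max() <= tau
```

Because a single network compresses the raw residuals and the bound is enforced only by this quantization step, the same trained model can serve any $\tau$, which is what makes the scheme scalable across variable $\ell_\infty$ bounds.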
Related papers
- Large Language Models for Lossless Image Compression: Next-Pixel Prediction in Language Space is All You Need [53.584140947828004]
Large language models (LLMs) with unprecedented intelligence are general-purpose lossless compressors for various data modalities.
We propose P$^2$-LLM, a next-pixel prediction-based LLM, which integrates various elaborated insights and methodologies.
Experiments on benchmark datasets demonstrate that P$^2$-LLM can beat SOTA classical and learned codecs.
arXiv Detail & Related papers (2024-11-19T12:15:40Z)
- Low-complexity Deep Video Compression with A Distributed Coding Architecture [4.5885672744218]
Prevalent predictive coding-based video compression methods rely on a heavy encoder to reduce temporal redundancy.
Traditional distributed coding methods suffer from a substantial performance gap to predictive coding ones.
We propose the first end-to-end distributed deep video compression framework to improve rate-distortion performance.
arXiv Detail & Related papers (2023-03-21T05:34:04Z)
- Improving Multi-generation Robustness of Learned Image Compression [16.86614420872084]
We show that learned image compression (LIC) can achieve performance comparable to the first compression of BPG even after 50 rounds of re-encoding, without any change to the network structure.
arXiv Detail & Related papers (2022-10-31T03:26:11Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- Learning Scalable $\ell_\infty$-constrained Near-lossless Image Compression via Joint Lossy Image and Residual Compression [118.89112502350177]
We propose a novel framework for learning $\ell_\infty$-constrained near-lossless image compression.
We derive the probability model of the quantized residual by quantizing the learned probability model of the original residual.
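The derivation that summary describes can be sketched under simplifying assumptions: given a learned discrete PMF over integer residual values, the PMF of the quantized residual follows by summing the original mass over each bin of width $2\tau+1$. The function name and the toy uniform PMF below are illustrative, not the paper's code.

```python
import numpy as np

def quantize_residual_pmf(pmf: np.ndarray, offset: int, tau: int):
    """Derive the PMF of the quantized residual from the PMF of the original
    residual. pmf[i] = P(r = i + offset); bins have width 2*tau + 1 and are
    centered on multiples of 2*tau + 1."""
    step = 2 * tau + 1
    values = np.arange(len(pmf)) + offset
    # Map each original residual value to its bin center, then pool the mass.
    centers = (np.round(values / step) * step).astype(int)
    bin_centers = np.unique(centers)
    bin_pmf = np.array([pmf[centers == c].sum() for c in bin_centers])
    return bin_centers, bin_pmf

# Toy example: uniform PMF over residuals -4..4, quantized with tau = 1.
pmf = np.full(9, 1.0 / 9.0)
centers, q_pmf = quantize_residual_pmf(pmf, offset=-4, tau=1)
# centers -> [-3, 0, 3]; total probability mass is preserved.
assert np.isclose(q_pmf.sum(), 1.0)
```

Because the quantized-residual PMF is computed from the learned original-residual PMF at entropy-coding time, no retraining is needed when the bound $\tau$ changes.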
arXiv Detail & Related papers (2021-03-31T11:53:36Z)
- Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)
- Learning Better Lossless Compression Using Lossy Compression [100.50156325096611]
We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system.
We model the distribution of the residual with a convolutional neural network-based probabilistic model that is conditioned on the BPG reconstruction.
Finally, the image is stored using the concatenation of the bitstreams produced by BPG and the learned residual coder.
arXiv Detail & Related papers (2020-03-23T11:21:52Z)
- A Unified End-to-End Framework for Efficient Deep Image Compression [35.156677716140635]
We propose a unified framework called Efficient Deep Image Compression (EDIC) based on three new technologies.
Specifically, we design an auto-encoder-style network for learning-based image compression.
Our EDIC method can also be readily incorporated with the Deep Video Compression (DVC) framework to further improve the video compression performance.
arXiv Detail & Related papers (2020-02-09T14:21:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.