Unified learning-based lossy and lossless JPEG recompression
- URL: http://arxiv.org/abs/2312.02705v1
- Date: Tue, 5 Dec 2023 12:07:27 GMT
- Title: Unified learning-based lossy and lossless JPEG recompression
- Authors: Jianghui Zhang, Yuanyuan Wang, Lina Guo, Jixiang Luo, Tongda Xu, Yan
Wang, Zhi Wang, Hongwei Qin
- Abstract summary: We propose a unified lossy and lossless JPEG recompression framework, which consists of a learned quantization table and Markovian hierarchical variational autoencoders.
Experiments show that our method can achieve arbitrarily low distortion when the bitrate is close to the upper bound, namely the bitrate of the lossless compression model.
- Score: 15.922937139019547
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: JPEG is still the most widely used image compression algorithm. Most image
compression algorithms only consider the uncompressed original image, while
ignoring a large number of already existing JPEG images. Recently, JPEG
recompression approaches have been proposed to further reduce the size of JPEG
files. However, those methods only consider JPEG lossless recompression, which
is just a special case of the rate-distortion theorem. In this paper, we
propose a unified lossy and lossless JPEG recompression framework, which
consists of a learned quantization table and Markovian hierarchical variational
autoencoders. Experiments show that our method can achieve arbitrarily low
distortion when the bitrate is close to the upper bound, namely the bitrate of
the lossless compression model. To the best of our knowledge, this is the first
learned method that bridges the gap between lossy and lossless recompression of
JPEG images.
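For intuition, here is a minimal sketch of the lossy branch described above, assuming a trainable quantization step per DCT frequency with the usual uniform-noise relaxation of rounding during training; this illustrates the general technique and is not the authors' released implementation.

```python
# Minimal sketch (assumed architecture, not the authors' code): a trainable quantization
# step per DCT frequency, with rounding relaxed to additive uniform noise during training.
import torch
import torch.nn as nn

class LearnedQuantTable(nn.Module):
    def __init__(self):
        super().__init__()
        # One positive step size per position of the 8x8 DCT block.
        self.log_step = nn.Parameter(torch.zeros(1, 1, 8, 8))

    def forward(self, dct_blocks):
        # dct_blocks: (N, C, 8, 8) dequantized DCT coefficients from the JPEG file.
        step = torch.exp(self.log_step)
        scaled = dct_blocks / step
        if self.training:
            # Differentiable proxy for rounding.
            symbols = scaled + torch.empty_like(scaled).uniform_(-0.5, 0.5)
        else:
            symbols = torch.round(scaled)
        # symbols go to the entropy model; symbols * step is the lossy reconstruction.
        return symbols, symbols * step
```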
Related papers
- Distribution prediction for image compression: An experimental
re-compressor for JPEG images [1.8416014644193066]
Using a JPEG image as input, the algorithm partially decodes the signal to obtain quantized DCT coefficients and then re-compresses them in a more effective way.
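A minimal sketch of the quantized-DCT representation such a re-compressor works on, emulated here from pixel data with a block DCT and the standard JPEG luminance table; a real system would read the coefficients directly from the bitstream.

```python
# Minimal sketch: quantized 8x8 DCT blocks, emulated from pixels for illustration.
import numpy as np
from scipy.fft import dctn

# Standard JPEG luminance quantization table (quality ~50).
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantized_dct_blocks(gray_image):
    """gray_image: (H, W) uint8 array with H and W multiples of 8."""
    h, w = gray_image.shape
    blocks = []
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = gray_image[y:y + 8, x:x + 8].astype(np.float64) - 128.0
            coeffs = dctn(block, norm="ortho")  # 2-D type-II DCT, as in JPEG
            blocks.append(np.round(coeffs / Q_LUMA).astype(np.int32))
    return np.stack(blocks)  # (num_blocks, 8, 8) quantized coefficients
```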
arXiv Detail & Related papers (2023-10-16T15:33:58Z) - Learned Lossless Compression for JPEG via Frequency-Domain Prediction [50.20577108662153]
We propose a novel framework for learned lossless compression of JPEG images.
To enable learning in the frequency domain, DCT coefficients are partitioned into groups to utilize implicit local redundancy.
An autoencoder-like architecture is designed based on the weight-shared blocks to realize entropy modeling of grouped DCT coefficients.
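As a rough illustration of coefficient grouping (the paper's exact partitioning may differ), the sketch below splits each block's coefficients along the zigzag order into DC, low-frequency AC, and high-frequency AC groups.

```python
# Illustrative grouping of quantized DCT coefficients; not necessarily the paper's split.
import numpy as np

# Zigzag scan order of the 64 positions in an 8x8 block (anti-diagonals, alternating direction).
ZIGZAG = sorted(((r, c) for r in range(8) for c in range(8)),
                key=lambda rc: (rc[0] + rc[1],
                                rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def group_coefficients(block):
    """block: (8, 8) quantized DCT coefficients of one block."""
    ordered = np.array([block[r, c] for r, c in ZIGZAG])
    return {
        "dc": ordered[:1],        # DC term, usually modeled separately
        "low_ac": ordered[1:16],  # low-frequency AC terms
        "high_ac": ordered[16:],  # high-frequency AC terms, mostly zeros
    }
```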
arXiv Detail & Related papers (2023-03-05T13:15:28Z) - Learned Lossless JPEG Transcoding via Joint Lossy and Residual
Compression [21.205453851414248]
We propose a new framework to recompress the compressed JPEG image in the DCT domain.
Our experiments on multiple datasets demonstrate that the proposed framework achieves about 21.49% bit savings on average over JPEG compression.
arXiv Detail & Related papers (2022-08-24T17:12:00Z) - Practical Learned Lossless JPEG Recompression with Multi-Level
Cross-Channel Entropy Model in the DCT Domain [10.655855413391324]
We propose a deep learning based JPEG recompression method that operates on DCT domain.
Experiments show that our method achieves state-of-the-art performance compared with traditional JPEG recompression methods.
arXiv Detail & Related papers (2022-03-30T14:36:13Z) - Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG
Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z) - Towards Robust Data Hiding Against (JPEG) Compression: A
Pseudo-Differentiable Deep Learning Approach [78.05383266222285]
It remains an open challenge to achieve data hiding that is robust against such compression.
Deep learning has shown great success in data hiding, but the non-differentiability of JPEG makes it difficult to train a deep pipeline that improves robustness against lossy compression.
In this work, we propose a simple yet effective approach to address all the above limitations at once.
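One common way to obtain a pseudo-differentiable JPEG layer is a straight-through pass around the real codec: the forward pass uses actual JPEG encoding and decoding, while the backward pass treats the codec as the identity. The sketch below shows that generic trick and is not necessarily the paper's exact mechanism.

```python
# Minimal sketch of a straight-through JPEG layer: real codec forward, identity backward.
import io
import numpy as np
import torch
from PIL import Image

def jpeg_straight_through(x, quality=75):
    """x: (H, W, 3) float tensor in [0, 1] that requires grad upstream."""
    with torch.no_grad():
        img = Image.fromarray((x.clamp(0, 1) * 255).byte().cpu().numpy())
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        y = torch.as_tensor(np.array(Image.open(buf)),
                            dtype=x.dtype, device=x.device) / 255.0
    # Value of the JPEG round trip, gradient of the identity.
    return x + (y - x).detach()
```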
arXiv Detail & Related papers (2020-12-30T12:30:09Z) - Learning to Improve Image Compression without Changing the Standard
Decoder [100.32492297717056]
We propose learning to improve the encoding performance with the standard decoder.
Specifically, a frequency-domain pre-editing method is proposed to optimize the distribution of DCT coefficients.
We do not modify the JPEG decoder and therefore our approach is applicable when viewing images with the widely used standard JPEG decoder.
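As an illustration of pre-editing with an unmodified decoder, the sketch below shrinks small AC coefficients toward zero before standard quantization; it is a simple hand-crafted stand-in for the learned frequency-domain editor, with an arbitrary threshold.

```python
# Illustrative pre-editing of one DCT block: soft-threshold the AC coefficients so more of
# them quantize to zero (cheap for JPEG's run-length coding); the DC term is left untouched.
import numpy as np

def pre_edit_block(dct_block, threshold=4.0):
    """dct_block: (8, 8) DCT coefficients of one block, DC term at [0, 0]."""
    edited = np.sign(dct_block) * np.maximum(np.abs(dct_block) - threshold, 0.0)
    edited[0, 0] = dct_block[0, 0]
    return edited
```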
arXiv Detail & Related papers (2020-09-27T19:24:42Z) - Quantization Guided JPEG Artifact Correction [69.04777875711646]
We develop a novel architecture for artifact correction using the JPEG file's quantization matrix.
This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.
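A minimal sketch of this kind of conditioning: the flattened quantization matrix is embedded and broadcast as extra input channels to a small correction network. Layer sizes here are illustrative and not the paper's architecture.

```python
# Minimal sketch of quantization-matrix conditioning (illustrative sizes, not the paper's).
import torch
import torch.nn as nn

class QGuidedCorrector(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.q_embed = nn.Sequential(nn.Linear(64, channels), nn.ReLU(),
                                     nn.Linear(channels, channels))
        self.body = nn.Sequential(nn.Conv2d(3 + channels, channels, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(channels, 3, 3, padding=1))

    def forward(self, degraded, q_table):
        # degraded: (N, 3, H, W) decoded JPEG; q_table: (N, 64) flattened quantization matrix.
        n, _, h, w = degraded.shape
        q = self.q_embed(q_table.float()).view(n, -1, 1, 1).expand(-1, -1, h, w)
        return degraded + self.body(torch.cat([degraded, q], dim=1))  # residual correction
```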
arXiv Detail & Related papers (2020-04-17T00:10:08Z) - Learning Better Lossless Compression Using Lossy Compression [100.50156325096611]
We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system.
We model the distribution of the residual with a convolutional neural network-based probabilistic model that is conditioned on the BPG reconstruction.
Finally, the image is stored using the concatenation of the bitstreams produced by BPG and the learned residual coder.
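The lossy-plus-residual pattern can be sketched as follows; bpg_encode, bpg_decode, and residual_coder are placeholders for the real BPG binaries and the learned residual coder, and the entropy coding itself is elided.

```python
# Sketch of the lossy-plus-residual pattern with placeholder codec functions.
import numpy as np

def lossless_via_lossy(image, bpg_encode, bpg_decode, residual_coder):
    """image: (H, W, 3) uint8 array. Returns one lossless bitstream."""
    lossy_bits = bpg_encode(image)                 # lossy BPG bitstream
    recon = bpg_decode(lossy_bits)                 # reconstruction available to the decoder
    residual = image.astype(np.int16) - recon.astype(np.int16)
    # The residual's distribution is modeled conditioned on the lossy reconstruction.
    residual_bits = residual_coder(residual, condition=recon)
    return lossy_bits + residual_bits              # concatenation of the two bitstreams
```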
arXiv Detail & Related papers (2020-03-23T11:21:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.