Learning to Improve Image Compression without Changing the Standard
Decoder
- URL: http://arxiv.org/abs/2009.12927v3
- Date: Fri, 23 Oct 2020 20:48:11 GMT
- Title: Learning to Improve Image Compression without Changing the Standard
Decoder
- Authors: Yannick Strümpler, Ren Yang, Radu Timofte
- Abstract summary: We propose learning to improve the encoding performance with the standard decoder.
Specifically, a frequency-domain pre-editing method is proposed to optimize the distribution of DCT coefficients.
We do not modify the JPEG decoder and therefore our approach is applicable when viewing images with the widely used standard JPEG decoder.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years we have witnessed an increasing interest in applying Deep
Neural Networks (DNNs) to improve the rate-distortion performance in image
compression. However, the existing approaches either train a post-processing
DNN on the decoder side, or propose learning for image compression in an
end-to-end manner. Either way, the trained DNNs are required on the decoder
side, making them incompatible with the standard image decoders (e.g., JPEG)
on personal computers and mobile devices. Therefore, we propose learning to
improve the encoding performance with the standard decoder. In this paper, we
take JPEG as an example. Specifically, a frequency-domain pre-editing method
is proposed to optimize the distribution of DCT coefficients, aiming to
facilitate JPEG compression. Moreover, we propose learning the JPEG quantization table
jointly with the pre-editing network. Most importantly, we do not modify the
JPEG decoder and therefore our approach is applicable when viewing images with
the widely used standard JPEG decoder. The experiments validate that our
approach successfully improves the rate-distortion performance of JPEG in terms
of various quality metrics, such as PSNR, MS-SSIM and LPIPS. Visually, this
translates to better overall color retention especially when strong compression
is applied. The code is available at
https://github.com/YannickStruempler/LearnedJPEG.
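The quantization table that the paper proposes to learn is the standard JPEG one: an 8x8 matrix that divides the block's DCT coefficients before rounding, and that the unmodified decoder multiplies back. As a minimal sketch of this encode/decode round trip (illustrative only, not the authors' implementation; a learned table would replace the flat placeholder table below):

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, qtable):
    """Encoder side: level-shift, 2D DCT, divide by the table, round."""
    coeffs = dctn(block - 128.0, norm="ortho")  # level-shift as in JPEG
    return np.round(coeffs / qtable)

def dequantize_block(qcoeffs, qtable):
    """Standard decoder side: multiply back and inverse DCT."""
    return idctn(qcoeffs * qtable, norm="ortho") + 128.0

# Flat placeholder table with a single scale; the paper instead learns
# the table entries jointly with a pre-editing network.
qtable = np.full((8, 8), 4.0)

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(np.float64)

recon = dequantize_block(quantize_block(block, qtable), qtable)
print("max reconstruction error:", np.abs(block - recon).max())
```

Because the decoder only ever sees quantized coefficients and the table stored in the file header, any encoder-side change to either (pre-editing the coefficients, or the table itself) stays fully decodable by a standard JPEG decoder.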
Related papers
- JPEG Inspired Deep Learning [4.958744940097937]
Well-crafted JPEG compression can actually improve the performance of deep learning (DL) models.
We propose JPEG-DL, a novel DL framework that prepends any underlying DNN architecture with a trainable JPEG compression layer.
arXiv Detail & Related papers (2024-10-09T17:23:54Z)
- JDEC: JPEG Decoding via Enhanced Continuous Cosine Coefficients [17.437568540883106]
We propose a practical approach to JPEG image decoding, utilizing a local implicit neural representation with continuous cosine formulation.
Our proposed network achieves state-of-the-art performance in flexible color image JPEG artifact removal tasks.
arXiv Detail & Related papers (2024-04-03T03:28:04Z)
- Unified learning-based lossy and lossless JPEG recompression [15.922937139019547]
We propose a unified lossy and lossless JPEG recompression framework, consisting of a learned quantization table and Markovian hierarchical variational autoencoders.
Experiments show that our method can achieve arbitrarily low distortion when the bitrate is close to the upper bound.
arXiv Detail & Related papers (2023-12-05T12:07:27Z)
- Learned Lossless Compression for JPEG via Frequency-Domain Prediction [50.20577108662153]
We propose a novel framework for learned lossless compression of JPEG images.
To enable learning in the frequency domain, DCT coefficients are partitioned into groups to utilize implicit local redundancy.
An autoencoder-like architecture is designed based on the weight-shared blocks to realize entropy modeling of grouped DCT coefficients.
arXiv Detail & Related papers (2023-03-05T13:15:28Z)
- Practical Learned Lossless JPEG Recompression with Multi-Level Cross-Channel Entropy Model in the DCT Domain [10.655855413391324]
We propose a deep learning based JPEG recompression method that operates in the DCT domain.
Experiments show that our method achieves state-of-the-art performance compared with traditional JPEG recompression methods.
arXiv Detail & Related papers (2022-03-30T14:36:13Z)
- Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Quantization Guided JPEG Artifact Correction [69.04777875711646]
We develop a novel architecture for artifact correction using the JPEG file's quantization matrix.
This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.
arXiv Detail & Related papers (2020-04-17T00:10:08Z)
- Content Adaptive and Error Propagation Aware Deep Video Compression [110.31693187153084]
We propose a content adaptive and error propagation aware video compression system.
Our method employs a joint training strategy by considering the compression performance of multiple consecutive frames instead of a single frame.
Instead of using the hand-crafted coding modes in the traditional compression systems, we design an online encoder updating scheme in our system.
arXiv Detail & Related papers (2020-03-25T09:04:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.