An End-to-End Joint Learning Scheme of Image Compression and Quality
Enhancement with Improved Entropy Minimization
- URL: http://arxiv.org/abs/1912.12817v2
- Date: Fri, 13 Mar 2020 08:45:53 GMT
- Authors: Jooyoung Lee, Seunghyun Cho, Munchurl Kim
- Abstract summary: We propose a novel joint learning scheme of image compression and quality enhancement, called JointIQ-Net.
Our proposed JointIQ-Net combines an image compression sub-network and a quality enhancement sub-network in a cascade, both of which are end-to-end trained in a combined manner within the JointIQ-Net.
- Score: 43.878329556261924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, learned image compression methods have been actively studied. Among
them, entropy-minimization based approaches have achieved superior results
compared to conventional image codecs such as BPG and JPEG2000. However, quality
enhancement and rate minimization are conflicting objectives in image
compression: maintaining high image quality entails less compression, and vice
versa. Nevertheless, by jointly training a separate quality enhancement network
in conjunction with image compression, the coding efficiency can be improved. In
this paper, we propose a novel joint learning scheme of image compression and
quality enhancement, called JointIQ-Net, together with an improved entropy
model, thus achieving significantly improved coding efficiency over previous
methods. Our proposed JointIQ-Net combines an image compression sub-network and
a quality enhancement sub-network in a cascade, both of which are trained
end-to-end in a combined manner within the JointIQ-Net. The JointIQ-Net also
benefits from improved entropy minimization that newly adopts a Gaussian Mixture
Model (GMM) and further exploits global context to estimate the probabilities of
latent representations. To show the effectiveness of our proposed JointIQ-Net,
extensive experiments were performed, showing that the JointIQ-Net achieves a
remarkable improvement in coding efficiency in terms of both PSNR and MS-SSIM
compared to previous learned image compression methods and conventional codecs
such as VVC Intra (VTM 7.1), BPG, and JPEG2000. To the best of our knowledge,
this is the first end-to-end optimized image compression method that outperforms
VTM 7.1 (Intra), the latest reference software of the VVC standard, in terms of
both PSNR and MS-SSIM.
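The GMM-based entropy model mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `gmm_bits` and the fixed mixture parameters are hypothetical, and in JointIQ-Net the per-latent mixture parameters would be predicted by context models, which this sketch omits. It shows only the core idea: the probability mass of an integer-quantized latent symbol is the mixture CDF evaluated over the bin [y - 0.5, y + 0.5], and the coding cost is its negative log-likelihood in bits.

```python
import math


def gmm_bits(y, weights, means, scales):
    """Estimate the bits needed to code a quantized latent symbol y
    under a Gaussian Mixture Model entropy model.

    weights, means, scales: per-component mixture parameters
    (weights should sum to 1, scales must be positive).
    """
    def normal_cdf(x, mu, sigma):
        # CDF of N(mu, sigma^2) via the error function
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    # Probability mass of the quantization bin [y - 0.5, y + 0.5]
    p = 0.0
    for w, mu, sigma in zip(weights, means, scales):
        p += w * (normal_cdf(y + 0.5, mu, sigma) - normal_cdf(y - 0.5, mu, sigma))

    p = max(p, 1e-9)  # avoid log(0) for symbols deep in the tails
    return -math.log2(p)
```

For example, a symbol at the mode of a single standard-normal component costs about 1.4 bits, while a symbol five standard deviations away costs far more; during training, averaging this rate term over all latents gives the rate part of the rate-distortion loss.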
Related papers
- Unifying Generation and Compression: Ultra-low bitrate Image Coding Via
Multi-stage Transformer [35.500720262253054]
This paper introduces a novel Unified Image Generation-Compression (UIGC) paradigm, merging the processes of generation and compression.
A key feature of the UIGC framework is the adoption of vector-quantized (VQ) image models for tokenization.
Experiments demonstrate the superiority of the proposed UIGC framework over existing codecs in perceptual quality and human perception.
arXiv Detail & Related papers (2024-03-06T14:27:02Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt the image classification system, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- Extreme Image Compression using Fine-tuned VQGANs [43.43014096929809]
We introduce vector quantization (VQ)-based generative models into the image compression domain.
The codebook learned by the VQGAN model yields a strong expressive capacity.
The proposed framework outperforms state-of-the-art codecs in terms of perceptual quality-oriented metrics.
arXiv Detail & Related papers (2023-07-17T06:14:19Z)
- JND-Based Perceptual Optimization For Learned Image Compression [42.822121565430926]
We propose a JND-based perceptual quality loss for learned image compression schemes.
We show that the proposed method has led to better perceptual quality than the baseline model under the same bit rate.
arXiv Detail & Related papers (2023-02-25T14:49:09Z)
- High-Fidelity Variable-Rate Image Compression via Invertible Activation Transformation [24.379052026260034]
We propose the Invertible Activation Transformation (IAT) module to tackle the issue of high-fidelity fine variable-rate image compression.
IAT and QLevel together give the image compression model the ability of fine variable-rate control while better maintaining the image fidelity.
Our method outperforms the state-of-the-art variable-rate image compression method by a large margin, especially after multiple re-encodings.
arXiv Detail & Related papers (2022-09-12T07:14:07Z)
- Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Early Exit or Not: Resource-Efficient Blind Quality Enhancement for Compressed Images [54.40852143927333]
Lossy image compression is pervasively conducted to save communication bandwidth, resulting in undesirable compression artifacts.
We propose a resource-efficient blind quality enhancement (RBQE) approach for compressed images.
Our approach can automatically decide to terminate or continue enhancement according to the assessed quality of enhanced images.
arXiv Detail & Related papers (2020-06-30T07:38:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.