Early Exit or Not: Resource-Efficient Blind Quality Enhancement for
Compressed Images
- URL: http://arxiv.org/abs/2006.16581v5
- Date: Mon, 12 Oct 2020 08:23:58 GMT
- Title: Early Exit or Not: Resource-Efficient Blind Quality Enhancement for
Compressed Images
- Authors: Qunliang Xing, Mai Xu, Tianyi Li, Zhenyu Guan
- Abstract summary: Lossy image compression is pervasively conducted to save communication bandwidth, resulting in undesirable compression artifacts.
We propose a resource-efficient blind quality enhancement (RBQE) approach for compressed images.
Our approach can automatically decide to terminate or continue enhancement according to the assessed quality of enhanced images.
- Score: 54.40852143927333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lossy image compression is pervasively conducted to save communication
bandwidth, resulting in undesirable compression artifacts. Recently, extensive
approaches have been proposed to reduce image compression artifacts at the
decoder side; however, they require a series of architecture-identical models
to process images with different quality, which are inefficient and
resource-consuming. Besides, it is common in practice that compressed images
are with unknown quality and it is intractable for existing approaches to
select a suitable model for blind quality enhancement. In this paper, we
propose a resource-efficient blind quality enhancement (RBQE) approach for
compressed images. Specifically, our approach blindly and progressively
enhances the quality of compressed images through a dynamic deep neural network
(DNN), in which an early-exit strategy is embedded. Then, our approach can
automatically decide to terminate or continue enhancement according to the
assessed quality of enhanced images. Consequently, slight artifacts can be
removed in a simpler and faster process, while severe artifacts can be
further removed in a more elaborate process. Extensive experiments demonstrate
that our RBQE approach achieves state-of-the-art performance in terms of both
blind quality enhancement and resource efficiency. The code is available at
https://github.com/RyanXingQL/RBQE.
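The early-exit mechanism described in the abstract can be sketched as follows. This is a minimal illustration of the control flow only, assuming hypothetical `enhance_stage` and `assess_quality` stand-ins rather than the paper's actual dynamic DNN and quality-assessment module:

```python
# Sketch of the early-exit idea in RBQE: a cascade of progressive
# enhancement stages with a blind quality check after each stage.
# `enhance_stage` and `assess_quality` are hypothetical stand-ins.

def enhance_stage(image):
    """Stand-in for one enhancement sub-network: each pass is modeled
    as halving the residual artifact energy."""
    return {"pixels": image["pixels"], "artifact": image["artifact"] * 0.5}

def assess_quality(image):
    """Stand-in for a no-reference quality score in [0, 1];
    higher means fewer residual artifacts."""
    return 1.0 - image["artifact"]

def early_exit_enhance(image, num_stages=5, quality_threshold=0.85):
    """Enhance progressively; terminate once the assessed quality of
    the intermediate output passes the threshold."""
    for stage in range(num_stages):
        image = enhance_stage(image)
        if assess_quality(image) >= quality_threshold:
            return image, stage + 1   # slight artifacts: exit early
    return image, num_stages          # severe artifacts: run the full cascade

# A lightly compressed input should exit after fewer stages than a
# heavily compressed one.
_, stages_light = early_exit_enhance({"pixels": None, "artifact": 0.4})
_, stages_heavy = early_exit_enhance({"pixels": None, "artifact": 0.8})
```

The per-stage quality check is what lets easy inputs consume less compute, while hard inputs traverse the full cascade.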
Related papers
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt the image classification system, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- VCISR: Blind Single Image Super-Resolution with Video Compression Synthetic Data [18.877077302923713]
We present a video compression-based degradation model to synthesize low-resolution image data in the blind SISR task.
Our proposed image synthesizing method is widely applicable to existing image datasets.
By introducing video coding artifacts into SISR degradation models, neural networks can super-resolve images while also restoring video compression degradations.
arXiv Detail & Related papers (2023-11-02T05:24:19Z)
- Extreme Image Compression using Fine-tuned VQGANs [43.43014096929809]
We introduce vector quantization (VQ)-based generative models into the image compression domain.
The codebook learned by the VQGAN model yields a strong expressive capacity.
The proposed framework outperforms state-of-the-art codecs in terms of perceptual quality-oriented metrics.
arXiv Detail & Related papers (2023-07-17T06:14:19Z)
- Perceptual Quality Assessment for Fine-Grained Compressed Images [38.615746092795625]
We propose a full-reference image quality assessment (FR-IQA) method for compressed images of fine-grained levels.
The proposed method is validated on the fine-grained compression image quality assessment (FGIQA) database.
arXiv Detail & Related papers (2022-06-08T12:56:45Z)
- Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
- Learning a Single Model with a Wide Range of Quality Factors for JPEG Image Artifacts Removal [24.25688335628976]
Lossy compression brings artifacts into the compressed image and degrades the visual quality.
In this paper, we propose a highly robust compression artifacts removal network.
Our proposed network is a single model approach that can be trained for handling a wide range of quality factors.
arXiv Detail & Related papers (2020-09-15T08:16:58Z)
- Quantization Guided JPEG Artifact Correction [69.04777875711646]
We develop a novel architecture for artifact correction using the quantization matrix stored in the JPEG file.
This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.
arXiv Detail & Related papers (2020-04-17T00:10:08Z)
- Discernible Image Compression [124.08063151879173]
This paper aims to produce compressed images by pursuing both appearance and perceptual consistency.
Based on the encoder-decoder framework, we propose using a pre-trained CNN to extract features of the original and compressed images.
Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.
arXiv Detail & Related papers (2020-02-17T07:35:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.