JND-Based Perceptual Optimization For Learned Image Compression
- URL: http://arxiv.org/abs/2302.13092v1
- Date: Sat, 25 Feb 2023 14:49:09 GMT
- Title: JND-Based Perceptual Optimization For Learned Image Compression
- Authors: Feng Ding, Jian Jin, Lili Meng, Weisi Lin
- Abstract summary: We propose a JND-based perceptual quality loss for learned image compression schemes.
We show that the proposed method has led to better perceptual quality than the baseline model under the same bit rate.
- Score: 42.822121565430926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, learned image compression schemes have achieved remarkable
improvements in image fidelity (e.g., PSNR and MS-SSIM) compared to
conventional hybrid image coding ones due to their high-efficiency non-linear
transform, end-to-end optimization frameworks, etc. However, few of them take
the Just Noticeable Difference (JND) characteristic of the Human Visual System
(HVS) into account and optimize learned image compression towards perceptual
quality. To address this issue, a JND-based perceptual quality loss is
proposed. Considering that the amount of distortion in the compressed image
varies across training epochs and Quantization Parameters (QPs), we develop a
distortion-aware adjustor. Combining the two, we can better allocate the
distortion in the compressed image under the guidance of JND to preserve high
perceptual quality. All these designs
enable the proposed method to be flexibly applied to various learned image
compression schemes with high scalability and plug-and-play advantages.
Experimental results on the Kodak dataset demonstrate that the proposed method
has led to better perceptual quality than the baseline model under the same bit
rate.
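The core idea of the abstract, weighting pixel-wise distortion by a Just Noticeable Difference threshold map so that sub-threshold errors go unpenalized, can be sketched as follows. This is an illustrative simplification, not the paper's actual loss: the function name, the supra-threshold squared-error form, and the scalar `adjustor` stand-in for the distortion-aware adjustor are all assumptions.

```python
import numpy as np

def jnd_weighted_loss(original, compressed, jnd_map, adjustor=1.0):
    """Illustrative JND-guided distortion term (a sketch, not the paper's loss).

    original, compressed: float arrays in [0, 1], same shape.
    jnd_map: per-pixel visibility thresholds from any JND model.
    adjustor: scalar stand-in for the paper's distortion-aware adjustor,
              which would rescale the penalty across epochs and QPs.
    """
    error = np.abs(original - compressed)
    # Errors below the JND threshold are assumed invisible to the HVS;
    # only the supra-threshold portion of the error is penalized.
    visible = np.maximum(error - jnd_map, 0.0)
    return adjustor * float(np.mean(visible ** 2))
```

Under such a formulation, distortion is "free" wherever it stays below the JND threshold, so a rate-constrained codec is encouraged to concentrate its unavoidable errors in perceptually insensitive regions.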
Related papers
- Unifying Generation and Compression: Ultra-low bitrate Image Coding Via
Multi-stage Transformer [35.500720262253054]
This paper introduces a novel Unified Image Generation-Compression (UIGC) paradigm, merging the processes of generation and compression.
A key feature of the UIGC framework is the adoption of vector-quantized (VQ) image models for tokenization.
Experiments demonstrate the superiority of the proposed UIGC framework over existing codecs in perceptual quality and human perception.
arXiv Detail & Related papers (2024-03-06T14:27:02Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt the image classification system, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- High-Perceptual Quality JPEG Decoding via Posterior Sampling [13.238373528922194]
We propose a different paradigm for JPEG artifact correction.
We aim to obtain sharp, detailed, and visually pleasing reconstructed images, while being consistent with the compressed input.
Our solution offers a diverse set of plausible and fast reconstructions for a given input with perfect consistency.
arXiv Detail & Related papers (2022-11-21T19:47:59Z)
- High-Fidelity Variable-Rate Image Compression via Invertible Activation Transformation [24.379052026260034]
We propose the Invertible Activation Transformation (IAT) module to tackle the issue of high-fidelity fine variable-rate image compression.
IAT and QLevel together give the image compression model the ability of fine variable-rate control while better maintaining the image fidelity.
Our method outperforms the state-of-the-art variable-rate image compression method by a large margin, especially after multiple re-encodings.
arXiv Detail & Related papers (2022-09-12T07:14:07Z) - Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.
arXiv Detail & Related papers (2022-04-26T01:35:02Z) - Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG
Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z)
- Quantization Guided JPEG Artifact Correction [69.04777875711646]
We develop a novel architecture for artifact correction using the JPEG file's quantization matrix.
This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.
arXiv Detail & Related papers (2020-04-17T00:10:08Z)
- An End-to-End Joint Learning Scheme of Image Compression and Quality Enhancement with Improved Entropy Minimization [43.878329556261924]
We propose a novel joint learning scheme of image compression and quality enhancement, called JointIQ-Net.
Our proposed JointIQ-Net cascades an image compression sub-network and a quality enhancement sub-network, both of which are trained end-to-end in a combined manner.
arXiv Detail & Related papers (2019-12-30T05:10:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.