When No-Reference Image Quality Models Meet MAP Estimation in Diffusion Latents
- URL: http://arxiv.org/abs/2403.06406v2
- Date: Wed, 15 Jan 2025 12:36:24 GMT
- Title: When No-Reference Image Quality Models Meet MAP Estimation in Diffusion Latents
- Authors: Weixia Zhang, Dingquan Li, Guangtao Zhai, Xiaokang Yang, Kede Ma
- Abstract summary: No-reference image quality assessment (NR-IQA) models can effectively quantify perceived image quality.
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
- Score: 92.45867913876691
- Abstract: Contemporary no-reference image quality assessment (NR-IQA) models can effectively quantify perceived image quality, often achieving strong correlations with human perceptual scores on standard IQA benchmarks. Yet, limited efforts have been devoted to treating NR-IQA models as natural image priors for real-world image enhancement, and consequently to comparing them from a perceptual optimization standpoint. In this work, we show -- for the first time -- that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement. This is achieved by performing gradient ascent in the diffusion latent space rather than in the raw pixel domain, leveraging a pretrained differentiable and bijective diffusion process. Different NR-IQA models are likely to lead to different enhanced outputs, which in turn provides a new computational means of comparing them. Unlike conventional correlation-based measures, our comparison method offers complementary insights into the respective strengths and weaknesses of the competing NR-IQA models in perceptual optimization scenarios. Additionally, we aim to improve the best-performing NR-IQA model in diffusion latent MAP estimation by incorporating the advantages of other top-performing methods. The resulting model delivers noticeably better results in enhancing real-world images afflicted by unknown and complex distortions, all while preserving a high degree of image fidelity.
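The mechanism described in the abstract can be pictured as a short optimization loop: invert the degraded image into the latent of a bijective diffusion process, then ascend the gradient of an NR-IQA score plus a latent prior. The sketch below is illustrative only; `diffusion.encode`, `diffusion.decode`, and `iqa_model` are hypothetical stand-ins (e.g., a DDIM-style inversion wrapper and any differentiable NR-IQA scorer), not the authors' implementation.

```python
# Minimal sketch of MAP estimation in a diffusion latent space (hypothetical APIs).
# `diffusion.encode` / `diffusion.decode` stand for a bijective, differentiable
# image <-> latent mapping; `iqa_model` is any differentiable NR-IQA scorer.
import torch

def enhance(image, diffusion, iqa_model, steps=50, lr=0.05, prior_weight=0.1):
    # Invert the degraded image into the diffusion latent space.
    z = diffusion.encode(image).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        x = diffusion.decode(z)                       # latent -> image (differentiable)
        quality = iqa_model(x).mean()                 # data term: NR-IQA score, higher is better
        log_prior = -0.5 * (z ** 2).sum()             # Gaussian prior on the latent
        loss = -(quality + prior_weight * log_prior)  # gradient ascent on the log posterior
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return diffusion.decode(z).detach()
```

Different choices of `iqa_model` would drive the latent toward different enhanced outputs, which is exactly what makes this loop usable as a comparison tool between NR-IQA models.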
Related papers
- IQA-Adapter: Exploring Knowledge Transfer from Image Quality Assessment to Diffusion-based Generative Models [0.5356944479760104]
We introduce IQA-Adapter, a novel architecture that conditions generation on target quality levels by learning the relationship between images and quality scores.
IQA-Adapter achieves up to a 10% improvement across multiple objective metrics, as confirmed by a subjective study.
Our quality-aware methods also provide insights into the adversarial robustness of IQA models.
arXiv Detail & Related papers (2024-12-02T18:40:19Z)
- Sliced Maximal Information Coefficient: A Training-Free Approach for Image Quality Assessment Enhancement [12.628718661568048]
We aim to explore a generalized human visual attention estimation strategy to mimic the process of human quality rating.
In particular, we model human attention generation by measuring the statistical dependency between the degraded image and the reference image.
Experimental results verify that the performance of existing IQA models can be consistently improved when our attention module is incorporated.
arXiv Detail & Related papers (2024-08-19T11:55:32Z)
- Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare [99.57567498494448]
We introduce Compare2Score, an all-around LMM-based no-reference IQA model.
During training, we generate scaled-up comparative instructions by comparing images from the same IQA dataset.
Experiments on nine IQA datasets validate that Compare2Score effectively bridges text-defined comparative levels during training.
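One plausible reading of the training recipe is that MOS-annotated images from a single dataset are paired up and their score differences are discretized into textual comparison levels. The snippet below is a hedged illustration of that idea; the level names, thresholds, and prompt wording are assumptions, not the Compare2Score specification.

```python
# Hedged sketch: turning MOS-annotated images from one IQA dataset into pairwise
# comparative instructions. Levels and thresholds are illustrative assumptions.
import itertools
import random

LEVELS = ["much worse", "worse", "similar", "better", "much better"]

def comparative_level(mos_a, mos_b, band=5.0):
    # Map the MOS difference to a discrete comparative level.
    diff = mos_a - mos_b
    if diff <= -2 * band: return LEVELS[0]
    if diff <= -band:     return LEVELS[1]
    if diff <  band:      return LEVELS[2]
    if diff <  2 * band:  return LEVELS[3]
    return LEVELS[4]

def build_instructions(samples, num_pairs=10000):
    # samples: list of (image_path, mos) tuples from a single IQA dataset
    all_pairs = list(itertools.combinations(samples, 2))
    pairs = random.sample(all_pairs, min(num_pairs, len(all_pairs)))
    return [{
        "images": [img_a, img_b],
        "question": "Compared to the second image, the quality of the first image is:",
        "answer": comparative_level(mos_a, mos_b),
    } for (img_a, mos_a), (img_b, mos_b) in pairs]
```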
arXiv Detail & Related papers (2024-05-29T17:26:09Z)
- Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization [18.95463890154886]
No-Reference Image Quality Assessment (NR-IQA) models play a crucial role in the media industry.
These models are found to be vulnerable to adversarial attacks, which introduce imperceptible perturbations to input images.
We propose a defense method to improve the stability in predicted scores when attacked by small perturbations.
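One way to realize such a defense is to penalize the norm of the score gradient with respect to the input during training, so that small perturbations move the prediction less. The sketch below assumes a generic PyTorch NR-IQA `model` and regression criterion; the penalty weight and norm are illustrative choices, not the paper's exact settings.

```python
# Hedged sketch: training an NR-IQA model with a gradient-norm penalty to
# stabilize predicted scores under small input perturbations.
import torch

def regularized_loss(model, images, mos, quality_criterion, penalty_weight=0.01):
    images = images.clone().requires_grad_(True)
    scores = model(images)
    task_loss = quality_criterion(scores, mos)

    # Norm of the score gradient w.r.t. the input acts as a stability term.
    grad = torch.autograd.grad(scores.sum(), images, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(dim=1).mean()

    return task_loss + penalty_weight * grad_norm
```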
arXiv Detail & Related papers (2024-03-18T01:11:53Z)
- Black-box Adversarial Attacks Against Image Quality Assessment Models [16.11900427447442]
The goal of No-Reference Image Quality Assessment (NR-IQA) is to predict the perceptual quality of an image in line with its subjective evaluation.
This paper makes the first attempt to explore the black-box adversarial attacks on NR-IQA models.
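In the black-box setting, the attacker can only query predicted scores. A minimal illustration of this threat model is a random-search attack that keeps whichever small perturbation shifts the score the most; the query budget and L_inf bound below are arbitrary, and this is not the attack proposed in the paper.

```python
# Hedged sketch of a query-only (score-based) black-box attack on an NR-IQA model.
import torch

def random_search_attack(score_fn, image, eps=2/255, queries=1000, step=1/255):
    delta = torch.zeros_like(image)
    best_gap = 0.0
    clean = score_fn(image).item()             # score queries only, no gradients

    for _ in range(queries):
        candidate = (delta + step * torch.randn_like(image)).clamp(-eps, eps)
        perturbed = (image + candidate).clamp(0, 1)
        gap = abs(score_fn(perturbed).item() - clean)
        if gap > best_gap:                     # keep the most score-disruptive noise
            best_gap, delta = gap, candidate

    return (image + delta).clamp(0, 1), best_gap
```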
arXiv Detail & Related papers (2024-02-27T14:16:39Z)
- Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment [78.21609845377644]
We leverage a novel class of state-of-the-art (SOTA) generative models, which exhibit the capability to model intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
arXiv Detail & Related papers (2024-02-22T09:39:46Z)
- Less is More: Learning Reference Knowledge Using No-Reference Image Quality Assessment [58.09173822651016]
We argue that it is possible to learn reference knowledge under the No-Reference Image Quality Assessment setting.
We propose a new framework to learn comparative knowledge from non-aligned reference images.
Experiments on eight standard NR-IQA datasets demonstrate superior performance to state-of-the-art NR-IQA methods.
arXiv Detail & Related papers (2023-12-01T13:56:01Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
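The distillation idea can be sketched as follows: a full-reference teacher scores distorted/reference pairs, and a blind student learns to reproduce those scores from the distorted image alone. Model names and the MSE objective below are assumptions for illustration, not the exact training setup.

```python
# Hedged sketch: distilling a full-reference (FR) teacher into a blind (NR) student.
import torch
import torch.nn.functional as F

def distillation_step(student, fr_teacher, distorted, reference, optimizer):
    with torch.no_grad():
        target = fr_teacher(distorted, reference)   # teacher sees both images
    pred = student(distorted)                       # student sees the distorted image only
    loss = F.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```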
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
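The fidelity loss mentioned above compares two Bernoulli distributions: the ground-truth probability that one image in a pair has higher quality, and the probability implied by the model's score difference (commonly via a Thurstone-style Gaussian CDF). A hedged sketch under those common assumptions, which may differ in detail from the paper:

```python
# Hedged sketch of the fidelity loss for pairwise (learning-to-rank) BIQA.
import torch

def fidelity_loss(score_a, score_b, p_gt, eps=1e-8):
    # Predicted probability that image A has higher quality than image B
    # (Thurstone-style mapping of the score difference through a Gaussian CDF).
    p_hat = 0.5 * (1 + torch.erf((score_a - score_b) / (2 ** 0.5)))
    # Fidelity between the two Bernoulli distributions (1 means identical).
    fid = torch.sqrt(p_hat * p_gt + eps) + torch.sqrt((1 - p_hat) * (1 - p_gt) + eps)
    return (1 - fid).mean()
```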
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
- Comparison of Image Quality Models for Optimization of Image Processing Systems [41.57409136781606]
We use eleven full-reference IQA models to train deep neural networks for four low-level vision tasks.
Subjective testing on the optimized images allows us to rank the competing models in terms of their perceptual performance.
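Perceptual optimization here means using a differentiable full-reference IQA model directly as the training objective of a task network (e.g., denoising or super-resolution), then judging the optimized outputs subjectively. A minimal sketch, with `fr_iqa` as a placeholder for any differentiable FR-IQA metric where higher means better:

```python
# Hedged sketch: training a restoration network with an FR-IQA model as the loss.
import torch

def train_step(restorer, fr_iqa, degraded, reference, optimizer):
    restored = restorer(degraded)
    loss = -fr_iqa(restored, reference).mean()   # maximize the predicted quality
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```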
arXiv Detail & Related papers (2020-05-04T09:26:40Z)