Diffusion Model Based Visual Compensation Guidance and Visual Difference
Analysis for No-Reference Image Quality Assessment
- URL: http://arxiv.org/abs/2402.14401v1
- Date: Thu, 22 Feb 2024 09:39:46 GMT
- Title: Diffusion Model Based Visual Compensation Guidance and Visual Difference
Analysis for No-Reference Image Quality Assessment
- Authors: Zhaoyang Wang, Bo Hu, Mingyang Zhang, Jie Li, Leida Li, Maoguo Gong,
Xinbo Gao
- Abstract summary: The diffusion model, a novel class of state-of-the-art (SOTA) generative models, exhibits the capability to model intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
- Score: 82.13830107682232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing free-energy guided No-Reference Image Quality Assessment (NR-IQA)
methods still struggle to balance learning pixel-level feature information
with capturing high-level feature information, and the efficient utilization
of the obtained high-level features remains a challenge. As a novel class of
state-of-the-art (SOTA) generative models, the diffusion model can capture
intricate relationships, enabling a comprehensive understanding of images and
a better learning of both high-level and low-level visual features. In view
of this, we pioneer the exploration of diffusion models in the domain of
NR-IQA. Firstly, we devise a new diffusion restoration network that leverages
the produced enhanced image and noise-containing images, incorporating
nonlinear features obtained during the denoising process of the diffusion
model as high-level visual information. Secondly, two visual evaluation
branches are designed to comprehensively analyze the obtained high-level
feature information: the visual compensation guidance branch, grounded in the
transformer architecture and a noise embedding strategy, and the visual
difference analysis branch, built on the ResNet architecture and a residual
transposed attention block. Extensive experiments on seven public NR-IQA
datasets demonstrate that the proposed model outperforms SOTA NR-IQA methods.
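A minimal sketch of how the two evaluation branches described above might consume the diffusion features (all module names, feature shapes, and the fusion head are our assumptions, not the authors' released code; the residual transposed attention block is replaced here by a plain ResNet backbone for brevity):

```python
# Hypothetical two-branch evaluation head; illustrative only.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class VisualCompensationBranch(nn.Module):
    """Transformer branch with a noise-embedding token (assumed design)."""
    def __init__(self, dim=256, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.noise_embed = nn.Linear(1, dim)  # embeds the noise level

    def forward(self, tokens, noise_level):
        # tokens: (B, N, dim) patch features; noise_level: (B, 1)
        noise_tok = self.noise_embed(noise_level).unsqueeze(1)  # (B, 1, dim)
        x = torch.cat([noise_tok, tokens], dim=1)
        return self.encoder(x)[:, 0]  # pooled noise token as branch output

class VisualDifferenceBranch(nn.Module):
    """ResNet branch analyzing the difference between the distorted image
    and the diffusion-restored image (assumed formulation)."""
    def __init__(self, dim=256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.backbone = backbone

    def forward(self, distorted, restored):
        return self.backbone(distorted - restored)  # residual difference map

class TwoBranchIQA(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.comp = VisualCompensationBranch(dim)
        self.diff = VisualDifferenceBranch(dim)
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(),
                                  nn.Linear(dim, 1))

    def forward(self, tokens, noise_level, distorted, restored):
        f = torch.cat([self.comp(tokens, noise_level),
                       self.diff(distorted, restored)], dim=-1)
        return self.head(f).squeeze(-1)  # scalar quality score per image

# Shape check with random tensors:
model = TwoBranchIQA()
score = model(torch.randn(2, 196, 256), torch.rand(2, 1),
              torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(score.shape)  # torch.Size([2])
```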
Related papers
- Dual-Representation Interaction Driven Image Quality Assessment with Restoration Assistance [11.983231834400698]
No-Reference Image Quality Assessment for distorted images has always been a challenging problem due to image content variance and distortion diversity.
Previous IQA models mostly encode explicit single-quality features of synthetic images to obtain quality-aware representations for quality score prediction.
We introduce the DRI method to obtain degradation vectors and quality vectors of images, which separately model the degradation and quality information of low-quality images.
arXiv Detail & Related papers (2024-11-26T12:48:47Z)
- GenzIQA: Generalized Image Quality Assessment using Prompt-Guided Latent Diffusion Models [7.291687946822539]
A major drawback of state-of-the-art NR-IQA methods is their limited ability to generalize across diverse IQA settings.
Recent text-to-image generative models generate meaningful visual concepts with fine details related to text concepts.
In this work, we leverage the denoising process of such diffusion models for generalized IQA by understanding the degree of alignment between learnable quality-aware text prompts and images.
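As a rough illustration of the alignment idea (conceptual only: the denoiser and prompt embeddings below are stand-in stubs, whereas GenzIQA reads attention statistics inside a pretrained latent diffusion model):

```python
# Conceptual sketch: score quality as the relative denoising fit of an image
# under "high quality" vs "low quality" text conditioning. Not GenzIQA's code.
import torch
import torch.nn as nn

class StubDenoiser(nn.Module):
    """Placeholder for a text-conditioned diffusion UNet (assumption)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Conv2d(4 + dim, 4, 3, padding=1)

    def forward(self, latent, text_emb):
        # Broadcast the prompt embedding spatially and predict the noise.
        b, _, h, w = latent.shape
        cond = text_emb[:, :, None, None].expand(b, -1, h, w)
        return self.net(torch.cat([latent, cond], dim=1))

def quality_score(latent, noise, denoiser, emb_good, emb_bad):
    """Higher score = the noisy latent is explained better by the
    'good quality' prompt than by the 'bad quality' prompt."""
    noisy = latent + noise
    err_good = (denoiser(noisy, emb_good) - noise).pow(2).mean(dim=(1, 2, 3))
    err_bad = (denoiser(noisy, emb_bad) - noise).pow(2).mean(dim=(1, 2, 3))
    return err_bad - err_good  # (B,)

denoiser = StubDenoiser()
latent, noise = torch.randn(2, 4, 32, 32), torch.randn(2, 4, 32, 32)
emb_good, emb_bad = torch.randn(2, 64), torch.randn(2, 64)
print(quality_score(latent, noise, denoiser, emb_good, emb_bad).shape)  # (2,)
```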
arXiv Detail & Related papers (2024-06-07T05:46:39Z)
- Large Multi-modality Model Assisted AI-Generated Image Quality Assessment [53.182136445844904]
We introduce a large Multi-modality model Assisted AI-Generated Image Quality Assessment (MA-AGIQA) model.
It uses semantically informed guidance to capture semantic information, extracting semantic vectors through carefully designed text prompts.
It achieves state-of-the-art performance and demonstrates superior generalization in assessing the quality of AI-generated images.
arXiv Detail & Related papers (2024-04-27T02:40:36Z)
- Comparison of No-Reference Image Quality Models via MAP Estimation in Diffusion Latents [99.19391983670569]
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
Different NR-IQA models are likely to induce different enhanced images, which are ultimately subject to psychophysical testing.
This leads to a new computational method for comparing NR-IQA models within the analysis-by-synthesis framework.
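Schematically, as we read this summary, the objective has the form below, where λ is a trade-off weight, D decodes a diffusion latent z to an image, and q is the NR-IQA model under comparison:

```latex
\hat{z} = \arg\max_{z} \; \log p(z) + \lambda \, q\!\left(D(z)\right),
\qquad \hat{x} = D(\hat{z})
```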
arXiv Detail & Related papers (2024-03-11T03:35:41Z)
- Transformer-based No-Reference Image Quality Assessment via Supervised Contrastive Learning [36.695247860715874]
We propose a novel Supervised Contrastive Learning (SCL) and Transformer-based NR-IQA model, SaTQA.
We first train a model on a large-scale synthetic dataset by SCL to extract degradation features of images with various distortion types and levels.
To further extract distortion information from images, we propose a backbone network incorporating the Multi-Stream Block (MSB) by combining the CNN inductive bias and Transformer long-term dependence modeling capability.
Experimental results on seven standard IQA datasets show that SaTQA outperforms the state-of-the-art methods on both synthetic and authentic datasets.
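For reference, SCL here presumably instantiates the standard supervised contrastive loss (Khosla et al., 2020), with distortion type and level playing the role of the label:

```latex
\mathcal{L}_{\mathrm{SCL}}
= \sum_{i \in I} \frac{-1}{|P(i)|}
  \sum_{p \in P(i)}
  \log \frac{\exp(z_i \cdot z_p / \tau)}
            {\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}
```

where the z are L2-normalized embeddings, P(i) is the set of in-batch positives sharing the anchor i's label, A(i) the remaining in-batch samples, and τ a temperature.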
arXiv Detail & Related papers (2023-12-12T06:01:41Z)
- Diffusion Models for Image Restoration and Enhancement -- A Comprehensive Survey [96.99328714941657]
We present a comprehensive review of recent diffusion model-based methods on image restoration.
We classify and emphasize the innovative designs using diffusion models for both IR and blind/real-world IR.
We propose five potential and challenging directions for the future research of diffusion model-based IR.
arXiv Detail & Related papers (2023-08-18T08:40:38Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
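A minimal sketch of what a degradation-conditioned diffusion training step could look like (the conditioning interface, the stub denoiser, and the omission of an explicit timestep input are our simplifications, not the paper's design):

```python
# Standard DDPM noise-prediction loss with an extra degradation embedding.
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    """Placeholder denoiser conditioned on a degradation representation."""
    def __init__(self, c=64):
        super().__init__()
        self.net = nn.Conv2d(3 + c, 3, 3, padding=1)

    def forward(self, x_t, deg_emb):
        b, _, h, w = x_t.shape
        cond = deg_emb[:, :, None, None].expand(b, -1, h, w)
        return self.net(torch.cat([x_t, cond], dim=1))

def training_step(x0, deg_emb, model, alpha_bar_t):
    # Forward-diffuse the clean image, then regress the injected noise.
    eps = torch.randn_like(x0)
    x_t = alpha_bar_t.sqrt() * x0 + (1 - alpha_bar_t).sqrt() * eps
    return (model(x_t, deg_emb) - eps).pow(2).mean()

model = CondDenoiser()
loss = training_step(torch.randn(2, 3, 64, 64), torch.randn(2, 64),
                     model, torch.tensor(0.5))
loss.backward()
```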
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- High-Frequency aware Perceptual Image Enhancement [0.08460698440162888]
We introduce a novel deep neural network suitable for multi-scale analysis and propose efficient model-agnostic methods.
Our model can be applied to multi-scale image enhancement problems including denoising, deblurring and single image super-resolution.
arXiv Detail & Related papers (2021-05-25T07:33:14Z)