ARNIQA: Learning Distortion Manifold for Image Quality Assessment
- URL: http://arxiv.org/abs/2310.14918v2
- Date: Sat, 4 Nov 2023 18:51:11 GMT
- Title: ARNIQA: Learning Distortion Manifold for Image Quality Assessment
- Authors: Lorenzo Agnolucci, Leonardo Galteri, Marco Bertini, Alberto Del Bimbo
- Abstract summary: No-Reference Image Quality Assessment (NR-IQA) aims to develop methods to measure image quality in alignment with human perception without the need for a high-quality reference image.
We propose a self-supervised approach named ARNIQA for modeling the image distortion manifold to obtain quality representations in an intrinsic manner.
- Score: 28.773037051085318
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: No-Reference Image Quality Assessment (NR-IQA) aims to develop methods to
measure image quality in alignment with human perception without the need for a
high-quality reference image. In this work, we propose a self-supervised
approach named ARNIQA (leArning distoRtion maNifold for Image Quality
Assessment) for modeling the image distortion manifold to obtain quality
representations in an intrinsic manner. First, we introduce an image
degradation model that randomly composes ordered sequences of consecutively
applied distortions. In this way, we can synthetically degrade images with a
large variety of degradation patterns. Second, we propose to train our model by
maximizing the similarity between the representations of patches of different
images distorted equally, despite varying content. Therefore, images degraded
in the same manner correspond to neighboring positions within the distortion
manifold. Finally, we map the image representations to the quality scores with
a simple linear regressor, thus without fine-tuning the encoder weights. The
experiments show that our approach achieves state-of-the-art performance on
several datasets. In addition, ARNIQA demonstrates improved data efficiency,
generalization capabilities, and robustness compared to competing methods. The
code and the model are publicly available at
https://github.com/miccunifi/ARNIQA.
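
As a rough illustration of the first step, the sketch below composes an ordered sequence of distortions and applies them consecutively to an image. The distortion functions, group layout, and severity levels are placeholders chosen for the example, not the ones used by ARNIQA.

```python
import random
from PIL import ImageEnhance, ImageFilter

def gaussian_blur(img, level):
    # stronger severity level -> larger blur radius
    return img.filter(ImageFilter.GaussianBlur(radius=level))

def lower_contrast(img, level):
    return ImageEnhance.Contrast(img).enhance(1.0 - 0.15 * level)

def shift_brightness(img, level):
    return ImageEnhance.Brightness(img).enhance(1.0 + 0.1 * level)

# Each group gathers distortions of a similar kind; a composed degradation
# draws at most one distortion per group and applies the picks consecutively.
DISTORTION_GROUPS = [[gaussian_blur], [lower_contrast], [shift_brightness]]

def sample_degradation(max_length=3, levels=(1, 2, 3, 4, 5)):
    """Randomly pick an ordered sequence of (distortion, severity) pairs."""
    k = random.randint(1, min(max_length, len(DISTORTION_GROUPS)))
    groups = random.sample(DISTORTION_GROUPS, k=k)
    return [(random.choice(group), random.choice(levels)) for group in groups]

def apply_degradation(img, sequence):
    """Apply the sampled distortions consecutively to a PIL image."""
    for distortion, level in sequence:
        img = distortion(img, level)
    return img
```

Degrading two different images with the same sampled sequence yields the positive pair used in the training objective sketched next.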
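
The second step can be sketched as an NT-Xent-style contrastive loss in which the positive pair consists of patches from two different images degraded with the same distortion sequence. The temperature value, batch layout, and function name below are assumptions; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def equal_distortion_contrastive_loss(z_a, z_b, temperature=0.1):
    """z_a[i] and z_b[i] embed patches of two *different* images degraded with
    the same distortion sequence; every other pairing serves as a negative."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    z = torch.cat([z_a, z_b], dim=0)                     # (2N, D)
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    n = z_a.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))      # exclude self-pairs
    # row i's positive sits at index i + n (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

Minimizing this loss pulls equally-distorted patches together in the embedding space, so images degraded in the same manner end up at neighboring positions on the learned distortion manifold.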
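
Finally, the frozen encoder's features are mapped to quality scores with a simple linear regressor. The abstract does not name the regressor, so the Ridge estimator in this minimal sketch is an assumption.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_quality_regressor(features: np.ndarray, mos: np.ndarray, alpha: float = 1.0) -> Ridge:
    """features: (N, D) embeddings from the frozen encoder; mos: (N,) subjective scores."""
    regressor = Ridge(alpha=alpha)
    regressor.fit(features, mos)
    return regressor

# At inference time, quality prediction is a forward pass through the frozen
# encoder followed by regressor.predict(new_features); the encoder weights
# are never fine-tuned.
```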
Related papers
- Dual-Representation Interaction Driven Image Quality Assessment with Restoration Assistance [11.983231834400698]
No-Reference Image Quality Assessment for distorted images has always been a challenging problem due to image content variance and distortion diversity.
Previous IQA models mostly encode explicit single-quality features of synthetic images to obtain quality-aware representations for quality score prediction.
We introduce the DRI method to obtain degradation vectors and quality vectors of images, which separately model the degradation and quality information of low-quality images.
arXiv Detail & Related papers (2024-11-26T12:48:47Z)
- ExIQA: Explainable Image Quality Assessment Using Distortion Attributes [0.3683202928838613]
We propose an explainable approach for distortion identification based on attribute learning.
We generate a dataset consisting of 100,000 images for efficient training.
Our approach achieves state-of-the-art (SOTA) performance across multiple datasets in both PLCC and SRCC metrics.
arXiv Detail & Related papers (2024-09-10T20:28:14Z)
- Comparison of No-Reference Image Quality Models via MAP Estimation in Diffusion Latents [99.19391983670569]
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
Different NR-IQA models are likely to induce different enhanced images, which are ultimately subject to psychophysical testing.
This leads to a new computational method for comparing NR-IQA models within the analysis-by-synthesis framework.
arXiv Detail & Related papers (2024-03-11T03:35:41Z)
- Image Quality Assessment: Learning to Rank Image Distortion Level [0.0]
We learn to compare the image quality of two registered images with respect to a chosen distortion.
Our method takes advantage of the fact that, at times, simulating an image distortion and then evaluating its relative quality is easier than assessing its absolute value.
arXiv Detail & Related papers (2022-08-04T18:33:33Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of IR models, degraded images, as references.
Our results can even be close to the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z)
- Perceptual Image Restoration with High-Quality Priori and Degradation Learning [28.93489249639681]
We show that our model performs well in measuring the similarity between restored and degraded images.
Our simultaneous restoration and enhancement framework generalizes well to real-world complicated degradation types.
arXiv Detail & Related papers (2021-03-04T13:19:50Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)