Image Quality Assessment for Perceptual Image Restoration: A New
Dataset, Benchmark and Metric
- URL: http://arxiv.org/abs/2011.15002v1
- Date: Mon, 30 Nov 2020 17:06:46 GMT
- Title: Image Quality Assessment for Perceptual Image Restoration: A New
Dataset, Benchmark and Metric
- Authors: Jinjin Gu, Haoming Cai, Haoyu Chen, Xiaoxing Ye, Jimmy Ren, Chao Dong
- Abstract summary: Image quality assessment (IQA) is the key factor for the fast development of image restoration (IR) algorithms.
Recent IR algorithms based on generative adversarial networks (GANs) have brought significant improvements in visual performance.
We present two questions: Can existing IQA methods objectively evaluate recent IR algorithms? With the focus on beating current benchmarks, are we getting better IR algorithms?
- Score: 19.855042248822738
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image quality assessment (IQA) is the key factor for the fast development of
image restoration (IR) algorithms. The most recent perceptual IR algorithms
based on generative adversarial networks (GANs) have brought significant
improvements in visual performance, but they also pose great challenges for
quantitative evaluation. Notably, we observe an increasing inconsistency
between perceptual quality and the evaluation results. We present two
questions: Can existing IQA methods objectively evaluate recent IR algorithms?
With the focus on beating current benchmarks, are we getting better IR
algorithms? To answer the questions and promote the development of IQA methods,
we contribute a large-scale IQA dataset, called Perceptual Image Processing
ALgorithms (PIPAL) dataset. In particular, this dataset includes the results of
GAN-based IR algorithms, which are missing from previous datasets. We collect
more than 1.13 million human judgments to assign subjective scores for PIPAL
images using the more reliable Elo system. Based on PIPAL, we present new
benchmarks for both IQA and SR methods. Our results indicate that existing IQA
methods cannot fairly evaluate GAN-based IR algorithms. While choosing appropriate
evaluation methods is important, IQA methods should also be updated along with
the development of IR algorithms. Finally, we shed light on how to improve IQA
performance on GAN-based distortions. Motivated by the finding that existing IQA
methods perform poorly on GAN-based distortions, partly because of their low
tolerance to spatial misalignment, we propose to improve the performance of an
IQA network on such distortions by explicitly accounting for this misalignment.
To this end, we propose the Space Warping Difference Network, which includes
novel l_2 pooling layers and Space Warping Difference
layers. Experiments demonstrate the effectiveness of the proposed method.
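The abstract notes that PIPAL's subjective scores come from aggregating more than 1.13 million pairwise human judgments with an Elo rating system. As a rough illustration only, the Python sketch below shows a standard Elo update for a single pairwise preference; the K-factor, the initial rating of 1500, and the logistic scale of 400 are common Elo defaults assumed here, not values reported in the paper.

```python
def expected_win_prob(rating_a: float, rating_b: float, scale: float = 400.0) -> float:
    """Probability that image A is preferred over image B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / scale))

def elo_update(rating_a: float, rating_b: float, a_wins: bool, k: float = 32.0):
    """Update both ratings after one pairwise human judgment."""
    expected_a = expected_win_prob(rating_a, rating_b)
    score_a = 1.0 if a_wins else 0.0
    rating_a += k * (score_a - expected_a)
    rating_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return rating_a, rating_b

# Toy usage: every distorted image starts from the same rating; each judgment
# nudges the two compared images' ratings, and the converged ratings serve as
# subjective quality scores for the images.
ratings = {"img_a": 1500.0, "img_b": 1500.0}
ratings["img_a"], ratings["img_b"] = elo_update(ratings["img_a"], ratings["img_b"], a_wins=True)
```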
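The abstract also names two components of the proposed Space Warping Difference Network: l_2 pooling layers and Space Warping Difference layers that tolerate spatial misalignment. The exact layer definitions are not given here, so the following is a minimal PyTorch sketch of the two underlying ideas; the function names, the neighbourhood radius, and the min-over-shifts aggregation are illustrative assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def l2_pool2d(x: torch.Tensor, kernel_size: int = 2, stride: int = 2) -> torch.Tensor:
    """l_2 pooling: square, average-pool, then square root.
    Smoother than max pooling and less sensitive to single-pixel shifts."""
    return torch.sqrt(F.avg_pool2d(x.pow(2), kernel_size, stride) + 1e-12)

def shift_tolerant_difference(ref_feat: torch.Tensor, dist_feat: torch.Tensor,
                              radius: int = 2) -> torch.Tensor:
    """Compare the distorted feature at each location with reference features inside a
    (2*radius+1)^2 neighbourhood and keep the smallest squared difference, so small
    spatial misalignments are not penalised. Shapes: (N, C, H, W) -> (N, C, H, W)."""
    k = 2 * radius + 1
    n, c, h, w = ref_feat.shape
    patches = F.unfold(ref_feat, kernel_size=k, padding=radius)   # (N, C*k*k, H*W)
    patches = patches.view(n, c, k * k, h, w)                     # all shifted copies of the reference
    diff = (patches - dist_feat.unsqueeze(2)).pow(2)              # squared error against every shift
    return diff.min(dim=2).values                                 # keep the best-aligned shift per pixel

# Toy usage on feature maps: a 1-pixel shift yields small differences under this measure.
ref = torch.randn(1, 8, 32, 32)
dist = ref.roll(shifts=1, dims=-1) + 0.05 * torch.randn(1, 8, 32, 32)
pooled = l2_pool2d(shift_tolerant_difference(ref, dist))
```

In a full-reference metric of this kind, such difference maps would be computed on CNN features of the reference and distorted images, then pooled and regressed to a scalar quality score.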
Related papers
- Image-Guided Outdoor LiDAR Perception Quality Assessment for Autonomous Driving [107.68311433435422]
We introduce a novel image-guided point cloud quality assessment algorithm for outdoor autonomous driving environments.
The IGO-PQA generation algorithm produces an overall quality score for a single-frame LiDAR-based point cloud.
The second component is a transformer-based IGO-PQA regression algorithm for no-reference outdoor point cloud quality assessment.
arXiv Detail & Related papers (2024-06-25T04:16:14Z)
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion priors-based IQA.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z)
- MSTRIQ: No Reference Image Quality Assessment Based on Swin Transformer with Multi-Stage Fusion [8.338999282303755]
We propose a novel algorithm based on the Swin Transformer.
It aggregates information from both local and global features to better predict the quality.
It ranks 2nd in the no-reference track of NTIRE 2022 Perceptual Image Quality Assessment Challenge.
arXiv Detail & Related papers (2022-05-20T11:34:35Z)
- Attentions Help CNNs See Better: Attention-based Hybrid Image Quality Assessment Network [20.835800149919145]
Image quality assessment (IQA) algorithms aim to quantify the human perception of image quality.
There is a performance drop when assessing distorted images generated by generative adversarial networks (GANs) with seemingly realistic textures.
We propose an Attention-based Hybrid Image Quality Assessment Network (AHIQ) to deal with the challenge and get better performance on the GAN-based IQA task.
arXiv Detail & Related papers (2022-04-22T03:59:18Z)
- Confusing Image Quality Assessment: Towards Better Augmented Reality Experience [96.29124666702566]
We consider AR technology as the superimposition of virtual scenes and real scenes, and introduce visual confusion as its basic theory.
A ConFusing Image Quality Assessment (CFIQA) database is established, which includes 600 reference images and 300 distorted images generated by mixing reference images in pairs.
An objective metric termed CFIQA is also proposed to better evaluate the confusing image quality.
arXiv Detail & Related papers (2022-04-11T07:03:06Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes CNN backbone and transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Region-Adaptive Deformable Network for Image Quality Assessment [16.03642709194366]
In image restoration and enhancement tasks, images generated by generative adversarial networks (GAN) can achieve better visual performance than traditional CNN-generated images.
We propose the reference-oriented deformable convolution, which can improve the performance of an IQA network on GAN-based distortion.
Experimental results on the NTIRE 2021 Perceptual Image Quality Assessment Challenge dataset show the superior performance of RADN.
arXiv Detail & Related papers (2021-04-23T13:47:20Z)
- AP-Loss for Accurate One-Stage Object Detection [49.13608882885456]
One-stage object detectors are trained by optimizing classification-loss and localization-loss simultaneously.
The former suffers greatly from extreme foreground-background imbalance due to the large number of anchors.
This paper proposes a novel framework to replace the classification task in one-stage detectors with a ranking task.
arXiv Detail & Related papers (2020-08-17T13:22:01Z)
- PIPAL: a Large-Scale Image Quality Assessment Dataset for Perceptual Image Restoration [28.154286553282486]
Image quality assessment (IQA) is the key factor for the fast development of image restoration (IR) algorithms.
Recent IR methods based on Generative Adversarial Networks (GANs) have achieved significant improvement in visual performance.
We present new benchmarks for both IQA and super-resolution methods.
arXiv Detail & Related papers (2020-07-23T17:15:25Z)
- No-Reference Image Quality Assessment via Feature Fusion and Multi-Task Learning [29.19484863898778]
Blind or no-reference image quality assessment (NR-IQA) is a fundamental, unsolved, and yet challenging problem.
We propose a simple and yet effective general-purpose no-reference (NR) image quality assessment framework based on multi-task learning.
Our model employs distortion types as well as subjective human scores to predict image quality.
arXiv Detail & Related papers (2020-06-06T05:04:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.