PIPAL: a Large-Scale Image Quality Assessment Dataset for Perceptual
Image Restoration
- URL: http://arxiv.org/abs/2007.12142v2
- Date: Sat, 26 Sep 2020 08:30:28 GMT
- Title: PIPAL: a Large-Scale Image Quality Assessment Dataset for Perceptual
Image Restoration
- Authors: Jinjin Gu, Haoming Cai, Haoyu Chen, Xiaoxing Ye, Jimmy Ren, Chao Dong
- Abstract summary: Image quality assessment (IQA) is a key factor in the rapid development of image restoration (IR) algorithms.
Recent IR methods based on Generative Adversarial Networks (GANs) have achieved significant improvement in visual performance.
We present new benchmarks for both IQA and super-resolution methods.
- Score: 28.154286553282486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image quality assessment (IQA) is a key factor in the rapid development of
image restoration (IR) algorithms. The most recent IR methods based on
Generative Adversarial Networks (GANs) have achieved significant improvement in
visual performance, but also presented great challenges for quantitative
evaluation. Notably, we observe an increasing inconsistency between perceptual
quality and the evaluation results. This raises two questions: (1) Can
existing IQA methods objectively evaluate recent IR algorithms? (2) When focusing
on beating current benchmarks, are we getting better IR algorithms? To answer
these questions and promote the development of IQA methods, we contribute a
large-scale IQA dataset, called Perceptual Image Processing Algorithms (PIPAL)
dataset. Especially, this dataset includes the results of GAN-based methods,
which are missing in previous datasets. We collect more than 1.13 million human
judgments to assign subjective scores for PIPAL images using the more reliable
"Elo system". Based on PIPAL, we present new benchmarks for both IQA and
super-resolution methods. Our results indicate that existing IQA methods cannot
fairly evaluate GAN-based IR algorithms. While using appropriate evaluation
methods is important, IQA methods themselves should also be updated alongside the
development of IR algorithms. Finally, we improve the performance of IQA
networks on GAN-based distortions by introducing anti-aliasing pooling.
Experiments show the effectiveness of the proposed method.
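The "Elo system" mentioned in the abstract turns pairwise human preferences ("image A looks better than image B") into per-image subjective scores. A minimal sketch of an Elo-style update is shown below; the `K` factor, base rating of 1500, and image names are illustrative assumptions, not the settings actually used for PIPAL:

```python
# Minimal Elo-style rating update for pairwise image-quality judgments.
# Every judgment nudges the winner's rating up and the loser's down by
# an amount that depends on how surprising the outcome was.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A is preferred over B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, a_wins: bool, k: float = 16.0):
    """Return (new_r_a, new_r_b) after one pairwise judgment."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_wins else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Hypothetical example: a GAN-based SR result judged against bicubic upsampling.
ratings = {"gan_sr": 1500.0, "bicubic": 1500.0}
ratings["gan_sr"], ratings["bicubic"] = elo_update(
    ratings["gan_sr"], ratings["bicubic"], a_wins=True
)
```

After many judgments over many image pairs, the converged ratings serve as the subjective quality scores; the update is zero-sum, so one judgment never changes the total rating mass.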
Related papers
- Image-Guided Outdoor LiDAR Perception Quality Assessment for Autonomous Driving [107.68311433435422]
We introduce a novel image-guided point cloud quality assessment algorithm for outdoor autonomous driving environments.
The IGO-PQA generation algorithm generates an overall quality score for a single-frame LiDAR-based point cloud.
The second component is a transformer-based IGO-PQA regression algorithm for no-reference outdoor point cloud quality assessment.
arXiv Detail & Related papers (2024-06-25T04:16:14Z)
- Improving Interpretability and Robustness for the Detection of AI-Generated Images [6.116075037154215]
We analyze existing state-of-the-art AIGI detection methods based on frozen CLIP embeddings.
We show how to interpret them, shedding light on how images produced by various AI generators differ from real ones.
arXiv Detail & Related papers (2024-06-21T10:33:09Z)
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method based on diffusion priors (DP-IQA).
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address the mismatch between CLIP's pretraining objective and the IQA task using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z)
- When No-Reference Image Quality Models Meet MAP Estimation in Diffusion Latents [92.45867913876691]
No-reference image quality assessment (NR-IQA) models can effectively quantify perceived image quality.
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
arXiv Detail & Related papers (2024-03-11T03:35:41Z)
- MSTRIQ: No Reference Image Quality Assessment Based on Swin Transformer with Multi-Stage Fusion [8.338999282303755]
We propose a novel algorithm based on the Swin Transformer.
It aggregates information from both local and global features to better predict the quality.
It ranks 2nd in the no-reference track of NTIRE 2022 Perceptual Image Quality Assessment Challenge.
arXiv Detail & Related papers (2022-05-20T11:34:35Z)
- Attentions Help CNNs See Better: Attention-based Hybrid Image Quality Assessment Network [20.835800149919145]
Image quality assessment (IQA) algorithms aim to quantify the human perception of image quality.
There is a performance drop when assessing distorted images generated by generative adversarial networks (GANs) with seemingly realistic textures.
We propose an Attention-based Hybrid Image Quality Assessment Network (AHIQ) to deal with this challenge and achieve better performance on the GAN-based IQA task.
arXiv Detail & Related papers (2022-04-22T03:59:18Z)
- Confusing Image Quality Assessment: Towards Better Augmented Reality Experience [96.29124666702566]
We consider AR technology as the superimposition of virtual scenes and real scenes, and introduce visual confusion as its basic theory.
A ConFusing Image Quality Assessment (CFIQA) database is established, which includes 600 reference images and 300 distorted images generated by mixing reference images in pairs.
An objective metric termed CFIQA is also proposed to better evaluate the confusing image quality.
arXiv Detail & Related papers (2022-04-11T07:03:06Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes CNN backbone and transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Image Quality Assessment for Perceptual Image Restoration: A New Dataset, Benchmark and Metric [19.855042248822738]
Image quality assessment (IQA) is a key factor in the rapid development of image restoration (IR) algorithms.
Recent IR algorithms based on generative adversarial networks (GANs) have brought significant improvements in visual performance.
We raise two questions: Can existing IQA methods objectively evaluate recent IR algorithms?
arXiv Detail & Related papers (2020-11-30T17:06:46Z)
- No-Reference Image Quality Assessment via Feature Fusion and Multi-Task Learning [29.19484863898778]
Blind or no-reference image quality assessment (NR-IQA) is a fundamental, challenging, and still unsolved problem.
We propose a simple yet effective general-purpose no-reference (NR) image quality assessment framework based on multi-task learning.
Our model employs distortion types as well as subjective human scores to predict image quality.
arXiv Detail & Related papers (2020-06-06T05:04:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this list (including all information) and is not responsible for any consequences.