Going the Extra Mile in Face Image Quality Assessment: A Novel Database and Model
- URL: http://arxiv.org/abs/2207.04904v2
- Date: Sun, 30 Jul 2023 14:12:09 GMT
- Title: Going the Extra Mile in Face Image Quality Assessment: A Novel Database and Model
- Authors: Shaolin Su, Hanhe Lin, Vlad Hosu, Oliver Wiedemann, Jinqiu Sun, Yu
Zhu, Hantao Liu, Yanning Zhang, Dietmar Saupe
- Abstract summary: We introduce the largest annotated IQA database developed to date, which contains 20,000 human faces.
We propose a novel deep learning model to accurately predict face image quality, which, for the first time, explores the use of generative priors for IQA.
- Score: 42.05084438912876
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An accurate computational model for image quality assessment (IQA) benefits
many vision applications, such as image filtering, image processing, and image
generation. Although the study of face images is an important subfield in
computer vision research, the lack of face IQA data and models limits the
precision of current IQA metrics on face image processing tasks such as face
superresolution, face enhancement, and face editing. To narrow this gap, in
this paper, we first introduce the largest annotated IQA database developed to
date, which contains 20,000 human faces -- an order of magnitude larger than
all existing rated datasets of faces -- of diverse individuals in highly varied
circumstances. Based on the database, we further propose a novel deep learning
model to accurately predict face image quality, which, for the first time,
explores the use of generative priors for IQA. By taking advantage of rich
statistics encoded in well pretrained off-the-shelf generative models, we
obtain generative prior information and use it as latent references to
facilitate blind IQA. The experimental results demonstrate both the value of
the proposed dataset for face IQA and the superior performance of the proposed
model.
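The core idea, projecting a distorted image onto a pretrained generative prior and using the projection as a "latent reference" for blind IQA, can be sketched with a toy linear generator. Everything below is an illustrative stand-in, assuming a linear generator, least-squares projection, and a simple residual-based score; the paper's actual deep generator and scoring network are not reproduced here:

```python
import numpy as np

# Toy sketch of "generative prior as latent reference" for blind IQA.
# Stand-in generator: a fixed linear map G(z) = W @ z, playing the role of
# a model pretrained on clean faces (an assumption for illustration only).
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 8))  # "generator": 64-d images from 8-d latents

def project_to_prior(x):
    """Find the latent whose generation best matches x (least squares)."""
    z, *_ = np.linalg.lstsq(W, x, rcond=None)
    return W @ z  # latent reference: the closest in-prior image

def quality_score(x):
    """Score = similarity of x to its latent reference (higher = better)."""
    ref = project_to_prior(x)
    err = np.linalg.norm(x - ref) / (np.linalg.norm(x) + 1e-8)
    return 1.0 - err

z_true = rng.standard_normal(8)
clean = W @ z_true                              # image lying on the prior
noisy = clean + 2.0 * rng.standard_normal(64)   # heavily distorted version

print(quality_score(clean) > quality_score(noisy))  # clean scores higher
```

A clean image is reproduced almost exactly by the prior, so its residual is near zero; distortion pushes the image off the generator's manifold, inflating the residual and lowering the score, which is the sense in which the prior acts as a reference without needing the true pristine image.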
Related papers
- Rank-based No-reference Quality Assessment for Face Swapping [88.53827937914038]
Most face swapping methods measure quality using several distances between the manipulated images and the source image.
We present a novel no-reference image quality assessment (NR-IQA) method specifically designed for face swapping.
arXiv Detail & Related papers (2024-06-04T01:36:29Z)
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
We propose a novel IQA method called diffusion priors-based IQA (DP-IQA)
We use pre-trained stable diffusion as the backbone, extract multi-level features from the denoising U-Net, and decode them to estimate the image quality score.
We distill the knowledge in the above model into a CNN-based student model, significantly reducing the parameter count to enhance applicability.
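The distillation step described above can be sketched in miniature: a lightweight student regressor is fit to mimic the scores of a heavy teacher. The teacher function, features, and linear student below are illustrative assumptions, not the actual diffusion-based DP-IQA model:

```python
import numpy as np

# Minimal sketch of score distillation: a small student learns to
# reproduce a heavy teacher's quality predictions, so no human ratings
# are needed for the student's training targets.
rng = np.random.default_rng(1)

def teacher_score(feats):
    # Stand-in for the expensive diffusion-prior quality predictor.
    return feats @ np.array([0.5, -0.2, 0.1, 0.7])

X = rng.standard_normal((256, 4))   # image features
y = teacher_score(X)                # teacher labels

w = np.zeros(4)                     # lightweight student: linear regressor
for _ in range(500):                # gradient descent on MSE(student, teacher)
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= 0.1 * grad

print(np.allclose(w, [0.5, -0.2, 0.1, 0.7], atol=1e-3))
```

The student converges to the teacher's behavior on the training distribution, which is the mechanism by which distillation trades model size for a (hopefully small) loss in fidelity.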
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Descriptive Image Quality Assessment in the Wild [25.503311093471076]
VLM-based Image Quality Assessment (IQA) seeks to describe image quality linguistically to align with human expression.
We introduce Depicted image Quality Assessment in the Wild (DepictQA-Wild)
Our method includes a multi-functional IQA task paradigm that encompasses both assessment and comparison tasks, brief and detailed responses, and full-reference and no-reference scenarios.
arXiv Detail & Related papers (2024-05-29T07:49:15Z)
- Understanding and Evaluating Human Preferences for AI Generated Images with Instruction Tuning [58.41087653543607]
We first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+.
This paper presents a MINT-IQA model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning.
arXiv Detail & Related papers (2024-05-12T17:45:11Z)
- FaceQgen: Semi-Supervised Deep Learning for Face Image Quality Assessment [19.928262020265965]
FaceQgen is a No-Reference Quality Assessment approach for face images.
It generates a scalar quality measure related to face recognition accuracy.
It is trained from scratch using the SCface database.
arXiv Detail & Related papers (2022-01-03T17:22:38Z)
- Continual Learning for Blind Image Quality Assessment [80.55119990128419]
Blind image quality assessment (BIQA) models fail to continually adapt to subpopulation shift.
Recent work suggests training BIQA methods on the combination of all available human-rated IQA datasets.
We formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets.
arXiv Detail & Related papers (2021-02-19T03:07:01Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of ranked image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
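The fidelity loss over image pairs mentioned above has a standard closed form: the model's quality difference for a pair is mapped to a predicted preference probability via a Gaussian (Thurstone) model and compared with the human preference p as L = 1 - sqrt(p * p_hat) - sqrt((1 - p) * (1 - p_hat)). The unit-variance CDF mapping below is the common choice; training details are simplified:

```python
from math import erf, sqrt

def phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def fidelity_loss(q_x, q_y, p):
    """q_x, q_y: predicted qualities; p: human probability that x beats y."""
    p_hat = phi(q_x - q_y)  # predicted preference probability (Thurstone model)
    return 1.0 - sqrt(p * p_hat) - sqrt((1.0 - p) * (1.0 - p_hat))

# Near-zero loss when the model agrees with a confident human preference,
# larger loss when it disagrees.
agree = fidelity_loss(2.0, 0.0, 0.98)      # model says x >> y, humans agree
disagree = fidelity_loss(-2.0, 0.0, 0.98)  # model says x << y, humans disagree
print(agree < disagree)
```

Because the loss compares probability distributions rather than raw scores, pairs from different IQA datasets with incommensurate rating scales can be mixed in one training set, which is what makes pairwise training attractive for a unified BIQA model.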
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
- Comparison of Image Quality Models for Optimization of Image Processing Systems [41.57409136781606]
We use eleven full-reference IQA models to train deep neural networks for four low-level vision tasks.
Subjective testing on the optimized images allows us to rank the competing models in terms of their perceptual performance.
arXiv Detail & Related papers (2020-05-04T09:26:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information provided and is not responsible for any consequences of its use.