CR-FIQA: Face Image Quality Assessment by Learning Sample Relative
Classifiability
- URL: http://arxiv.org/abs/2112.06592v1
- Date: Mon, 13 Dec 2021 12:18:43 GMT
- Title: CR-FIQA: Face Image Quality Assessment by Learning Sample Relative
Classifiability
- Authors: Fadi Boutros, Meiling Fang, Marcel Klemt, Biying Fu, Naser Damer
- Abstract summary: We propose a novel learning paradigm that learns internal network observations during the training process.
Our proposed CR-FIQA uses this paradigm to estimate the face image quality of a sample by predicting its relative classifiability.
We demonstrate the superiority of our proposed CR-FIQA over state-of-the-art (SOTA) FIQA algorithms.
- Score: 2.3624125155742055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The quality of face images significantly influences the performance of
underlying face recognition algorithms. Face image quality assessment (FIQA)
estimates the utility of the captured image in achieving reliable and accurate
recognition performance. In this work, we propose a novel learning paradigm
that learns internal network observations during the training process. Based on
that, our proposed CR-FIQA uses this paradigm to estimate the face image
quality of a sample by predicting its relative classifiability. This
classifiability is measured based on the allocation of the training sample
feature representation in angular space with respect to its class center and
the nearest negative class center. We experimentally illustrate the correlation
between the face image quality and the sample relative classifiability. As such
property is only observable for the training dataset, we propose to learn this
property from the training dataset and utilize it to predict the quality
measure on unseen samples. This training is performed simultaneously while
optimizing the class centers by an angular margin penalty-based softmax loss
used for face recognition model training. Through extensive evaluation
experiments on eight benchmarks and four face recognition models, we
demonstrate the superiority of our proposed CR-FIQA over state-of-the-art
(SOTA) FIQA algorithms.
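As a rough illustration of the classifiability measure described in the abstract, the sketch below scores each training sample by comparing its cosine similarity to its own class center against its similarity to the nearest negative class center, treating the weights of the margin-penalty softmax classifier as class centers. The function name, the CCS/NNCCS shorthand, and the exact ratio (including the shift that keeps the denominator positive) are illustrative choices and not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def relative_classifiability(embeddings: torch.Tensor,
                             class_centers: torch.Tensor,
                             labels: torch.Tensor,
                             eps: float = 1e-6) -> torch.Tensor:
    """Illustrative per-sample classifiability score.

    embeddings:    (N, D) features from the face recognition backbone
    class_centers: (C, D) classifier weights of the margin-penalty softmax,
                   treated here as class centers in angular space
    labels:        (N,) ground-truth class indices
    """
    # Work on the unit hypersphere so dot products are cosines of angles.
    emb = F.normalize(embeddings, dim=1)
    centers = F.normalize(class_centers, dim=1)
    cos = emb @ centers.t()                            # (N, C) cosine to every center

    # CCS: cosine similarity to the sample's own (positive) class center.
    ccs = cos.gather(1, labels.view(-1, 1)).squeeze(1)

    # NNCCS: cosine similarity to the nearest *negative* class center.
    cos_neg = cos.clone()
    cos_neg.scatter_(1, labels.view(-1, 1), float("-inf"))
    nnccs = cos_neg.max(dim=1).values

    # Illustrative ratio: shift NNCCS from [-1, 1] to (0, 2] so the
    # denominator stays positive; higher values = more classifiable.
    return ccs / (nnccs + 1.0 + eps)
```

As the abstract notes, this property is only observable on the training set, so such per-sample scores would serve as targets that the network learns to predict alongside the recognition loss (a small regression head is one plausible realization), enabling quality estimation on unseen samples.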
Related papers
- Rank-based No-reference Quality Assessment for Face Swapping [88.53827937914038]
Most face swapping methods measure quality using several distances between the manipulated image and the source image.
We present a novel no-reference image quality assessment (NR-IQA) method specifically designed for face swapping.
arXiv Detail & Related papers (2024-06-04T01:36:29Z)
- GraFIQs: Face Image Quality Assessment Using Gradient Magnitudes [9.170455788675836]
Face Image Quality Assessment (FIQA) estimates the utility of face images for automated face recognition (FR) systems.
We propose in this work a novel approach to assess the quality of face images based on inspecting the required changes in the pre-trained FR model weights.
arXiv Detail & Related papers (2024-04-18T14:07:08Z)
- Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA).
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
arXiv Detail & Related papers (2024-03-15T07:16:07Z)
- QGFace: Quality-Guided Joint Training For Mixed-Quality Face Recognition [2.8519768339207356]
We propose a novel quality-guided joint training approach for mixed-quality face recognition.
Based on the quality partition, a classification-based method is employed for learning from high-quality (HQ) data.
Low-quality (LQ) images, which lack reliable identity information, are learned with self-supervised image-to-image contrastive learning.
arXiv Detail & Related papers (2023-12-29T06:56:22Z)
- A Quality Aware Sample-to-Sample Comparison for Face Recognition [13.96448286983864]
This work integrates a quality-aware learning process at the sample level into the classification training paradigm (QAFace).
Our method adaptively finds and assigns more attention to the recognizable low-quality samples in the training datasets.
arXiv Detail & Related papers (2023-06-06T20:28:04Z)
- CONVIQT: Contrastive Video Quality Estimator [63.749184706461826]
Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms.
Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner.
Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised learning.
arXiv Detail & Related papers (2022-06-29T15:22:01Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
- FaceQgen: Semi-Supervised Deep Learning for Face Image Quality Assessment [19.928262020265965]
FaceQgen is a No-Reference Quality Assessment approach for face images.
It generates a scalar quality measure related to face recognition accuracy.
It is trained from scratch using the SCface database.
arXiv Detail & Related papers (2022-01-03T17:22:38Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
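The summary above only names a contrastive pairwise objective; as a generic illustration (not CONTRIQUE's exact objective, and with purely illustrative names), an NT-Xent-style loss over two augmented views of a batch could look as follows.

```python
import torch
import torch.nn.functional as F

def ntxent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Generic NT-Xent contrastive loss over two views of the same batch.

    z1, z2: (N, D) projected embeddings of two augmented views; row i of z1
    and row i of z2 form a positive pair, everything else is a negative.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                    # (2N, D)
    sim = z @ z.t() / temperature                     # (2N, 2N) cosine logits
    sim.fill_diagonal_(float("-inf"))                 # exclude self-similarity

    n = z1.size(0)
    # The positive for sample i is its other view: i <-> i + n.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets.to(sim.device))
```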
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- SDD-FIQA: Unsupervised Face Image Quality Assessment with Similarity Distribution Distance [25.109321001368496]
Face Image Quality Assessment (FIQA) has become an indispensable part of the face recognition system.
We propose a novel unsupervised FIQA method that incorporates Similarity Distribution Distance for Face Image Quality Assessment (SDD-FIQA).
Our method generates quality pseudo-labels by calculating the Wasserstein Distance between the intra-class similarity distributions and inter-class similarity distributions.
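A minimal sketch of such a pseudo-label, assuming L2-normalized embeddings and using SciPy's one-dimensional Wasserstein distance over cosine similarities; the paper's exact normalization and mapping to quality scores may differ, and the function name is illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sdd_quality_pseudo_label(embeddings: np.ndarray, labels: np.ndarray, idx: int) -> float:
    """Quality pseudo-label for sample `idx` as the Wasserstein distance
    between its intra-class and inter-class similarity distributions."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = emb @ emb[idx]                      # cosine similarity to all samples

    mask_same = labels == labels[idx]
    mask_same[idx] = False                     # drop self-similarity
    intra = sims[mask_same]                    # similarities to same-identity samples
    inter = sims[labels != labels[idx]]        # similarities to other identities

    # A larger distance between the two distributions indicates better
    # separability, i.e. a higher quality pseudo-label (scaling omitted).
    return wasserstein_distance(intra, inter)
```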
arXiv Detail & Related papers (2021-03-10T10:23:28Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
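As a rough sketch of the fidelity-loss idea mentioned above, assuming each image pair comes with a ground-truth preference probability and that the network predicts a quality mean and variance per image; the Thurstone-style pairing model is used here purely for illustration and may differ from the paper's exact parameterization.

```python
import torch

def fidelity_loss(p_hat: torch.Tensor, p: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Fidelity loss between predicted and ground-truth preference probabilities."""
    return 1.0 - torch.sqrt(p * p_hat + eps) - torch.sqrt((1.0 - p) * (1.0 - p_hat) + eps)

def pairwise_preference(q_a: torch.Tensor, q_b: torch.Tensor,
                        var_a: torch.Tensor, var_b: torch.Tensor) -> torch.Tensor:
    """Probability that image A is preferred over image B under a
    Thurstone-style model (illustrative parameterization)."""
    normal = torch.distributions.Normal(0.0, 1.0)
    return normal.cdf((q_a - q_b) / torch.sqrt(var_a + var_b + 1e-8))
```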
arXiv Detail & Related papers (2020-05-28T13:35:23Z)