QGFace: Quality-Guided Joint Training For Mixed-Quality Face Recognition
- URL: http://arxiv.org/abs/2312.17494v1
- Date: Fri, 29 Dec 2023 06:56:22 GMT
- Title: QGFace: Quality-Guided Joint Training For Mixed-Quality Face Recognition
- Authors: Youzhe Song and Feng Wang
- Abstract summary: We propose a novel quality-guided joint training approach for mixed-quality face recognition.
Based on a quality partition, a classification-based method is employed to learn from HQ data.
For the LQ images, which lack clear identity information, we apply self-supervised image-image contrastive learning.
- Score: 2.8519768339207356
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The quality of a face crop in an image is determined by many factors, such as
camera resolution, distance, and illumination conditions. This makes discriminating
face images of different qualities a challenging problem in realistic applications.
However, most existing approaches are designed specifically for high-quality (HQ)
or low-quality (LQ) images, and their performance degrades on mixed-quality images.
Besides, many methods require pre-trained feature extractors or other auxiliary
structures to support training and evaluation. In this paper, we point out that the
key to understanding both HQ and LQ images simultaneously is to apply different
learning methods according to their quality. We propose a novel quality-guided
joint training approach for mixed-quality face recognition, which learns images of
different qualities simultaneously with a single encoder. Based on a quality
partition, a classification-based method is employed for HQ data. Meanwhile, the
LQ images, which lack clear identity information, are learned with self-supervised
image-image contrastive learning. To keep pace with model updates and improve the
discriminability of contrastive learning in our joint-training scenario, we further
propose a proxy-updated real-time queue that composes contrastive pairs from
features of the genuine encoder. Experiments on the low-quality datasets SCface and
TinyFace, the mixed-quality dataset IJB-B, and five high-quality datasets
demonstrate the effectiveness of our proposed approach in recognizing face images
of different qualities.
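To make the routing idea concrete, here is a minimal PyTorch sketch of a quality-guided joint loss in the spirit of the abstract. It is an illustration under assumptions, not the authors' implementation: the hard quality threshold, the plain cosine-softmax and InfoNCE losses, and all names (`QualityGuidedJointLoss`, `quality_threshold`, and so on) are hypothetical, and the paper's actual quality partition and proxy-update rule for the queue are not reproduced here.

```python
# Hypothetical sketch of quality-guided joint training (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class QualityGuidedJointLoss(nn.Module):
    """Route each sample by quality: HQ -> classification, LQ -> contrastive.

    The queue holds recent features from the *same* encoder, so contrastive
    negatives stay current with the model (the "real-time queue" motivation).
    """

    def __init__(self, feat_dim=512, num_classes=1000, queue_size=8192,
                 quality_threshold=0.5, temperature=0.07):
        super().__init__()
        # Class proxies for the HQ classification branch.
        self.proxies = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.thr = quality_threshold
        self.tau = temperature
        self.register_buffer(
            "queue", F.normalize(torch.randn(queue_size, feat_dim), dim=1))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _enqueue(self, feats):
        # Overwrite the oldest queue slots with the newest encoder features.
        n, size = feats.size(0), self.queue.size(0)
        idx = (torch.arange(n, device=feats.device) + self.ptr.item()) % size
        self.queue[idx] = F.normalize(feats, dim=1)
        self.ptr[0] = (self.ptr.item() + n) % size

    def forward(self, feats, feats_aug, labels, quality):
        feats = F.normalize(feats, dim=1)
        hq = quality >= self.thr  # per-sample quality scores in [0, 1]
        lq = ~hq
        loss = feats.new_zeros(())
        if hq.any():
            # Cosine-softmax classification on HQ samples (a margin-based
            # loss such as ArcFace/CosFace would slot in here).
            logits = feats[hq] @ F.normalize(self.proxies, dim=1).t() / self.tau
            loss = loss + F.cross_entropy(logits, labels[hq])
        if lq.any():
            # InfoNCE on LQ samples: positive = the augmented view,
            # negatives = queued features from the genuine encoder.
            q = feats[lq]
            pos = (q * F.normalize(feats_aug[lq], dim=1)).sum(dim=1, keepdim=True)
            neg = q @ self.queue.t()
            logits = torch.cat([pos, neg], dim=1) / self.tau
            targets = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
            loss = loss + F.cross_entropy(logits, targets)
        self._enqueue(feats.detach())
        return loss
```

In a training step one would compute `loss = criterion(encoder(x), encoder(augment(x)), labels, quality)`, with per-sample quality scores from any off-the-shelf face image quality estimator. Storing negatives produced by the genuine (current) encoder, rather than a frozen or momentum copy, mirrors the abstract's motivation for a queue that keeps pace with model updates.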
Related papers
- Rank-based No-reference Quality Assessment for Face Swapping [88.53827937914038] (2024-06-04)
The metric for measuring quality in most face swapping methods relies on several distances between the manipulated images and the source image.
We present a novel no-reference image quality assessment (NR-IQA) method specifically designed for face swapping.
- Dual-Branch Network for Portrait Image Quality Assessment [76.27716058987251] (2024-05-14)
We introduce a dual-branch network for portrait image quality assessment (PIQA).
We utilize two backbone networks (i.e., Swin Transformer-B) to extract quality-aware features from the entire portrait image and from the facial image cropped from it.
We leverage LIQE, an image scene classification and quality assessment model, to capture quality-aware and scene-specific features as auxiliary features.
- Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946] (2024-04-23)
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch with prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
- Quality-Aware Image-Text Alignment for Real-World Image Quality Assessment [8.431867616409958] (2024-03-17)
No-Reference Image Quality Assessment (NR-IQA) focuses on designing methods to measure image quality in alignment with human perception when a high-quality reference image is unavailable.
The reliance on annotated Mean Opinion Scores (MOS) in the majority of state-of-the-art NR-IQA approaches limits their scalability and broader applicability to real-world scenarios.
We propose QualiCLIP, a CLIP-based self-supervised opinion-unaware method that does not require labeled MOS.
- Blind Image Quality Assessment via Vision-Language Correspondence: A Multitask Learning Perspective [93.56647950778357] (2023-03-27)
Blind image quality assessment (BIQA) predicts the human perception of image quality without any reference information.
We develop a general and automated multitask learning scheme for BIQA to exploit auxiliary knowledge from other tasks.
- Quality-aware Pre-trained Models for Blind Image Quality Assessment [15.566552014530938] (2023-03-01)
Blind image quality assessment (BIQA) aims to automatically evaluate the perceived quality of a single image.
In this paper, we propose to solve the problem with a pretext task customized for BIQA in a self-supervised learning manner.
- AdaFace: Quality Adaptive Margin for Face Recognition [56.99208144386127] (2022-04-03)
We introduce another aspect of adaptiveness into the loss function, namely image quality.
We propose a new loss function that emphasizes samples of different difficulties based on their image quality (see the sketch after this list).
Our method, AdaFace, improves face recognition performance over the state-of-the-art (SoTA) on four datasets.
- FaceQgen: Semi-Supervised Deep Learning for Face Image Quality Assessment [19.928262020265965] (2022-01-03)
FaceQgen is a no-reference quality assessment approach for face images.
It generates a scalar quality measure related to face recognition accuracy.
It is trained from scratch using the SCface database.
- CR-FIQA: Face Image Quality Assessment by Learning Sample Relative Classifiability [2.3624125155742055] (2021-12-13)
We propose a novel learning paradigm that learns internal network observations during the training process.
Our proposed CR-FIQA uses this paradigm to estimate the face image quality of a sample by predicting its relative classifiability.
We demonstrate the superiority of our proposed CR-FIQA over state-of-the-art (SOTA) FIQA algorithms.
- Image Quality Assessment using Contrastive Learning [50.265638572116984] (2021-10-25)
We train a deep Convolutional Neural Network (CNN) with a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance compared with state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
- No-Reference Image Quality Assessment via Feature Fusion and Multi-Task Learning [29.19484863898778] (2020-06-06)
Blind or no-reference image quality assessment (NR-IQA) is a fundamental, unsolved, and challenging problem.
We propose a simple yet effective general-purpose no-reference (NR) image quality assessment framework based on multi-task learning.
Our model employs distortion types as well as subjective human scores to predict image quality.
This list is automatically generated from the titles and abstracts of the papers on this site.