A Quality Aware Sample-to-Sample Comparison for Face Recognition
- URL: http://arxiv.org/abs/2306.04000v1
- Date: Tue, 6 Jun 2023 20:28:04 GMT
- Title: A Quality Aware Sample-to-Sample Comparison for Face Recognition
- Authors: Mohammad Saeed Ebrahimi Saadabadi, Sahar Rahimi Malakshan, Ali Zafari,
Moktari Mostofa, Nasser M. Nasrabadi
- Abstract summary: This work integrates a quality-aware learning process at the sample level into the classification training paradigm (QAFace).
Our method adaptively finds and assigns more attention to the recognizable low-quality samples in the training datasets.
- Score: 13.96448286983864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently available face datasets mainly consist of a large number of
high-quality and a small number of low-quality samples. As a result, a Face
Recognition (FR) network fails to learn the distribution of low-quality samples
since they are less frequent during training (underrepresented). Moreover,
current state-of-the-art FR training paradigms are based on the
sample-to-center comparison (i.e., a Softmax-based classifier), which results in
a mismatch between the training metric (sample-to-center similarity) and the
test metric (sample-to-sample similarity between embeddings). This work integrates a
quality-aware learning process at the sample level into the classification
training paradigm (QAFace). In this regard, Softmax centers are adaptively
guided to pay more attention to low-quality samples by using a quality-aware
function. Accordingly, QAFace adds a quality-based adjustment to the updating
procedure of the Softmax-based classifier to improve the performance on the
underrepresented low-quality samples. Our method adaptively finds and assigns
more attention to the recognizable low-quality samples in the training
datasets. In addition, QAFace ignores the unrecognizable low-quality samples
using the feature magnitude as a proxy for quality. As a result, QAFace
prevents class centers from getting distracted from the optimal direction. The
proposed method outperforms state-of-the-art algorithms in extensive
experiments on the CFP-FP, LFW, CPLFW, CALFW, AgeDB, IJB-B, and IJB-C datasets.
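The abstract gives no formulas, but the mechanism it describes can be sketched: weight each sample's pull on the Softmax centers by a quality term derived from its feature magnitude, suppressing the lowest-magnitude (unrecognizable) samples. The band thresholds and sigmoid gates below are illustrative assumptions, not QAFace's published functions.

```python
import torch

def quality_weight(z: torch.Tensor, low: float = 5.0, high: float = 20.0,
                   sharpness: float = 1.0) -> torch.Tensor:
    """Hypothetical per-sample weight from the embedding magnitude ||z||."""
    norm = z.norm(dim=1)                                  # quality proxy per sample
    keep = torch.sigmoid(sharpness * (norm - low))        # gate out unrecognizable samples
    emphasize = torch.sigmoid(sharpness * (high - norm))  # stress recognizable low-quality ones
    return keep * emphasize

def qa_softmax_loss(logits: torch.Tensor, labels: torch.Tensor,
                    z: torch.Tensor) -> torch.Tensor:
    """Weighted cross-entropy: class centers are pulled hardest by recognizable
    low-quality samples and barely at all by unrecognizable ones."""
    ce = torch.nn.functional.cross_entropy(logits, labels, reduction="none")
    return (quality_weight(z).detach() * ce).mean()  # weight only modulates the update
```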
Related papers
- Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare [99.57567498494448]
We introduce Compare2Score, an all-around LMM-based no-reference IQA model.
During training, we generate scaled-up comparative instructions by comparing images from the same IQA dataset.
Experiments on nine IQA datasets validate that the Compare2Score effectively bridges text-defined comparative levels during training.
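The summary says comparative training signals are bridged into a quality score; a standard recipe for that step is Bradley-Terry scoring, sketched below. Whether Compare2Score uses exactly this model is an assumption.

```python
import numpy as np

def bradley_terry(win_prob: np.ndarray, iters: int = 200) -> np.ndarray:
    """win_prob[i, j]: probability that image i is judged better than image j."""
    P = win_prob.copy()
    np.fill_diagonal(P, 0.0)            # ignore self-comparisons
    n_ij = P + P.T                      # comparisons per pair
    wins = P.sum(axis=1)                # expected wins per image
    p = np.ones(len(P))
    for _ in range(iters):              # minorize-maximize updates for BT strengths
        p = wins / (n_ij / (p[:, None] + p[None, :])).sum(axis=1)
        p /= p.sum()                    # fix the arbitrary scale
    return np.log(p + 1e-12)            # log-strength as the quality score
```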
arXiv Detail & Related papers (2024-05-29T17:26:09Z)
- Mashee at SemEval-2024 Task 8: The Impact of Samples Quality on the Performance of In-Context Learning for Machine Text Classification [0.0]
We employ the chi-square test to identify high-quality samples and compare the results with those obtained using low-quality samples.
Our findings demonstrate that utilizing high-quality samples leads to improved performance with respect to all evaluated metrics.
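The summary does not spell out how the chi-square test is applied; one plausible reading, sketched below, scores each candidate by the chi-square statistics of its bag-of-words features against the labels and keeps the top scorers as in-context examples. The scoring rule and toy data are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

texts = ["the model repeats phrases verbatim", "I scribbled this on the train",
         "as an AI language model I cannot", "lunch was great, see you at five"]
labels = [1, 0, 1, 0]  # 1 = machine-generated, 0 = human-written (toy labels)

X = CountVectorizer().fit_transform(texts)
chi2_scores, _ = chi2(X, labels)           # per-feature chi-square statistic
sample_scores = X @ chi2_scores            # hypothetical per-sample score:
                                           # sum of its features' chi2 values
top = np.argsort(sample_scores)[::-1][:2]  # keep the highest-scoring samples
print([texts[i] for i in top])
```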
arXiv Detail & Related papers (2024-05-28T12:47:43Z)
- GraFIQs: Face Image Quality Assessment Using Gradient Magnitudes [9.170455788675836]
Face Image Quality Assessment (FIQA) estimates the utility of face images for automated face recognition (FR) systems.
We propose in this work a novel approach to assess the quality of face images based on inspecting the required changes in the pre-trained FR model weights.
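The loss that GraFIQs backpropagates is not given in this summary; the sketch below substitutes a stand-in signal (embedding disagreement between an image and its horizontal flip) and reads quality from the summed gradient magnitudes over the frozen model's weights.

```python
import torch

def gradient_magnitude_quality(model: torch.nn.Module, img: torch.Tensor) -> float:
    """img: one NCHW image batch of size 1; model: a frozen, pre-trained FR net."""
    model.eval()
    model.zero_grad()
    emb = model(img)                             # embedding of the original
    emb_flip = model(torch.flip(img, dims=[3]))  # embedding of the width-flipped copy
    loss = (emb - emb_flip).pow(2).sum()         # stand-in signal, not GraFIQs' loss
    loss.backward()
    total = sum(p.grad.abs().sum().item()
                for p in model.parameters() if p.grad is not None)
    return total  # larger "required weight change" -> hypothetically lower quality
```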
arXiv Detail & Related papers (2024-04-18T14:07:08Z)
- Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA).
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
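CoPA's distortion-aware positive/negative construction and multi-view fusion are not detailed here; below is only the generic InfoNCE objective that such contrastive pre-training typically minimizes, assuming two aligned views per point cloud.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, d) projections of two views; row i of z1 pairs with row i of z2."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                         # (N, N) scaled cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)
```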
arXiv Detail & Related papers (2024-03-15T07:16:07Z)
- Deep Boosting Multi-Modal Ensemble Face Recognition with Sample-Level Weighting [11.39204323420108]
Deep convolutional neural networks have achieved remarkable success in face recognition.
The current training benchmarks exhibit an imbalanced quality distribution.
This poses issues for generalization on hard samples since they are underrepresented during training.
Inspired by the well-known AdaBoost, we propose a sample-level weighting approach to incorporate the importance of different samples into the FR loss.
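No weighting formula appears in the summary; as a minimal sketch in the AdaBoost spirit, a sample's weight below grows exponentially with its current loss, so hard (often underrepresented) samples drive the update.

```python
import torch

def boosted_loss(logits: torch.Tensor, labels: torch.Tensor,
                 beta: float = 1.0) -> torch.Tensor:
    ce = torch.nn.functional.cross_entropy(logits, labels, reduction="none")
    w = torch.exp(beta * ce.detach().clamp(max=10.0))  # harder samples, larger weight
    w = w / w.sum()                                    # normalize to a distribution
    return (w * ce).sum()
```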
arXiv Detail & Related papers (2023-08-18T01:44:54Z)
- Test Time Adaptation for Blind Image Quality Assessment [20.50795362928567]
We introduce two novel quality-relevant auxiliary tasks at the batch and sample levels to enable TTA for blind IQA.
Our experiments reveal that even using a small batch of images from the test distribution helps achieve significant improvement in performance.
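The paper's actual batch- and sample-level auxiliary tasks are not described here; this stand-in sketch adapts only the normalization parameters at test time under one quality-relevant constraint: a blurred copy should not score above the original.

```python
import torch
import torchvision.transforms.functional as TF

def tta_step(model: torch.nn.Module, batch: torch.Tensor, lr: float = 1e-4) -> float:
    """batch: NCHW test images; assumes the model has BN/norm parameters so named."""
    params = [p for name, p in model.named_parameters()
              if "bn" in name or "norm" in name]
    opt = torch.optim.Adam(params, lr=lr)
    q_orig = model(batch)                              # predicted quality scores
    q_blur = model(TF.gaussian_blur(batch, kernel_size=9))
    loss = torch.relu(q_blur - q_orig).mean()          # blur must not raise the score
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```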
arXiv Detail & Related papers (2023-07-27T09:43:06Z)
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
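AutoSMOTE's reinforcement-learned decision hierarchy is not reproduced here; below is just the SMOTE primitive it builds on: each synthetic minority sample interpolates between a minority point and one of its k nearest minority neighbors.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(X_min: np.ndarray, n_new: int, k: int = 5, seed: int = 0) -> np.ndarray:
    """X_min: (n, d) minority-class samples, n > k; returns (n_new, d) synthetics."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)                     # idx[:, 0] is the point itself
    base = rng.integers(0, len(X_min), n_new)         # random anchor per synthetic
    neigh = idx[base, rng.integers(1, k + 1, n_new)]  # random neighbor of each anchor
    lam = rng.random((n_new, 1))                      # interpolation coefficient
    return X_min[base] + lam * (X_min[neigh] - X_min[base])
```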
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
- AdaFace: Quality Adaptive Margin for Face Recognition [56.99208144386127]
We introduce another aspect of adaptiveness in the loss function, namely the image quality.
We propose a new loss function that emphasizes samples of different difficulties based on their image quality.
Our method, AdaFace, improves the face recognition performance over the state-of-the-art (SoTA) on four datasets.
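The summary omits the margin functions, so the sketch below is one plausible instantiation: the batch-normalized feature norm acts as the image-quality proxy and shifts both an angular and an additive margin on the ground-truth logit. Signs and constants are illustrative, not the paper's.

```python
import torch

def quality_adaptive_logits(cos_theta: torch.Tensor, norms: torch.Tensor,
                            labels: torch.Tensor, m: float = 0.4,
                            s: float = 64.0) -> torch.Tensor:
    """cos_theta: (N, C) cosines to class centers; norms: (N,) feature magnitudes."""
    # Batch-normalized feature norm as a quality proxy, clipped to [-1, 1].
    z = ((norms - norms.mean()) / (norms.std() + 1e-6)).clamp(-1, 1).detach()
    g_angle = -m * z        # illustrative: angular margin scaled by the proxy
    g_add = m * z + m       # illustrative: additive margin shifted by the proxy
    theta = torch.acos(cos_theta.clamp(-1 + 1e-7, 1 - 1e-7))
    adjusted = torch.cos(theta + g_angle.unsqueeze(1)) - g_add.unsqueeze(1)
    rows = torch.arange(cos_theta.size(0))
    out = cos_theta.clone()
    out[rows, labels] = adjusted[rows, labels]  # margin only on the true class
    return s * out
```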
arXiv Detail & Related papers (2022-04-03T01:23:41Z)
- CR-FIQA: Face Image Quality Assessment by Learning Sample Relative Classifiability [2.3624125155742055]
We propose a novel learning paradigm that learns internal network observations during the training process.
Our proposed CR-FIQA uses this paradigm to estimate the face image quality of a sample by predicting its relative classifiability.
We demonstrate the superiority of our proposed CR-FIQA over state-of-the-art (SOTA) FIQA algorithms.
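The summary names "relative classifiability" without a formula; one hedged reading, sketched below, scores a sample by its similarity to its own class center relative to its similarity to the nearest other center. The denominator shift is an assumption to keep the ratio well defined.

```python
import torch
import torch.nn.functional as F

def relative_classifiability(emb: torch.Tensor, centers: torch.Tensor,
                             labels: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """emb: (N, d) features; centers: (C, d) class centers; labels: (N,)."""
    sims = F.normalize(emb, dim=1) @ F.normalize(centers, dim=1).t()  # (N, C)
    rows = torch.arange(len(labels))
    ccs = sims[rows, labels]                 # similarity to the sample's own center
    sims_neg = sims.clone()
    sims_neg[rows, labels] = -2.0            # mask out the own class
    nnccs = sims_neg.max(dim=1).values       # nearest negative center similarity
    return ccs / (nnccs + 1.0 + eps)         # shifted so the denominator stays positive
```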
arXiv Detail & Related papers (2021-12-13T12:18:43Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes CNN backbone and transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
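A compact sketch of the described pipeline: a CNN backbone yields a feature map whose spatial positions become tokens for a transformer encoder, followed by a regression head for the quality score. All layer sizes here are illustrative, not the paper's.

```python
import torch
import torch.nn as nn
import torchvision

class CnnTransformerIQA(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # (N, 512, H', W')
        self.proj = nn.Conv2d(512, d_model, kernel_size=1)         # match token width
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.proj(self.cnn(x))                    # (N, d, H', W')
        tokens = f.flatten(2).transpose(1, 2)         # (N, H'*W', d) token sequence
        enc = self.encoder(tokens)                    # transformer features
        return self.head(enc.mean(dim=1)).squeeze(1)  # pooled -> quality score
```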
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Task-Specific Normalization for Continual Learning of Blind Image Quality Models [105.03239956378465]
We present a simple yet effective continual learning method for blind image quality assessment (BIQA).
The key step in our approach is to freeze all convolution filters of a pre-trained deep neural network (DNN) for an explicit promise of stability.
We assign each new IQA dataset (i.e., task) a prediction head, and load the corresponding normalization parameters to produce a quality score.
The final quality estimate is computed by a weighted summation of predictions from all heads with a lightweight $K$-means gating mechanism.
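A hedged sketch of that gating: each task keeps its own head, and distances to a K-means codebook over features produce the mixture weights that combine the heads' predictions. The distance-to-softmax weighting below is one plausible choice, not necessarily the paper's.

```python
import torch

def gated_quality(feat: torch.Tensor, heads, centroids: torch.Tensor,
                  tau: float = 1.0) -> torch.Tensor:
    """feat: (d,) image feature; heads: one scalar-scoring callable per task;
    centroids: (K, d) K-means centroids, one per task (assumed one-to-one)."""
    d2 = ((centroids - feat) ** 2).sum(dim=1)       # squared distance to each centroid
    w = torch.softmax(-d2 / tau, dim=0)             # closer centroid -> higher weight
    scores = torch.stack([h(feat) for h in heads])  # (K,) per-head predictions
    return (w * scores).sum()                       # weighted summation of all heads
```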
arXiv Detail & Related papers (2021-07-28T15:21:01Z)