GraFIQs: Face Image Quality Assessment Using Gradient Magnitudes
- URL: http://arxiv.org/abs/2404.12203v1
- Date: Thu, 18 Apr 2024 14:07:08 GMT
- Title: GraFIQs: Face Image Quality Assessment Using Gradient Magnitudes
- Authors: Jan Niklas Kolf, Naser Damer, Fadi Boutros
- Abstract summary: Face Image Quality Assessment (FIQA) estimates the utility of face images for automated face recognition (FR) systems.
In this work, we propose a novel approach that assesses the quality of face images by inspecting the changes required in the pre-trained FR model weights.
- Score: 9.170455788675836
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Face Image Quality Assessment (FIQA) estimates the utility of face images for automated face recognition (FR) systems. In this work, we propose a novel approach that assesses the quality of a face image by inspecting the changes required in the pre-trained FR model weights to minimize differences between the testing sample and the distribution of the FR training dataset. To achieve that, we propose quantifying the discrepancy in Batch Normalization statistics (BNS), including mean and variance, between those recorded during FR training and those obtained by processing testing samples through the pretrained FR model. We then generate gradient magnitudes of the pretrained FR weights by backpropagating the BNS discrepancy through the pretrained model. The cumulative absolute sum of these gradient magnitudes serves as the face image quality score of our approach. Through comprehensive experimentation, we demonstrate the effectiveness of our training-free and quality-labeling-free approach, achieving performance competitive with recent state-of-the-art FIQA approaches without relying on quality labels, trained regression networks, specialized architectures, or specially designed and optimized loss functions.
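The abstract's pipeline (compare per-layer BN statistics against the stored running statistics, backpropagate the discrepancy, and sum the absolute weight gradients) can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions: the toy backbone, the squared-difference discrepancy, and all names are illustrative, not the paper's actual FR model or exact formulation.

```python
# Hypothetical sketch of the GraFIQs idea: score a face image by how much
# the pretrained weights would need to change to match the training-time
# Batch Normalization statistics. Details are assumptions, not the paper's code.
import torch
import torch.nn as nn

def grafiqs_score(model: nn.Module, image: torch.Tensor) -> float:
    """Return the cumulative absolute gradient magnitude of the model
    weights w.r.t. the BN-statistics discrepancy (larger = bigger shift)."""
    discrepancies = {}

    def bn_hook(mod, inp, out):
        x = inp[0]
        # Batch statistics of the test sample at this BN layer
        mean = x.mean(dim=(0, 2, 3))
        var = x.var(dim=(0, 2, 3), unbiased=False)
        # Squared difference to the running (training-time) statistics
        discrepancies[mod] = ((mean - mod.running_mean) ** 2).sum() + \
                             ((var - mod.running_var) ** 2).sum()

    handles = [m.register_forward_hook(bn_hook)
               for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
    model.zero_grad()
    model(image)
    loss = sum(discrepancies.values())
    loss.backward()  # gradients of the BNS discrepancy w.r.t. the weights
    for h in handles:
        h.remove()
    # Cumulative absolute sum of the weight gradients serves as the score
    return sum(p.grad.abs().sum().item()
               for p in model.parameters() if p.grad is not None)

# Toy stand-in for a pretrained FR backbone (not a real FR model)
torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),
                      nn.BatchNorm2d(8), nn.ReLU())
model.eval()  # BN uses its stored running statistics in the forward pass
score = grafiqs_score(model, torch.randn(1, 3, 32, 32))
```

Note that no training is involved: a single forward and backward pass over the frozen, pretrained model yields the score, which is what makes the approach training-free and label-free.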
Related papers
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA).
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
arXiv Detail & Related papers (2024-03-15T07:16:07Z)
- IG-FIQA: Improving Face Image Quality Assessment through Intra-class Variance Guidance robust to Inaccurate Pseudo-Labels [13.567049202308981]
We present IG-FIQA, a novel approach to guide FIQA training, introducing a weight parameter to alleviate the adverse impact of these classes.
On various benchmark datasets, our proposed method, IG-FIQA, achieved novel state-of-the-art (SOTA) performance.
arXiv Detail & Related papers (2024-03-13T05:15:43Z)
- Test Time Adaptation for Blind Image Quality Assessment [20.50795362928567]
We introduce two novel quality-relevant auxiliary tasks at the batch and sample levels to enable TTA for blind IQA.
Our experiments reveal that even using a small batch of images from the test distribution helps achieve significant improvement in performance.
arXiv Detail & Related papers (2023-07-27T09:43:06Z)
- A Quality Aware Sample-to-Sample Comparison for Face Recognition [13.96448286983864]
This work integrates a quality-aware learning process at the sample level into the classification training paradigm (QAFace).
Our method adaptively finds and assigns more attention to the recognizable low-quality samples in the training datasets.
arXiv Detail & Related papers (2023-06-06T20:28:04Z)
- DifFIQA: Face Image Quality Assessment Using Denoising Diffusion Probabilistic Models [1.217503190366097]
Face image quality assessment (FIQA) techniques aim to mitigate these performance degradations.
We present a powerful new FIQA approach, named DifFIQA, which relies on denoising diffusion probabilistic models (DDPM).
Because the diffusion-based perturbations are computationally expensive, we also distill the knowledge encoded in DifFIQA into a regression-based quality predictor, called DifFIQA(R).
arXiv Detail & Related papers (2023-05-09T21:03:13Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
- CR-FIQA: Face Image Quality Assessment by Learning Sample Relative Classifiability [2.3624125155742055]
We propose a novel learning paradigm that learns internal network observations during the training process.
Our proposed CR-FIQA uses this paradigm to estimate the face image quality of a sample by predicting its relative classifiability.
We demonstrate the superiority of our proposed CR-FIQA over state-of-the-art (SOTA) FIQA algorithms.
arXiv Detail & Related papers (2021-12-13T12:18:43Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes CNN backbone and transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Task-Specific Normalization for Continual Learning of Blind Image Quality Models [105.03239956378465]
We present a simple yet effective continual learning method for blind image quality assessment (BIQA).
The key step in our approach is to freeze all convolution filters of a pre-trained deep neural network (DNN) for an explicit promise of stability.
We assign each new IQA dataset (i.e., task) a prediction head, and load the corresponding normalization parameters to produce a quality score.
The final quality estimate is computed by a weighted summation of predictions from all heads with a lightweight $K$-means gating mechanism.
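The gating step just described can be sketched as follows. This is a hypothetical NumPy illustration: the softmax-style distance weighting, the function name, and the toy data are assumptions, not the cited paper's exact mechanism.

```python
# Hypothetical sketch: weight per-task quality heads by the distance of an
# image feature to each task's K-means centroid, then take a weighted
# summation as the final quality estimate. Names and details are assumed.
import numpy as np

def gated_quality(feature, centroids, head_scores, temperature=1.0):
    """feature: (d,) image feature; centroids: (T, d), one per IQA task;
    head_scores: (T,) quality predictions from the T task-specific heads."""
    d2 = ((centroids - feature) ** 2).sum(axis=1)  # squared distances
    w = np.exp(-d2 / temperature)   # closer centroid -> larger weight
    w /= w.sum()                    # normalize to a convex combination
    return float(w @ head_scores)   # weighted summation over all heads

# Toy example: 3 tasks, 8-dim features; the input sits near task 1's centroid
rng = np.random.default_rng(0)
centroids = rng.normal(size=(3, 8)) * 3
feature = centroids[1] + 0.01 * rng.normal(size=8)
head_scores = np.array([0.2, 0.9, 0.4])
q = gated_quality(feature, centroids, head_scores)
```

Because the gating is a soft assignment rather than a hard argmax, the model degrades gracefully when a test image lies between task distributions.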
arXiv Detail & Related papers (2021-07-28T15:21:01Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.