FROQ: Observing Face Recognition Models for Efficient Quality Assessment
- URL: http://arxiv.org/abs/2509.17689v1
- Date: Mon, 22 Sep 2025 12:29:44 GMT
- Title: FROQ: Observing Face Recognition Models for Efficient Quality Assessment
- Authors: Žiga Babnik, Deepak Kumar Jain, Peter Peer, Vitomir Štruc
- Abstract summary: Face Recognition (FR) plays a crucial role in many critical (high-stakes) applications. Most state-of-the-art FIQA techniques rely on extensive supervised training to achieve accurate quality estimation. In this paper, we introduce FROQ, a semi-supervised, training-free approach to estimate face-image quality.
- Score: 5.154970335677459
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face Recognition (FR) plays a crucial role in many critical (high-stakes) applications, where errors in the recognition process can lead to serious consequences. Face Image Quality Assessment (FIQA) techniques enhance FR systems by providing quality estimates of face samples, enabling the systems to discard samples that are unsuitable for reliable recognition or lead to low-confidence recognition decisions. Most state-of-the-art FIQA techniques rely on extensive supervised training to achieve accurate quality estimation. In contrast, unsupervised techniques eliminate the need for additional training but tend to be slower and typically exhibit lower performance. In this paper, we introduce FROQ (Face Recognition Observer of Quality), a semi-supervised, training-free approach that leverages specific intermediate representations within a given FR model to estimate face-image quality, and combines the efficiency of supervised FIQA models with the training-free approach of unsupervised methods. A simple calibration step based on pseudo-quality labels allows FROQ to uncover specific representations, useful for quality assessment, in any modern FR model. To generate these pseudo-labels, we propose a novel unsupervised FIQA technique based on sample perturbations. Comprehensive experiments with four state-of-the-art FR models and eight benchmark datasets show that FROQ leads to highly competitive results compared to the state-of-the-art, achieving both strong performance and efficient runtime, without requiring explicit training.
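The abstract describes generating pseudo-quality labels from sample perturbations: a face whose embedding stays stable under small input perturbations is treated as higher quality. The paper does not specify the perturbation scheme here, so the following is only an illustrative sketch under my own assumptions (Gaussian input noise, mean cosine similarity as the score, and a random linear projection standing in for a real FR model):

```python
import numpy as np

def pseudo_quality(embed, image, n_perturb=8, sigma=0.05, seed=0):
    """Perturbation-based pseudo-quality score (illustrative sketch).

    Score = mean cosine similarity between the embedding of the original
    sample and embeddings of randomly perturbed copies; embeddings of
    high-quality samples should be stable under small perturbations.
    """
    rng = np.random.default_rng(seed)
    ref = embed(image)
    ref = ref / np.linalg.norm(ref)
    sims = []
    for _ in range(n_perturb):
        noisy = image + rng.normal(0.0, sigma, size=image.shape)
        emb = embed(noisy)
        emb = emb / np.linalg.norm(emb)
        sims.append(float(ref @ emb))
    return float(np.mean(sims))

# Toy stand-in for an FR model: a fixed random linear projection.
rng = np.random.default_rng(42)
W = rng.normal(size=(128, 32))
embed = lambda x: x @ W

clean = rng.normal(size=128)   # stands in for a sharp, well-lit face
degraded = 0.05 * clean        # stands in for a washed-out, low-contrast one

q_clean = pseudo_quality(embed, clean)
q_degraded = pseudo_quality(embed, degraded)
print(f"clean: {q_clean:.3f}  degraded: {q_degraded:.3f}")
```

The degraded sample's embedding sits closer to the noise floor, so its cosine similarity under perturbation drops, yielding a lower pseudo-quality score. In FROQ these scores would then calibrate which intermediate FR-model representations are informative for quality; that calibration step is not sketched here.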
Related papers
- IQPFR: An Image Quality Prior for Blind Face Restoration and Beyond [56.99331967165238]
Blind Face Restoration (BFR) addresses the challenge of reconstructing degraded low-quality (LQ) facial images into high-quality (HQ) outputs. We propose a novel framework that incorporates an Image Quality Prior (IQP) derived from No-Reference Image Quality Assessment (NR-IQA) models. Our method outperforms state-of-the-art techniques across multiple benchmarks.
arXiv Detail & Related papers (2025-03-12T11:39:51Z)
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [73.6767681305851]
Blind image quality assessment (IQA) in the wild presents significant challenges. Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem. Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion priors-based IQA.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z)
- GraFIQs: Face Image Quality Assessment Using Gradient Magnitudes [9.170455788675836]
Face Image Quality Assessment (FIQA) estimates the utility of face images for automated face recognition (FR) systems.
We propose in this work a novel approach to assess the quality of face images based on inspecting the required changes in the pre-trained FR model weights.
arXiv Detail & Related papers (2024-04-18T14:07:08Z)
- Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA).
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
arXiv Detail & Related papers (2024-03-15T07:16:07Z)
- A Quality Aware Sample-to-Sample Comparison for Face Recognition [13.96448286983864]
This work integrates a quality-aware learning process at the sample level into the classification training paradigm (QAFace).
Our method adaptively finds and assigns more attention to the recognizable low-quality samples in the training datasets.
arXiv Detail & Related papers (2023-06-06T20:28:04Z)
- Optimization-Based Improvement of Face Image Quality Assessment Techniques [5.831942593046074]
Face image quality assessment (FIQA) techniques try to infer sample-quality information from the input face images that can aid with the recognition process.
We present in this paper a supervised quality-label optimization approach, aimed at improving the performance of existing FIQA techniques.
We evaluate the proposed approach in comprehensive experiments with six state-of-the-art FIQA approaches.
arXiv Detail & Related papers (2023-05-24T08:06:12Z)
- DifFIQA: Face Image Quality Assessment Using Denoising Diffusion Probabilistic Models [1.217503190366097]
Face image quality assessment (FIQA) techniques aim to mitigate these performance degradations.
We present a powerful new FIQA approach, named DifFIQA, which relies on denoising diffusion probabilistic models (DDPM).
Because the diffusion-based perturbations are computationally expensive, we also distill the knowledge encoded in DifFIQA into a regression-based quality predictor, called DifFIQA(R).
arXiv Detail & Related papers (2023-05-09T21:03:13Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
- CR-FIQA: Face Image Quality Assessment by Learning Sample Relative Classifiability [2.3624125155742055]
We propose a novel learning paradigm that learns internal network observations during the training process.
Our proposed CR-FIQA uses this paradigm to estimate the face image quality of a sample by predicting its relative classifiability.
We demonstrate the superiority of our proposed CR-FIQA over state-of-the-art (SOTA) FIQA algorithms.
arXiv Detail & Related papers (2021-12-13T12:18:43Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach for training it on both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.