DifFIQA: Face Image Quality Assessment Using Denoising Diffusion
Probabilistic Models
- URL: http://arxiv.org/abs/2305.05768v1
- Date: Tue, 9 May 2023 21:03:13 GMT
- Title: DifFIQA: Face Image Quality Assessment Using Denoising Diffusion
Probabilistic Models
- Authors: Žiga Babnik, Peter Peer, Vitomir Štruc
- Abstract summary: Face image quality assessment (FIQA) techniques aim to mitigate the performance degradations caused by low-quality facial data.
We present a powerful new FIQA approach, named DifFIQA, which relies on denoising diffusion probabilistic models (DDPM).
Because the diffusion-based perturbations are computationally expensive, we also distill the knowledge encoded in DifFIQA into a regression-based quality predictor, called DifFIQA(R).
- Score: 1.217503190366097
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern face recognition (FR) models excel in constrained scenarios, but often
suffer from decreased performance when deployed in unconstrained (real-world)
environments due to uncertainties surrounding the quality of the captured
facial data. Face image quality assessment (FIQA) techniques aim to mitigate
these performance degradations by providing FR models with sample-quality
predictions that can be used to reject low-quality samples and reduce false
match errors. However, despite steady improvements, ensuring reliable quality
estimates across facial images with diverse characteristics remains
challenging. In this paper, we present a powerful new FIQA approach, named
DifFIQA, which relies on denoising diffusion probabilistic models (DDPM) and
ensures highly competitive results. The main idea behind the approach is to
utilize the forward and backward processes of DDPMs to perturb facial images
and quantify the impact of these perturbations on the corresponding image
embeddings for quality prediction. Because the diffusion-based perturbations
are computationally expensive, we also distill the knowledge encoded in DifFIQA
into a regression-based quality predictor, called DifFIQA(R), that balances
performance and execution time. We evaluate both models in comprehensive
experiments on 7 datasets, with 4 target FR models and against 10
state-of-the-art FIQA techniques with highly encouraging results. The source
code will be made publicly available.
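For intuition only, here is a minimal sketch of the core idea in plain PyTorch: noise an image with a few forward-diffusion steps, restore it with a denoiser, and read quality off the stability of the face embedding under this perturbation. The `fr_model` and `denoiser` below are placeholders (a random linear embedder and an identity map), and the timestep, noise schedule and scoring rule are illustrative assumptions rather than the authors' implementation.
```python
import torch
import torch.nn.functional as F

# Placeholder face-recognition embedder and DDPM denoiser; in practice these would
# be a pre-trained FR network and a trained diffusion UNet (both assumptions here).
fr_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 112 * 112, 512))
denoiser = torch.nn.Identity()  # stands in for the learned reverse (denoising) process

def forward_diffuse(x, t, betas):
    """Closed-form forward process: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * noise."""
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    a_bar = alphas_bar[t]
    noise = torch.randn_like(x)
    return a_bar.sqrt() * x + (1.0 - a_bar).sqrt() * noise

@torch.no_grad()
def diffusion_quality_score(x, t=50, T=1000):
    """Higher score = embedding is more stable under diffusion perturbation = higher quality."""
    betas = torch.linspace(1e-4, 0.02, T)
    e_ref = F.normalize(fr_model(x), dim=-1)   # embedding of the original image
    x_t = forward_diffuse(x, t, betas)         # noised image (forward process)
    x_hat = denoiser(x_t)                      # restored image (backward process)
    e_noisy = F.normalize(fr_model(x_t), dim=-1)
    e_restored = F.normalize(fr_model(x_hat), dim=-1)
    # Quality proxy: average cosine similarity between the reference embedding
    # and the embeddings of the perturbed / restored images.
    return 0.5 * (F.cosine_similarity(e_ref, e_noisy) + F.cosine_similarity(e_ref, e_restored))

score = diffusion_quality_score(torch.rand(1, 3, 112, 112))
```
Under this reading, DifFIQA(R) corresponds to training a lightweight regressor to predict such scores directly from the input image, so the costly diffusion passes are only needed when generating training targets.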
Related papers
- GraFIQs: Face Image Quality Assessment Using Gradient Magnitudes [9.170455788675836]
Face Image Quality Assessment (FIQA) estimates the utility of face images for automated face recognition (FR) systems.
We propose in this work a novel approach to assess the quality of face images based on inspecting the required changes in the pre-trained FR model weights.
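A loose sketch of the gradient-magnitude idea (not the paper's exact formulation): backpropagate a probe loss through a frozen, pre-trained FR backbone and read quality from the magnitude of the induced weight gradients. The backbone and the probe loss below are placeholders.
```python
import torch
import torch.nn.functional as F

# Placeholder pre-trained FR backbone; the probe loss is an assumption for illustration.
backbone = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 112 * 112, 512))

def gradient_magnitude_score(x):
    backbone.zero_grad()
    emb = backbone(x)
    # Probe loss: how strongly the weights would have to change to push the
    # embedding toward its own normalized version (a stand-in objective).
    loss = F.mse_loss(emb, F.normalize(emb, dim=-1).detach())
    loss.backward()
    grad_norm = sum(p.grad.abs().sum() for p in backbone.parameters() if p.grad is not None)
    # Larger required weight changes are read as lower quality, hence the negation.
    return -grad_norm.item()

print(gradient_magnitude_score(torch.rand(1, 3, 112, 112)))
```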
arXiv Detail & Related papers (2024-04-18T14:07:08Z)
- Optimization-Based Improvement of Face Image Quality Assessment Techniques [5.831942593046074]
Face image quality assessment (FIQA) techniques try to infer sample-quality information from the input face images that can aid with the recognition process.
We present in this paper a supervised quality-label optimization approach, aimed at improving the performance of existing FIQA techniques.
We evaluate the proposed approach in comprehensive experiments with six state-of-the-art FIQA approaches.
arXiv Detail & Related papers (2023-05-24T08:06:12Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuned model.
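A minimal sketch of the general recipe, under the assumption that "counterfactual samples" are simply masked copies of the training images added to the fine-tuning objective; the masking pattern, fill value and loss weighting are illustrative, and the paper's actual selection and refilling of masked regions is more involved.
```python
import torch
import torch.nn.functional as F

def random_patch_mask(x, patch=16, drop_prob=0.5):
    """Zero out a random subset of non-overlapping patches (assumed masking scheme)."""
    b, c, h, w = x.shape
    mask = (torch.rand(b, 1, h // patch, w // patch) > drop_prob).float()
    mask = F.interpolate(mask, size=(h, w), mode="nearest")
    return x * mask

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 10))  # toy classifier
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

x, y = torch.rand(4, 3, 224, 224), torch.randint(0, 10, (4,))
# Fine-tune on the original images plus their masked counterparts.
loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(random_patch_mask(x)), y)
opt.zero_grad()
loss.backward()
opt.step()
```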
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
- FaceQAN: Face Image Quality Assessment Through Adversarial Noise Exploration [1.217503190366097]
We propose a novel approach to face image quality assessment, called FaceQAN, that is based on adversarial examples.
As such, the proposed approach is the first to link image quality to adversarial attacks.
Experimental results show that FaceQAN achieves competitive results, while exhibiting several desirable characteristics.
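A rough sketch of how adversarial noise can be linked to quality, with a placeholder embedder and a single FGSM-style step (FaceQAN itself aggregates several noise initializations and iterations): embeddings that barely move under attack are read as high quality.
```python
import torch
import torch.nn.functional as F

fr_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 112 * 112, 512))  # placeholder

def adversarial_quality_score(x, eps=2.0 / 255):
    with torch.no_grad():
        e_clean = F.normalize(fr_model(x), dim=-1)
    # Start from a randomly perturbed copy, then take one gradient step that pushes
    # its embedding further away from the clean embedding (FGSM-style ascent).
    x_adv = (x + 0.5 * eps * torch.randn_like(x)).clamp(0, 1).requires_grad_(True)
    loss = -F.cosine_similarity(F.normalize(fr_model(x_adv), dim=-1), e_clean).mean()
    loss.backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1)
    with torch.no_grad():
        e_adv = F.normalize(fr_model(x_adv), dim=-1)
    # Embeddings that barely move under the attack are treated as high quality.
    return F.cosine_similarity(e_clean, e_adv).item()

print(adversarial_quality_score(torch.rand(1, 3, 112, 112)))
```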
arXiv Detail & Related papers (2022-12-05T09:37:32Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
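A minimal sketch of the distillation flow, with toy networks standing in for the transformer-based full-reference teacher and the blind student: the teacher scores (distorted, reference) pairs, and the student learns to reproduce that score from the distorted image alone.
```python
import torch
import torch.nn.functional as F

# Placeholder networks; only the distillation flow is illustrated here.
teacher = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(2 * 3 * 64 * 64, 1))
student = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1))
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

dist, ref = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
with torch.no_grad():
    # Full-reference teacher scores the (distorted, reference) pair -> pseudo-label.
    pseudo = teacher(torch.cat([dist, ref], dim=1))
# Blind student must predict the same score from the distorted image alone.
loss = F.mse_loss(student(dist), pseudo)
opt.zero_grad()
loss.backward()
opt.step()
```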
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
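For illustration, a generic contrastive pairwise (NT-Xent-style) objective over two views of each image with a toy encoder; CONTRIQUE's actual auxiliary task, augmentations and architecture differ.
```python
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 128))  # toy stand-in

def nt_xent(z1, z2, tau=0.1):
    """NT-Xent loss: the two views of the same image are positives, all others negatives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=-1)
    sim = (z @ z.t() / tau).masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

x = torch.rand(16, 3, 64, 64)
view1 = x + 0.05 * torch.randn_like(x)  # assumed augmentation
view2 = x + 0.05 * torch.randn_like(x)
loss = nt_xent(encoder(view1), encoder(view2))
loss.backward()
```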
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Task-Specific Normalization for Continual Learning of Blind Image Quality Models [105.03239956378465]
We present a simple yet effective continual learning method for blind image quality assessment (BIQA).
The key step in our approach is to freeze all convolution filters of a pre-trained deep neural network (DNN) for an explicit promise of stability.
We assign each new IQA dataset (i.e., task) a prediction head, and load the corresponding normalization parameters to produce a quality score.
The final quality estimate is computed as a weighted summation of predictions from all heads with a lightweight $K$-means gating mechanism.
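A compact sketch of the inference-time combination described above, with a frozen stand-in backbone, one head per task, and a soft nearest-centroid gate standing in for the $K$-means gating; all components are placeholders.
```python
import torch

torch.manual_seed(0)
num_tasks, feat_dim = 3, 64

# Frozen shared backbone (stands in for the frozen convolution filters).
backbone = torch.nn.Linear(3 * 32 * 32, feat_dim)
for p in backbone.parameters():
    p.requires_grad_(False)

# One lightweight prediction head per IQA dataset/task.
heads = torch.nn.ModuleList([torch.nn.Linear(feat_dim, 1) for _ in range(num_tasks)])
# Per-task centroids used by the gate (stand-in for learned K-means centroids).
centroids = torch.randn(num_tasks, feat_dim)

def quality(x):
    f = backbone(x.flatten(1))
    scores = torch.cat([h(f) for h in heads], dim=1)   # one prediction per head
    # Gate: tasks whose centroid lies closer to the feature get a larger weight.
    w = torch.softmax(-torch.cdist(f, centroids), dim=1)
    return (w * scores).sum(dim=1)                      # weighted summation of the heads

print(quality(torch.rand(2, 3, 32, 32)))
```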
arXiv Detail & Related papers (2021-07-28T15:21:01Z)
- Inducing Predictive Uncertainty Estimation for Face Recognition [102.58180557181643]
We propose a method for generating image quality training data automatically from 'mated pairs' of face images.
We use the generated data to train a lightweight Predictive Confidence Network, termed PCNet, for estimating the confidence score of a face image.
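A minimal sketch of the idea, assuming the automatic labels are simply the FR-model similarity of each mated pair; the FR embedder and the lightweight predictor below are placeholders.
```python
import torch
import torch.nn.functional as F

fr_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 112 * 112, 512))  # placeholder FR net
pcnet = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 112 * 112, 1))       # lightweight predictor
opt = torch.optim.Adam(pcnet.parameters(), lr=1e-4)

# A "mated pair": two images of the same identity (random tensors here).
img_a, img_b = torch.rand(8, 3, 112, 112), torch.rand(8, 3, 112, 112)
with torch.no_grad():
    # Pseudo confidence/quality label: how well the pair matches under the FR model.
    label = F.cosine_similarity(F.normalize(fr_model(img_a), dim=-1),
                                F.normalize(fr_model(img_b), dim=-1)).unsqueeze(1)
# Train the lightweight network to predict the label from a single image.
loss = F.mse_loss(pcnet(img_a), label)
opt.zero_grad()
loss.backward()
opt.step()
```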
arXiv Detail & Related papers (2020-09-01T17:52:00Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
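A small sketch of pairwise training with the fidelity loss, assuming a Thurstone-style model with unit variance for the probability that one image beats the other; the scoring network is a toy stand-in.
```python
import torch

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1))  # toy BIQA scorer
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
normal = torch.distributions.Normal(0.0, 1.0)

def fidelity_loss(p_hat, p):
    """Fidelity loss between the binary ranking label p and the predicted probability p_hat."""
    return (1 - torch.sqrt(p * p_hat + 1e-8) - torch.sqrt((1 - p) * (1 - p_hat) + 1e-8)).mean()

# A training pair: label p = 1 means image A is deemed of higher quality than image B.
img_a, img_b = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
p = torch.ones(8, 1)
# Thurstone-style probability that A beats B (unit variance assumed for simplicity).
p_hat = normal.cdf((model(img_a) - model(img_b)) / (2 ** 0.5))
loss = fidelity_loss(p_hat, p)
opt.zero_grad()
loss.backward()
opt.step()
```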
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
- A Heteroscedastic Uncertainty Model for Decoupling Sources of MRI Image Quality [3.5480752735999417]
Quality control (QC) of medical images is essential to ensure that downstream analyses such as segmentation can be performed successfully.
We aim to automate the process by formulating a probabilistic network that estimates uncertainty through a heteroscedastic noise model.
We show that models trained with simulated artefacts provide informative measures of uncertainty on real-world images, and we validate our uncertainty predictions on problematic images identified by human raters.
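A minimal sketch of a heteroscedastic regression head, assuming the network predicts a target together with a per-sample log-variance and is trained with the Gaussian negative log-likelihood; the architecture and targets are placeholders.
```python
import torch

# Toy network predicting a quality-related target and a per-sample log-variance.
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(1 * 64 * 64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

def heteroscedastic_nll(pred, target):
    mean, log_var = pred[:, :1], pred[:, 1:]
    # Gaussian negative log-likelihood with input-dependent variance: samples the
    # model is unsure about may take a larger sigma, but pay a log-variance penalty.
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2 + 0.5 * log_var).mean()

x, y = torch.rand(8, 1, 64, 64), torch.rand(8, 1)   # images and targets (random stand-ins)
loss = heteroscedastic_nll(net(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```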
arXiv Detail & Related papers (2020-01-31T16:04:41Z)