Cross-Quality LFW: A Database for Analyzing Cross-Resolution Image Face
Recognition in Unconstrained Environments
- URL: http://arxiv.org/abs/2108.10290v2
- Date: Thu, 26 Aug 2021 08:05:36 GMT
- Title: Cross-Quality LFW: A Database for Analyzing Cross-Resolution Image Face
Recognition in Unconstrained Environments
- Authors: Martin Knoche, Stefan Hörmann, Gerhard Rigoll
- Abstract summary: Real-world face recognition applications often deal with suboptimal image quality or resolution due to different capturing conditions.
Recent cross-resolution face recognition approaches used simple, arbitrary, and unrealistic down- and up-scaling techniques to measure robustness against real-world edge-cases in image quality.
We propose a new standardized benchmark dataset and evaluation protocol derived from the famous Labeled Faces in the Wild.
- Score: 8.368543987898732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world face recognition applications often deal with suboptimal image
quality or resolution due to different capturing conditions such as various
subject-to-camera distances, poor camera settings, or motion blur. This
characteristic has an unignorable effect on performance. Recent
cross-resolution face recognition approaches used simple, arbitrary, and
unrealistic down- and up-scaling techniques to measure robustness against
real-world edge-cases in image quality. Thus, we propose a new standardized
benchmark dataset and evaluation protocol derived from the famous Labeled Faces
in the Wild (LFW). In contrast to previous derivatives, which focus on pose,
age, similarity, and adversarial attacks, our Cross-Quality Labeled Faces in
the Wild (XQLFW) maximizes the quality difference. It contains more realistic,
synthetically degraded images only where necessary. Our proposed dataset is
then used to further investigate the influence of image quality on several
state-of-the-art approaches. With XQLFW, we show that these models perform
differently in cross-quality cases, and hence, the generalizing capability is
not accurately predicted by their performance on LFW. Additionally, we report
baseline accuracy with recent deep learning models explicitly trained for
cross-resolution applications and evaluate the susceptibility to image quality.
To encourage further research in cross-resolution face recognition and incite
the assessment of image quality robustness, we publish the database and code
for evaluation.
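For illustration, the snippet below sketches how cross-quality verification accuracy can be computed on a list of image pairs with an off-the-shelf embedding model. The pair format, the get_embedding callable, and the single fixed threshold are simplifying assumptions, not the published XQLFW evaluation code.

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verification_accuracy(pairs, get_embedding, threshold):
        """Fraction of correctly verified cross-quality pairs.

        pairs: iterable of (img_high_quality, img_low_quality, is_same_person)
        get_embedding: callable mapping an image to a 1-D feature vector
        threshold: similarity above which a pair is accepted as 'same'
        """
        correct, total = 0, 0
        for img_hq, img_lq, is_same in pairs:
            sim = cosine_similarity(get_embedding(img_hq), get_embedding(img_lq))
            correct += int((sim >= threshold) == is_same)
            total += 1
        return correct / max(total, 1)

In practice, the verification threshold is typically selected via cross-validation folds rather than fixed in advance, as in the standard LFW protocol.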
Related papers
- Rank-based No-reference Quality Assessment for Face Swapping [88.53827937914038] (arXiv, 2024-06-04)
In most face swapping methods, quality is measured by several distances between the manipulated images and the source image.
We present a novel no-reference image quality assessment (NR-IQA) method specifically designed for face swapping.
- Descriptive Image Quality Assessment in the Wild [25.503311093471076] (arXiv, 2024-05-29)
VLM-based Image Quality Assessment (IQA) seeks to describe image quality linguistically to align with human expression.
We introduce Depicted image Quality Assessment in the Wild (DepictQA-Wild).
Our method includes a multi-functional IQA task paradigm that encompasses both assessment and comparison tasks, brief and detailed responses, full-reference and non-reference scenarios.
- Dual-Branch Network for Portrait Image Quality Assessment [76.27716058987251] (arXiv, 2024-05-14)
We introduce a dual-branch network for portrait image quality assessment (PIQA).
We utilize two backbone networks (i.e., Swin Transformer-B) to extract the quality-aware features from the entire portrait image and the facial image cropped from it.
We leverage LIQE, an image scene classification and quality assessment model, to capture the quality-aware and scene-specific features as the auxiliary features.
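As a rough sketch of the dual-branch idea, the code below fuses features from a whole-portrait branch and a face-crop branch, optionally concatenated with auxiliary scene/quality features, into a single predicted score. The generic backbone modules and the small regression head are placeholders standing in for the Swin Transformer-B encoders and LIQE features; this is not the authors' implementation.

    import torch
    import torch.nn as nn

    class DualBranchPIQA(nn.Module):
        def __init__(self, portrait_backbone, face_backbone, feat_dim, aux_dim=0):
            super().__init__()
            # Two feature extractors: one for the whole portrait, one for the face crop.
            self.portrait_backbone = portrait_backbone
            self.face_backbone = face_backbone
            # Regression head maps the concatenated features to a scalar quality score.
            self.head = nn.Sequential(
                nn.Linear(2 * feat_dim + aux_dim, 256),
                nn.ReLU(inplace=True),
                nn.Linear(256, 1),
            )

        def forward(self, portrait_img, face_img, aux_feats=None):
            f_portrait = self.portrait_backbone(portrait_img)    # (B, feat_dim)
            f_face = self.face_backbone(face_img)                 # (B, feat_dim)
            feats = [f_portrait, f_face]
            if aux_feats is not None:                             # e.g. scene/quality features
                feats.append(aux_feats)
            return self.head(torch.cat(feats, dim=1)).squeeze(1)  # (B,) quality scores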
- DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake Detection [67.3143177137102] (arXiv, 2023-12-07)
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
- FaceQAN: Face Image Quality Assessment Through Adversarial Noise Exploration [1.217503190366097] (arXiv, 2022-12-05)
We propose a novel approach to face image quality assessment, called FaceQAN, that is based on adversarial examples.
As such, the proposed approach is the first to link image quality to adversarial attacks.
Experimental results show that FaceQAN achieves competitive results, while exhibiting several desirable characteristics.
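The general idea of tying quality to adversarial robustness can be sketched as follows: perturb the input so that its embedding drifts away from the clean embedding, then read the remaining similarity as a quality proxy. The single FGSM-style step, the epsilon value, and the averaging over restarts are simplifications and do not reproduce the exact FaceQAN procedure.

    import torch
    import torch.nn.functional as F

    def adversarial_quality(model, image, epsilon=2 / 255, n_restarts=4):
        """Quality proxy: embedding stability under small adversarial perturbations.

        model: face recognition network mapping (1, C, H, W) images to embeddings
        image: tensor of shape (1, C, H, W) with values in [0, 1]
        """
        model.eval()
        with torch.no_grad():
            ref = F.normalize(model(image), dim=1)            # clean embedding

        sims = []
        for _ in range(n_restarts):
            # Start from a slightly jittered copy and take one FGSM-style step
            # that pushes the embedding away from the clean one.
            x = (image + 0.001 * torch.randn_like(image)).clamp(0, 1).requires_grad_(True)
            emb = F.normalize(model(x), dim=1)
            loss = 1.0 - F.cosine_similarity(emb, ref).mean()  # maximize embedding drift
            loss.backward()
            x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)
            with torch.no_grad():
                emb_adv = F.normalize(model(x_adv), dim=1)
                sims.append(F.cosine_similarity(emb_adv, ref).item())

        # High similarity under attack -> robust embedding -> higher estimated quality.
        return sum(sims) / len(sims)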
- AdaFace: Quality Adaptive Margin for Face Recognition [56.99208144386127] (arXiv, 2022-04-03)
We introduce another aspect of adaptiveness in the loss function, namely the image quality.
We propose a new loss function that emphasizes samples of different difficulties based on their image quality.
Our method, AdaFace, improves the face recognition performance over the state-of-the-art (SoTA) on four datasets.
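A simplified sketch of a quality-adaptive margin in this spirit is shown below, using the feature norm as a per-sample quality proxy; the batch-statistics handling and the constants only loosely follow the paper.

    import torch

    def quality_adaptive_margin_logits(cos_theta, target, feat_norms, m=0.4, h=0.333):
        """Adjust the target-class logit with a margin that depends on feature norm.

        cos_theta:  (B, num_classes) cosine similarities between features and class weights
        target:     (B,) ground-truth class indices
        feat_norms: (B,) L2 norms of the unnormalized features, used as a quality proxy
        """
        # Normalize feature norms within the batch and clip to [-1, 1]
        # (the paper tracks these statistics with exponential moving averages).
        norm_hat = (feat_norms - feat_norms.mean()) / (feat_norms.std() + 1e-6) / h
        norm_hat = norm_hat.clamp(-1.0, 1.0)

        g_angle = -m * norm_hat   # angular component of the margin, scaled by the quality proxy
        g_add = m * norm_hat + m  # additive component of the margin

        theta = torch.acos(cos_theta.clamp(-1 + 1e-7, 1 - 1e-7))
        idx = torch.arange(cos_theta.size(0), device=cos_theta.device)
        logits = cos_theta.clone()
        # Apply the adaptive margin only to the ground-truth class logit.
        logits[idx, target] = torch.cos(theta[idx, target] + g_angle) - g_add
        return logits  # feed into scaled softmax cross-entropy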
- QMagFace: Simple and Accurate Quality-Aware Face Recognition [5.5284501467256515] (arXiv, 2021-11-26)
We propose a simple and effective face recognition solution (QMagFace) that combines a quality-aware comparison score with a recognition model based on a magnitude-aware angular margin loss.
The experiments conducted on several face recognition databases and benchmarks demonstrate that the introduced quality-awareness leads to consistent improvements in the recognition performance.
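A generic way to fold per-image quality estimates into a pairwise comparison score is sketched below; the linear combination and the beta weight are purely illustrative and not the learned quality-weighting function used by QMagFace.

    import numpy as np

    def quality_aware_score(emb_a, emb_b, quality_a, quality_b, beta=0.1):
        """Pairwise comparison score adjusted by per-image quality estimates.

        emb_a, emb_b: embedding vectors
        quality_a, quality_b: scalar quality estimates in [0, 1]
                              (e.g. derived from feature magnitudes)
        beta: weight of the quality term (illustrative value, not from the paper)
        """
        emb_a = emb_a / np.linalg.norm(emb_a)
        emb_b = emb_b / np.linalg.norm(emb_b)
        similarity = float(np.dot(emb_a, emb_b))
        # Penalize pairs in which either image is of low estimated quality;
        # a pair is only as reliable as its worse image.
        return similarity + beta * (min(quality_a, quality_b) - 1.0)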
- Image Quality Assessment using Contrastive Learning [50.265638572116984] (arXiv, 2021-10-25)
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
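The shape of such a pairwise contrastive objective can be sketched as follows, treating images that share a distortion category as positives; CONTRIQUE's actual training setup (distortion classes and levels, multi-scale crops, projection head) is more involved.

    import torch
    import torch.nn.functional as F

    def group_contrastive_loss(features, group_labels, temperature=0.1):
        """Contrastive objective over a batch of image representations.

        features:     (B, D) representations from the CNN encoder
        group_labels: (B,) integer labels; images sharing a distortion
                      category count as positives for one another
        """
        z = F.normalize(features, dim=1)
        sim = z @ z.t() / temperature                        # (B, B) similarity logits
        B = z.size(0)
        eye = torch.eye(B, dtype=torch.bool, device=z.device)
        sim.masked_fill_(eye, float('-inf'))                 # exclude self-similarity

        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        pos_mask = (group_labels.unsqueeze(0) == group_labels.unsqueeze(1)) & ~eye

        # Average log-probability of positives for each anchor that has at least one positive.
        pos_counts = pos_mask.sum(dim=1).clamp(min=1)
        loss_per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
        valid = pos_mask.any(dim=1)
        return loss_per_anchor[valid].mean()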
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409] (arXiv, 2020-05-28)
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
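A minimal sketch of such pairwise training with a fidelity loss, assuming each image receives a predicted quality mean and an uncertainty, is given below; the exact parameterization in the paper may differ.

    import torch

    def fidelity_loss(mu_a, sigma_a, mu_b, sigma_b, p_gt, eps=1e-8):
        """Fidelity loss on an image pair (a, b) for learning-to-rank BIQA.

        mu_*, sigma_*: tensors with the predicted quality mean and uncertainty
        p_gt: ground-truth probability that image a is of higher quality than b
        """
        # Probability that a beats b under a Gaussian assumption on the score difference.
        normal = torch.distributions.Normal(0.0, 1.0)
        p_pred = normal.cdf((mu_a - mu_b) / torch.sqrt(sigma_a ** 2 + sigma_b ** 2 + eps))

        # Fidelity loss: close to 0 when predicted and ground-truth probabilities match.
        return 1.0 - torch.sqrt(p_pred * p_gt + eps) - torch.sqrt((1.0 - p_pred) * (1.0 - p_gt) + eps)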
- SER-FIQ: Unsupervised Estimation of Face Image Quality Based on Stochastic Embedding Robustness [15.431761867166] (arXiv, 2020-03-20)
We propose a novel concept to measure face quality based on an arbitrary face recognition model.
We compare our proposed solution on two face embeddings against six state-of-the-art approaches from academia and industry.
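The underlying idea can be sketched by running the same image through the network several times with dropout active and scoring the stability of the resulting embeddings; the aggregation below (a sigmoid of the negative mean pairwise distance) mirrors the concept but not necessarily the paper's exact constants.

    import torch
    import torch.nn.functional as F

    def serfiq_style_quality(model, image, n_passes=10):
        """Face quality from the stability of stochastic embeddings.

        model: face recognition network containing dropout layers
        image: tensor of shape (1, C, H, W)
        """
        # Keep dropout active at inference so each forward pass yields a slightly
        # different embedding (in practice only the dropout layers should be in train mode).
        model.train()
        with torch.no_grad():
            embs = [F.normalize(model(image), dim=1) for _ in range(n_passes)]
        embs = torch.cat(embs, dim=0)                      # (n_passes, D)

        # Mean pairwise Euclidean distance between the stochastic embeddings.
        dists = torch.cdist(embs, embs)                    # (n_passes, n_passes)
        n = embs.size(0)
        mean_dist = dists.sum() / (n * (n - 1))

        # Small variation between passes -> robust embedding -> high quality.
        return torch.sigmoid(-mean_dist).item()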
This list is automatically generated from the titles and abstracts of the papers on this site.