IdentiFace : A VGG Based Multimodal Facial Biometric System
- URL: http://arxiv.org/abs/2401.01227v2
- Date: Wed, 10 Jan 2024 12:13:20 GMT
- Title: IdentiFace : A VGG Based Multimodal Facial Biometric System
- Authors: Mahmoud Rabea, Hanya Ahmed, Sohaila Mahmoud and Nourhan Sayed
- Abstract summary: "IdentiFace" is a multimodal facial biometric system that combines the core of facial recognition with some of the most important soft biometric traits such as gender, face shape, and emotion.
For the recognition problem, we achieved a 99.2% test accuracy for five classes with high intra-class variations using data collected from the FERET database.
We were also able to achieve a testing accuracy of 88.03% in the face-shape problem using the celebrity face-shape dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The development of facial biometric systems has contributed greatly to the
advancement of the computer vision field. Nowadays, there is a continuing need to
develop multimodal systems that combine multiple biometric traits in an
efficient, meaningful way. In this paper, we introduce "IdentiFace", a
multimodal facial biometric system that combines the core of facial recognition
with some of the most important soft biometric traits such as gender, face
shape, and emotion. We also focused on developing the system using only a
VGG-16-inspired architecture with minor changes across the different subsystems.
This unification allows for simpler integration across modalities and makes it
easier to interpret the features learned across tasks, giving insight into the
decision-making process for each facial modality and the potential connections
between them. For the recognition problem, we achieved a 99.2% test
accuracy for five classes with high intra-class variations using data collected
from the FERET database[1]. We achieved 99.4% on our dataset and 95.15% on the
public dataset[2] in the gender recognition problem. We were also able to
achieve a testing accuracy of 88.03% in the face-shape problem using the
celebrity face-shape dataset[3]. Finally, we achieved a testing accuracy of
66.13% on the emotion task, which is acceptable compared to related work on the
FER2013 dataset[4].
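The abstract's central design choice is to reuse one VGG-16-inspired architecture, with minor changes, across the recognition, gender, face-shape, and emotion subsystems. The sketch below illustrates that idea in Keras; it is a minimal sketch under assumed settings, not the authors' implementation. The input size, layer widths, dropout rate, and the five face-shape classes are illustrative assumptions; the five recognition identities, two gender classes, and seven FER2013 emotion classes follow from the abstract and the datasets it cites.

```python
# Hypothetical sketch of a VGG-16-inspired backbone shared across subsystems.
# Layer widths, input size, dropout, and some class counts are illustrative
# assumptions, not the configuration reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def vgg_block(x, filters, convs):
    """Stack of 3x3 convolutions followed by 2x2 max pooling, VGG-style."""
    for _ in range(convs):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D(2)(x)

def build_subsystem(num_classes, name, input_shape=(128, 128, 3)):
    """One task-specific model built on the same truncated VGG-16-style trunk."""
    inputs = layers.Input(shape=input_shape)
    x = vgg_block(inputs, 64, 2)
    x = vgg_block(x, 128, 2)
    x = vgg_block(x, 256, 3)
    x = vgg_block(x, 512, 3)
    x = layers.Flatten()(x)
    x = layers.Dense(512, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs, name=name)

# Four subsystems sharing the same architecture, differing only in output size.
recognition = build_subsystem(num_classes=5, name="face_recognition")  # 5 identities (abstract)
gender      = build_subsystem(num_classes=2, name="gender")            # male / female
face_shape  = build_subsystem(num_classes=5, name="face_shape")        # assumed 5 shape classes
emotion     = build_subsystem(num_classes=7, name="emotion")           # 7 FER2013 emotions
```

Keeping the trunk identical across subsystems is what makes it straightforward to compare the features each task learns, which is the interpretability benefit the abstract points to.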
Related papers
- ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving [66.09976326184066]
ConsistentID is an innovative method crafted for diverse identity-preserving portrait generation under fine-grained multimodal facial prompts.
We present a fine-grained portrait dataset, FGID, with over 500,000 facial images, offering greater diversity and comprehensiveness than existing public facial datasets.
arXiv Detail & Related papers (2024-04-25T17:23:43Z)
- Facial Emotion Recognition Under Mask Coverage Using a Data Augmentation Technique [0.0]
We propose a facial emotion recognition system capable of recognizing emotions from individuals wearing different face masks.
We evaluated the effectiveness of four convolutional neural networks that were trained using transfer learning.
ResNet-50 demonstrated superior performance, with accuracies of 73.68% in the person-dependent mode and 59.57% in the person-independent mode.
arXiv Detail & Related papers (2023-12-03T09:50:46Z)
- SwinFace: A Multi-task Transformer for Face Recognition, Expression Recognition, Age Estimation and Attribute Estimation [60.94239810407917]
This paper presents a multi-purpose algorithm for simultaneous face recognition, facial expression recognition, age estimation, and face attribute estimation based on a single Swin Transformer.
To address the conflicts among multiple tasks, a Multi-Level Channel Attention (MLCA) module is integrated into each task-specific analysis.
Experiments show that the proposed model has a better understanding of the face and achieves excellent performance for all tasks.
arXiv Detail & Related papers (2023-08-22T15:38:39Z)
- One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations [59.17685450892182]
We investigate the behavior of deep representations in widely used CNN models under extreme data scarcity for One-Shot periocular recognition.
We improved state-of-the-art results that made use of networks trained with biometric datasets with millions of images.
Traditional algorithms like SIFT can outperform CNNs in situations with limited data.
arXiv Detail & Related papers (2023-07-11T09:10:16Z)
- FarSight: A Physics-Driven Whole-Body Biometric System at Large Distance and Altitude [67.55994773068191]
This paper presents the end-to-end design, development and evaluation of FarSight.
FarSight is an innovative software system designed for whole-body (fusion of face, gait and body shape) biometric recognition.
We test FarSight's effectiveness using the newly acquired IARPA Biometric Recognition and Identification at Altitude and Range dataset.
arXiv Detail & Related papers (2023-06-29T16:14:27Z)
- Analysis of Recent Trends in Face Recognition Systems [0.0]
Due to inter-class similarities and intra-class variations, face recognition systems generate false match and false non-match errors respectively.
Recent research focuses on improving the robustness of extracted features and the pre-processing algorithms to enhance recognition accuracy.
arXiv Detail & Related papers (2023-04-23T18:55:45Z)
- Facial Soft Biometrics for Recognition in the Wild: Recent Works, Annotation, and COTS Evaluation [63.05890836038913]
We study the role of soft biometrics to enhance person recognition systems in unconstrained scenarios.
We consider two assumptions: 1) manual estimation of soft biometrics and 2) automatic estimation from two commercial off-the-shelf systems.
Experiments are carried out fusing soft biometrics with two state-of-the-art face recognition systems based on deep learning.
arXiv Detail & Related papers (2022-10-24T11:29:57Z)
- Deep learning for identification and face, gender, expression recognition under constraints [1.2647816797166165]
A deep convolutional neural network (CNN) is used in this work to extract features from veiled-person face images.
The main objective of this work is to test the ability of a deep learning based automated computer system not only to identify persons but also to recognize gender, age, and facial expressions such as an eye smile.
arXiv Detail & Related papers (2021-11-02T22:45:09Z)
- Towards a Real-Time Facial Analysis System [13.649384403827359]
We present a system-level design of a real-time facial analysis system.
With a collection of deep neural networks for object detection, classification, and regression, the system recognizes age, gender, facial expression, and facial similarity for each person that appears in the camera view.
Results on common off-the-shelf architecture show that the system's accuracy is comparable to the state-of-the-art methods, and the recognition speed satisfies real-time requirements.
arXiv Detail & Related papers (2021-09-21T18:27:15Z)
- Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification [93.5538147928669]
We audit ArcFace, a state-of-the-art, open source face recognition system, in a large-scale face identification experiment with more than one million distractor images.
We find a Rank-1 face identification accuracy of 79.71% for individuals present in the model's training data and an accuracy of 75.73% for those not present.
arXiv Detail & Related papers (2020-01-09T15:50:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.