M^2 Deep-ID: A Novel Model for Multi-View Face Identification Using
Convolutional Deep Neural Networks
- URL: http://arxiv.org/abs/2001.07871v1
- Date: Wed, 22 Jan 2020 04:13:18 GMT
- Authors: Sara Shahsavarani, Morteza Analoui and Reza Shoja Ghiass
- Abstract summary: In this paper, we propose a new multi-view Deep Face Recognition (MVDFR) system to address the mentioned challenge.
In this context, multiple 2D images of each subject under different views are fed into the proposed deep neural network.
The experimental results indicate that our proposed method yields a 99.8% accuracy, while the state-of-the-art method achieves a 97% accuracy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite significant advances in Deep Face Recognition (DFR)
systems, introducing new DFRs under specific constraints, such as varying
pose, remains a major challenge. In particular, because of the 3D nature of
the human head, the facial appearance of the same subject exhibits high
intra-class variability when projected onto the camera image plane. In this
paper, we propose a new multi-view Deep Face Recognition (MVDFR) system to
address this challenge. Multiple 2D images of each subject, captured under
different views, are fed into the proposed deep neural network, whose unique
design re-expresses the facial features as a single, more compact face
descriptor, which in turn provides a more informative and abstract
representation for face identification using convolutional neural networks.
To extend our system to multi-view facial images, the gold-standard Deep-ID
model is modified in our proposed model. The experimental results indicate
that our proposed method yields 99.8% accuracy, while the state-of-the-art
method achieves 97% accuracy. To conduct our experiments, we also collected
the Iran University of Science and Technology (IUST) face database,
comprising 6552 images of 504 subjects.
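The abstract's central idea, fusing per-view embeddings into one compact descriptor, can be sketched as follows. This is a minimal illustration, not the paper's method: the function names, dimensions, and the fixed random projection are assumptions, whereas the actual MVDFR network learns this mapping end-to-end from multiple 2D views.

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    """Scale a vector to unit length (standard for face descriptors)."""
    return v / (np.linalg.norm(v) + eps)

def fuse_views(view_embeddings, proj):
    """Fuse per-view CNN embeddings into a single compact face descriptor.

    view_embeddings: (n_views, d_in) array, one embedding per 2D view
    proj: (n_views * d_in, d_out) projection matrix (learned in practice,
          random here purely for illustration)
    """
    stacked = view_embeddings.reshape(-1)   # concatenate all views
    return l2_normalize(stacked @ proj)     # compact, comparable descriptor

# Toy usage: 3 views with 128-D embeddings fused into a 64-D descriptor.
rng = np.random.default_rng(0)
views = rng.normal(size=(3, 128))
W = rng.normal(size=(3 * 128, 64))
desc = fuse_views(views, W)
print(desc.shape)  # (64,)
```

Because the output is L2-normalized, two such descriptors can be compared directly with cosine similarity at identification time.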
Related papers
- DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake
Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z)
- Physically-Based Face Rendering for NIR-VIS Face Recognition [165.54414962403555]
Near infrared (NIR) to Visible (VIS) face matching is challenging due to the significant domain gaps.
We propose a novel method for paired NIR-VIS facial image generation.
To facilitate the identity feature learning, we propose an IDentity-based Maximum Mean Discrepancy (ID-MMD) loss.
arXiv Detail & Related papers (2022-11-11T18:48:16Z)
- GMFIM: A Generative Mask-guided Facial Image Manipulation Model for Privacy Preservation [0.7734726150561088]
We propose a Generative Mask-guided Face Image Manipulation model based on GANs to apply imperceptible editing to the input face image.
Our model can achieve better performance against automated face recognition systems in comparison to the state-of-the-art methods.
arXiv Detail & Related papers (2022-01-10T14:09:14Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- Facial Expressions Recognition with Convolutional Neural Networks [0.0]
We implement a system for facial expression recognition (FER) using neural networks.
We demonstrate a state-of-the-art single-network accuracy of 70.10% on the FER2013 dataset without using any additional training data.
arXiv Detail & Related papers (2021-07-19T06:41:00Z)
- Towards NIR-VIS Masked Face Recognition [47.00916333095693]
Near-infrared to visible (NIR-VIS) face recognition is the most common case in heterogeneous face recognition.
We propose a novel training method to maximize the mutual information shared by the face representation of two domains.
In addition, a 3D face reconstruction based approach is employed to synthesize masked faces from existing NIR images.
arXiv Detail & Related papers (2021-04-14T10:40:09Z)
- FusiformNet: Extracting Discriminative Facial Features on Different Levels [0.0]
I propose FusiformNet, a novel framework for feature extraction that leverages the nature of discriminative facial features.
FusiformNet achieved a state-of-the-art accuracy of 96.67% without labeled outside data, image augmentation, normalization, or special loss functions.
Considering its ability to extract both general and local facial features, the utility of FusiformNet may not be limited to facial recognition but also extend to other DNN-based tasks.
arXiv Detail & Related papers (2020-11-01T18:00:59Z)
- The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a lightweight neural network with far fewer parameters than common deep neural networks.
We demonstrate how our model achieves a comparable, if not better, performance to the current state-of-the-art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
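One technique recurring in the list above, the ID-MMD loss of the NIR-VIS rendering paper, builds on the standard Maximum Mean Discrepancy for measuring the gap between two feature distributions. A minimal numpy sketch of plain MMD with an RBF kernel follows; the identity-based weighting that makes it "ID-MMD" is specific to that paper and not reproduced here, and the kernel bandwidth is an illustrative assumption.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between sample sets X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

# Toy usage: samples from the same distribution vs. a shifted one.
rng = np.random.default_rng(0)
a = rng.normal(size=(100, 8))
b = rng.normal(size=(100, 8))           # same distribution as a
c = rng.normal(loc=2.0, size=(100, 8))  # shifted distribution
print(mmd2(a, b) < mmd2(a, c))  # True: shifted samples are farther apart
```

Minimizing such a term between the feature distributions of two domains (e.g. NIR and VIS embeddings) pulls the domains together in feature space.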
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.