Towards NIR-VIS Masked Face Recognition
- URL: http://arxiv.org/abs/2104.06761v1
- Date: Wed, 14 Apr 2021 10:40:09 GMT
- Title: Towards NIR-VIS Masked Face Recognition
- Authors: Hang Du, Hailin Shi, Yinglu Liu, Dan Zeng, and Tao Mei
- Abstract summary: Near-infrared to visible (NIR-VIS) face recognition is the most common case in heterogeneous face recognition.
We propose a novel training method to maximize the mutual information shared by the face representations of the two domains.
In addition, a 3D face reconstruction based approach is employed to synthesize masked faces from existing NIR images.
- Score: 47.00916333095693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Near-infrared to visible (NIR-VIS) face recognition is the most common case
in heterogeneous face recognition, which aims to match a pair of face images
captured from two different modalities. Existing deep learning based methods
have made remarkable progress in NIR-VIS face recognition, yet the task
encounters newly emerged difficulties during the COVID-19 pandemic, since
people are required to wear facial masks to curb the spread of the virus. We
define this task as NIR-VIS masked face recognition, and find that the masked
face in the NIR probe image is particularly problematic. First, the lack of
masked face
data is a challenging issue for the network training. Second, most of the
facial parts (cheeks, mouth, nose, etc.) are fully occluded by the mask, which
leads to a substantial loss of information. Third, the domain gap still exists
in the remaining facial parts. In such a scenario, the existing methods
suffer from significant performance degradation caused by the above issues. In
this paper, we aim to address the challenge of NIR-VIS masked face recognition
from the perspectives of training data and training method. Specifically, we
propose a novel heterogeneous training method to maximize the mutual
information shared by the face representations of the two domains with the
help of semi-siamese networks. In addition, a 3D face reconstruction based
approach is employed to synthesize masked faces from existing NIR images.
Resorting to these practices, our solution provides a domain-invariant face
representation that is also robust to mask occlusion. Extensive experiments
on three
NIR-VIS face datasets demonstrate the effectiveness and
cross-dataset-generalization capacity of our method.
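As an illustration of the mutual-information-maximization idea in the abstract, the sketch below pairs two encoders with separate weights (a semi-siamese arrangement) for the NIR and VIS modalities and trains them with an InfoNCE-style contrastive loss, a standard lower bound on the mutual information between the two domains' face representations. The module names, backbone choice, and loss are assumptions for illustration, not the paper's exact formulation.
```python
# Minimal sketch, assuming a semi-siamese setup with separate NIR/VIS weights and
# an InfoNCE-style contrastive objective as the mutual-information lower bound.
# Names and the backbone choice are illustrative, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class SemiSiameseEncoders(nn.Module):
    """Two backbones with separate weights, one per modality (semi-siamese)."""
    def __init__(self, dim=256):
        super().__init__()
        self.nir_net = models.resnet18(weights=None)  # NIR probe branch
        self.vis_net = models.resnet18(weights=None)  # VIS gallery branch
        self.nir_net.fc = nn.Linear(512, dim)
        self.vis_net.fc = nn.Linear(512, dim)

    def forward(self, nir, vis):
        # L2-normalized embeddings for both modalities
        f_nir = F.normalize(self.nir_net(nir), dim=1)
        f_vis = F.normalize(self.vis_net(vis), dim=1)
        return f_nir, f_vis

def info_nce(f_nir, f_vis, tau=0.07):
    """Symmetric InfoNCE: same-identity NIR/VIS pairs are positives,
    all other pairs in the batch serve as negatives."""
    logits = f_nir @ f_vis.t() / tau                      # (B, B) similarities
    targets = torch.arange(f_nir.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Dummy batch of paired NIR/VIS crops (NIR replicated to 3 channels)
model = SemiSiameseEncoders()
nir = torch.randn(8, 3, 112, 112)
vis = torch.randn(8, 3, 112, 112)
loss = info_nce(*model(nir, vis))
loss.backward()
```
In practice, such a contrastive term would be combined with an identity-classification loss and applied to the mask-augmented NIR images described above.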
Related papers
- Seeing through the Mask: Multi-task Generative Mask Decoupling Face Recognition [47.248075664420874]
Current general face recognition systems suffer from serious performance degradation when encountering occluded scenes.
This paper proposes a Multi-task gEnerative mask dEcoupling face Recognition (MEER) network to jointly handle these two tasks.
We first present a novel mask decoupling module to disentangle mask and identity information, which makes the network obtain purer identity features from visible facial components.
arXiv Detail & Related papers (2023-11-20T03:23:03Z)
- Physically-Based Face Rendering for NIR-VIS Face Recognition [165.54414962403555]
Near infrared (NIR) to Visible (VIS) face matching is challenging due to the significant domain gaps.
We propose a novel method for paired NIR-VIS facial image generation.
To facilitate the identity feature learning, we propose an IDentity-based Maximum Mean Discrepancy (ID-MMD) loss.
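For intuition only: a maximum mean discrepancy (MMD) loss measures the distance between two feature distributions. The sketch below is a generic RBF-kernel MMD between NIR and VIS identity features, a simplification rather than the paper's exact ID-MMD formulation.
```python
# Illustrative only: a generic RBF-kernel MMD between NIR and VIS identity
# features; the paper's ID-MMD loss is a refinement of this basic idea.
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Squared MMD between feature batches x (N, D) and y (M, D)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2              # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Pull the NIR and VIS feature distributions together
nir_feats = torch.randn(32, 256)
vis_feats = torch.randn(32, 256)
loss = rbf_mmd(nir_feats, vis_feats)
```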
arXiv Detail & Related papers (2022-11-11T18:48:16Z)
- HiMFR: A Hybrid Masked Face Recognition Through Face Inpainting [0.7868449549351486]
We propose an end-to-end hybrid masked face recognition system, namely HiMFR.
The masked face detector module applies a pretrained Vision Transformer to detect whether faces are covered with a mask or not.
The inpainting module uses a fine-tuned image inpainting model based on a Generative Adversarial Network (GAN) to restore faces.
Finally, the hybrid face recognition module based on ViT with an EfficientNetB3 backbone recognizes the faces.
arXiv Detail & Related papers (2022-09-19T11:26:49Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and to clean them with dynamically learned masks.
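The sketch below illustrates the general feature-masking idea in a few lines of PyTorch: a small head predicts a soft mask over the deep feature map and multiplies it in to suppress corrupted responses. It is a hypothetical simplification, not the authors' FROM architecture.
```python
# Hypothetical simplification of feature masking (not the authors' FROM network):
# a 1x1 convolutional head predicts a soft mask over the deep feature map, and
# the mask is multiplied in to suppress occlusion-corrupted responses.
import torch
import torch.nn as nn

class FeatureMasking(nn.Module):
    def __init__(self, channels=512):
        super().__init__()
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),                 # soft mask values in (0, 1)
        )

    def forward(self, feat):
        mask = self.mask_head(feat)
        return feat * mask                # cleaned features

feat = torch.randn(4, 512, 7, 7)          # deep features of occluded faces
cleaned = FeatureMasking()(feat)
```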
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- Efficient Masked Face Recognition Method during the COVID-19 Pandemic [4.13365552362244]
The coronavirus disease (COVID-19) is an unparalleled crisis, leading to a huge number of casualties and security problems.
In order to reduce the spread of coronavirus, people often wear masks to protect themselves.
This makes face recognition a very difficult task since certain parts of the face are hidden.
arXiv Detail & Related papers (2021-05-07T01:32:37Z)
- Unmasking Face Embeddings by Self-restrained Triplet Loss for Accurate Masked Face Recognition [6.865656740940772]
We present a solution to improve the masked face recognition performance.
Specifically, we propose the Embedding Unmasking Model (EUM) operated on top of existing face recognition models.
We also propose a novel loss function, the Self-restrained Triplet (SRT), which enables the EUM to produce embeddings similar to those of unmasked faces of the same identities.
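The exact SRT formulation is not reproduced here; as a rough stand-in, the sketch below applies an ordinary triplet margin loss in the same spirit, with the masked-face embedding as anchor, the unmasked embedding of the same identity as positive, and another identity as negative.
```python
# Stand-in example: a plain triplet margin loss, not the actual SRT loss. The
# masked-face embedding (EUM output) is the anchor, the unmasked embedding of
# the same identity is the positive, and another identity is the negative.
import torch
import torch.nn.functional as F

anchor   = torch.randn(16, 512)   # EUM outputs for masked faces (illustrative)
positive = torch.randn(16, 512)   # unmasked embeddings, same identities
negative = torch.randn(16, 512)   # unmasked embeddings, other identities
loss = F.triplet_margin_loss(anchor, positive, negative, margin=0.5)
```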
arXiv Detail & Related papers (2021-03-02T13:43:11Z)
- Face Hallucination via Split-Attention in Split-Attention Network [58.30436379218425]
Convolutional neural networks (CNNs) have been widely employed to promote face hallucination.
We propose a novel external-internal split attention group (ESAG) to take into account the overall facial profile and fine texture details simultaneously.
By fusing the features from these two paths, the consistency of facial structure and the fidelity of facial details are strengthened.
arXiv Detail & Related papers (2020-10-22T10:09:31Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN)
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)