Seeing through the Mask: Multi-task Generative Mask Decoupling Face
Recognition
- URL: http://arxiv.org/abs/2311.11512v1
- Date: Mon, 20 Nov 2023 03:23:03 GMT
- Title: Seeing through the Mask: Multi-task Generative Mask Decoupling Face
Recognition
- Authors: Zhaohui Wang, Sufang Zhang, Jianteng Peng, Xinyi Wang, Yandong Guo
- Abstract summary: Current general face recognition systems suffer from serious performance degradation when encountering occluded scenes.
This paper proposes a Multi-task gEnerative mask dEcoupling face Recognition (MEER) network to jointly handle these two tasks.
We first present a novel mask decoupling module to disentangle mask and identity information, which makes the network obtain purer identity features from visible facial components.
- Score: 47.248075664420874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The outbreak of the COVID-19 pandemic has made people wear masks more
frequently than ever. Current general face recognition systems suffer from serious
performance degradation when encountering occluded scenes. The potential reason is
that face features are corrupted by occlusions on key facial regions. To tackle this
problem, previous works either extract identity-related embeddings at the feature
level via additional mask prediction, or restore the occluded facial part with
generative models. However, the former lacks visual results for model
interpretation, while the latter suffers from artifacts that may affect downstream
recognition. Therefore, this paper proposes a Multi-task gEnerative mask dEcoupling
face Recognition (MEER) network to jointly handle these two tasks, which can learn
occlusion-irrelevant and identity-related representations while achieving unmasked
face synthesis. We first present a novel mask decoupling module to disentangle mask
and identity information, which makes the network obtain purer identity features
from visible facial components. Then, an unmasked face is restored by a
joint-training strategy, which is further used to refine the recognition network
with an id-preserving loss. Experiments on masked face recognition benchmarks under
realistic and synthetic occlusions demonstrate that MEER outperforms the
state-of-the-art methods.
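The abstract outlines three cooperating signals: an identity objective on decoupled features, unmasked face synthesis through joint training, and an id-preserving loss that feeds the synthesized face back into the recognition network. The PyTorch sketch below is a minimal schematic of how such a joint objective could be wired together; the toy encoder/decoder, the cosine-based decoupling surrogate, and the loss weights are illustrative assumptions, not the authors' MEER implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMaskDecouplingNet(nn.Module):
    """Schematic encoder that splits features into identity and mask branches."""
    def __init__(self, feat_dim=128, num_ids=1000):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.id_head = nn.Linear(64, feat_dim)     # identity-related branch
        self.mask_head = nn.Linear(64, feat_dim)   # occlusion-related branch
        self.classifier = nn.Linear(feat_dim, num_ids)
        # Tiny decoder that synthesizes an "unmasked" face from identity features.
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 3 * 32 * 32), nn.Sigmoid())

    def forward(self, x):
        shared = self.backbone(x)
        f_id, f_mask = self.id_head(shared), self.mask_head(shared)
        logits = self.classifier(f_id)
        recon = self.decoder(f_id).view(-1, 3, 32, 32)
        return f_id, f_mask, logits, recon


def meer_style_loss(model, masked_img, clean_img, labels,
                    w_rec=1.0, w_dec=0.1, w_idp=0.5):
    """Joint objective: identity CE + reconstruction + decoupling + id-preserving."""
    f_id, f_mask, logits, recon = model(masked_img)
    loss_id = F.cross_entropy(logits, labels)                 # recognition
    loss_rec = F.l1_loss(recon, clean_img)                    # unmasked synthesis
    # Decoupling surrogate (assumption): discourage correlation between branches.
    loss_dec = F.cosine_similarity(f_id, f_mask, dim=1).abs().mean()
    # Id-preserving surrogate (assumption): identity features of the synthesized
    # face should match those extracted from the masked input.
    f_id_syn, *_ = model(F.interpolate(recon, size=masked_img.shape[-2:]))
    loss_idp = 1.0 - F.cosine_similarity(f_id_syn, f_id.detach(), dim=1).mean()
    return loss_id + w_rec * loss_rec + w_dec * loss_dec + w_idp * loss_idp


if __name__ == "__main__":
    model = ToyMaskDecouplingNet()
    masked = torch.rand(4, 3, 64, 64)
    clean = torch.rand(4, 3, 32, 32)
    labels = torch.randint(0, 1000, (4,))
    loss = meer_style_loss(model, masked, clean, labels)
    loss.backward()
    print(float(loss))
```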
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense techniques have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- A Unified Framework for Masked and Mask-Free Face Recognition via Feature Rectification [19.417191498842044]
We propose a unified framework, named Face Feature Rectification Network (FFR-Net), for recognizing both masked and mask-free faces.
We introduce rectification blocks to rectify features extracted by a state-of-the-art recognition model, in both spatial and channel dimensions.
Experiments show that our framework can learn a rectified feature space for recognizing both masked and mask-free faces effectively.
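FFR-Net is summarized as rectifying features in both spatial and channel dimensions. A common way to realize that kind of rectification is a squeeze-and-excitation style channel gate combined with a convolutional spatial gate; the block below is a generic sketch in that spirit, not the paper's actual rectification block.

```python
import torch
import torch.nn as nn

class RectificationBlock(nn.Module):
    """Generic spatial + channel rectification of a feature map (illustrative only)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        # Channel rectification: squeeze-and-excitation style gating.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
        # Spatial rectification: 1-channel attention map over locations.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, feat):
        b, c, _, _ = feat.shape
        feat = feat * self.channel_gate(feat).view(b, c, 1, 1)  # reweight channels
        feat = feat * self.spatial_gate(feat)                   # reweight locations
        return feat


if __name__ == "__main__":
    block = RectificationBlock(channels=64)
    print(block(torch.rand(2, 64, 14, 14)).shape)  # torch.Size([2, 64, 14, 14])
```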
arXiv Detail & Related papers (2022-02-15T12:37:59Z)
- MaskMTL: Attribute prediction in masked facial images with deep multitask learning [9.91045425400833]
This paper presents a deep Multi-Task Learning (MTL) approach to jointly estimate various heterogeneous attributes from a single masked facial image.
The proposed approach outperforms other competing techniques.
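Jointly estimating heterogeneous attributes from a single masked face is typically implemented as a shared backbone with one lightweight head per attribute and a summed loss. The sketch below illustrates that pattern with made-up attribute heads (gender, age group, emotion); the attribute set and architecture are assumptions, not the MaskMTL design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMaskedAttributeMTL(nn.Module):
    """Shared backbone with per-attribute heads (illustrative attribute set)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Hypothetical heterogeneous attribute heads.
        self.heads = nn.ModuleDict({
            "gender": nn.Linear(32, 2),
            "age_group": nn.Linear(32, 5),
            "emotion": nn.Linear(32, 7),
        })

    def forward(self, x):
        shared = self.backbone(x)
        return {name: head(shared) for name, head in self.heads.items()}


def mtl_loss(outputs, targets):
    """Sum of per-task cross-entropy losses; per-task weights could be added."""
    return sum(F.cross_entropy(outputs[k], targets[k]) for k in outputs)


if __name__ == "__main__":
    model = ToyMaskedAttributeMTL()
    imgs = torch.rand(4, 3, 64, 64)
    targets = {"gender": torch.randint(0, 2, (4,)),
               "age_group": torch.randint(0, 5, (4,)),
               "emotion": torch.randint(0, 7, (4,))}
    print(float(mtl_loss(model(imgs), targets)))
```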
arXiv Detail & Related papers (2022-01-09T13:03:29Z)
- Mask-invariant Face Recognition through Template-level Knowledge Distillation [3.727773051465455]
Face masks degrade the performance of existing face recognition systems.
We propose a mask-invariant face recognition solution (MaskInv).
In addition to the distilled knowledge, the student network benefits from additional guidance by margin-based identity classification loss.
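Template-level distillation here means the student, fed masked faces, is trained to reproduce the embeddings a teacher produces for the corresponding unmasked faces, alongside a margin-based identity classification loss. The sketch below combines a mean-squared distillation term with a CosFace-style margin term as a stand-in; the margin and scale values are illustrative, and the exact losses used in MaskInv may differ.

```python
import torch
import torch.nn.functional as F

def template_distillation_loss(student_emb, teacher_emb):
    """Pull the student's masked-face embedding toward the teacher's
    unmasked-face embedding for the same sample (template-level KD)."""
    return F.mse_loss(student_emb, teacher_emb.detach())

def cosface_margin_loss(emb, weight, labels, s=64.0, m=0.35):
    """Margin-based identity classification (CosFace-style, illustrative values)."""
    cos = F.linear(F.normalize(emb), F.normalize(weight))   # cosine logits
    target_cos = cos.gather(1, labels.unsqueeze(1)) - m     # subtract margin at target
    logits = s * cos.scatter(1, labels.unsqueeze(1), target_cos)
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    emb_s = torch.randn(8, 128, requires_grad=True)   # student (masked input)
    emb_t = torch.randn(8, 128)                       # teacher (unmasked input)
    w = torch.randn(1000, 128, requires_grad=True)    # identity prototypes
    y = torch.randint(0, 1000, (8,))
    loss = template_distillation_loss(emb_s, emb_t) + cosface_margin_loss(emb_s, w, y)
    loss.backward()
    print(float(loss))
```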
arXiv Detail & Related papers (2021-12-10T16:19:28Z)
- MLFW: A Database for Face Recognition on Masked Faces [56.441078419992046]
Masked LFW (MLFW) is a tool to generate masked faces from unmasked faces automatically.
The recognition accuracy of SOTA models declines by 5%-16% on the MLFW database compared with the accuracy on the original images.
arXiv Detail & Related papers (2021-09-13T09:30:10Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover corrupted features in deep convolutional neural networks and clean them with dynamically learned masks.
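The mechanism described, discovering corrupted feature entries and cleaning them with dynamically learned masks, can be illustrated by a small sub-network that predicts a soft mask over a feature map and multiplies it in. The sketch below shows only that mechanism, with assumed shapes; it is not the FROM architecture.

```python
import torch
import torch.nn as nn

class FeatureMaskCleaner(nn.Module):
    """Predict a soft mask over feature maps and suppress (presumably corrupted) entries."""
    def __init__(self, channels):
        super().__init__()
        self.mask_decoder = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels // 2, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feat):
        mask = self.mask_decoder(feat)   # values near 0 down-weight corrupted entries
        return feat * mask, mask


if __name__ == "__main__":
    cleaner = FeatureMaskCleaner(channels=64)
    feat = torch.rand(2, 64, 7, 7)       # e.g. a mid-level CNN feature map
    cleaned, mask = cleaner(feat)
    print(cleaned.shape, mask.mean().item())
```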
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- Towards NIR-VIS Masked Face Recognition [47.00916333095693]
Near-infrared to visible (NIR-VIS) face recognition is the most common case in heterogeneous face recognition.
We propose a novel training method to maximize the mutual information shared by the face representation of two domains.
In addition, a 3D face reconstruction based approach is employed to synthesize masked faces from existing NIR images.
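Maximizing the mutual information shared by NIR and VIS face representations is, at its simplest, an alignment objective between the two domains' embeddings of the same identity. The sketch below uses a cosine-similarity surrogate rather than an actual mutual-information estimator, so it should be read only as an illustration of the training signal, not the paper's method.

```python
import torch
import torch.nn.functional as F

def cross_domain_alignment_loss(nir_emb, vis_emb):
    """Pull NIR and VIS embeddings of the same identity together
    (a simple surrogate for maximizing shared information)."""
    nir = F.normalize(nir_emb, dim=1)
    vis = F.normalize(vis_emb, dim=1)
    return (1.0 - (nir * vis).sum(dim=1)).mean()

if __name__ == "__main__":
    nir = torch.randn(16, 256, requires_grad=True)   # embeddings of NIR images
    vis = torch.randn(16, 256, requires_grad=True)   # embeddings of paired VIS images
    loss = cross_domain_alignment_loss(nir, vis)
    loss.backward()
    print(float(loss))
```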
arXiv Detail & Related papers (2021-04-14T10:40:09Z)
- Unmasking Face Embeddings by Self-restrained Triplet Loss for Accurate Masked Face Recognition [6.865656740940772]
We present a solution to improve the masked face recognition performance.
Specifically, we propose the Embedding Unmasking Model (EUM) operated on top of existing face recognition models.
We also propose a novel loss function, the Self-restrained Triplet (SRT) loss, which enables the EUM to produce embeddings similar to those of unmasked faces of the same identities.
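The EUM operates on top of an existing recognition model's embeddings and is trained so that its output for a masked face resembles the unmasked embedding of the same identity. The sketch below puts a small MLP in that role with a plain triplet-style objective standing in for the Self-restrained Triplet loss, whose exact formulation is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingUnmaskingMLP(nn.Module):
    """Small network applied on top of a frozen recognition model's embeddings."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, masked_emb):
        return self.net(masked_emb)


def triplet_toward_unmasked(out, unmasked_pos, unmasked_neg, margin=0.2):
    """Triplet-style stand-in: output should be closer to the unmasked embedding
    of the same identity than to one of a different identity."""
    d_pos = 1.0 - F.cosine_similarity(out, unmasked_pos, dim=1)
    d_neg = 1.0 - F.cosine_similarity(out, unmasked_neg, dim=1)
    return F.relu(d_pos - d_neg + margin).mean()


if __name__ == "__main__":
    eum = EmbeddingUnmaskingMLP(dim=512)
    masked = torch.randn(8, 512)   # embeddings of masked faces
    pos = torch.randn(8, 512)      # unmasked embeddings, same identities
    neg = torch.randn(8, 512)      # unmasked embeddings, other identities
    loss = triplet_toward_unmasked(eum(masked), pos, neg)
    loss.backward()
    print(float(loss))
```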
arXiv Detail & Related papers (2021-03-02T13:43:11Z)
- Face Hallucination via Split-Attention in Split-Attention Network [58.30436379218425]
Convolutional neural networks (CNNs) have been widely employed to promote face hallucination.
We propose a novel external-internal split attention group (ESAG) to take into account the overall facial profile and fine texture details simultaneously.
By fusing the features from these two paths, the consistency of facial structure and the fidelity of facial details are strengthened.
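The summary describes two paths, one covering the overall facial profile and one covering fine texture details, whose features are fused. The sketch below wires up a generic two-path block with different receptive fields and a simple additive fusion; it is a schematic, not the ESAG design.

```python
import torch
import torch.nn as nn

class TwoPathFusionBlock(nn.Module):
    """Two parallel paths (coarse profile vs. fine texture) fused by addition."""
    def __init__(self, channels):
        super().__init__()
        # Wide receptive field: coarse facial structure.
        self.profile_path = nn.Sequential(
            nn.Conv2d(channels, channels, 5, padding=2), nn.ReLU(),
        )
        # Narrow receptive field: fine texture details.
        self.texture_path = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, feat):
        return self.fuse(self.profile_path(feat) + self.texture_path(feat)) + feat


if __name__ == "__main__":
    block = TwoPathFusionBlock(channels=32)
    print(block(torch.rand(2, 32, 16, 16)).shape)  # torch.Size([2, 32, 16, 16])
```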
arXiv Detail & Related papers (2020-10-22T10:09:31Z)