Occlusion-Adaptive Deep Network for Robust Facial Expression Recognition
- URL: http://arxiv.org/abs/2005.06040v1
- Date: Tue, 12 May 2020 20:42:55 GMT
- Title: Occlusion-Adaptive Deep Network for Robust Facial Expression Recognition
- Authors: Hui Ding, Peng Zhou, and Rama Chellappa
- Abstract summary: We propose a landmark-guided attention branch to find and discard corrupted features from occluded regions.
An attention map is first generated to indicate whether a specific facial part is occluded and to guide the model to attend to non-occluded regions.
This results in more diverse and discriminative features, enabling the expression recognition system to recover even though the face is partially occluded.
- Score: 56.11054589916299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recognizing the expressions of partially occluded faces is a challenging
computer vision problem. Previous expression recognition methods either
overlooked this issue or resolved it under extreme assumptions. Motivated by
the fact that the human visual system is adept at ignoring occlusions and
focusing on non-occluded facial areas, we propose a landmark-guided attention
branch to find and discard corrupted features from occluded regions so that
they are not used for recognition. An attention map is first generated to
indicate whether a specific facial part is occluded, guiding our model to
attend to non-occluded regions. To further improve robustness, we propose a facial region
branch to partition the feature maps into non-overlapping facial blocks and
task each block to predict the expression independently. This results in more
diverse and discriminative features, enabling the expression recognition system
to recover even though the face is partially occluded. Owing to the
synergistic effects of the two branches, our occlusion-adaptive deep network
significantly outperforms state-of-the-art methods on two challenging
in-the-wild benchmark datasets and three real-world occluded expression
datasets.
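As a rough illustration only (not the authors' implementation, and with all shapes, the random attention map, and the linear classifier invented for this sketch), the two-branch idea can be expressed in NumPy: an occlusion-derived attention map gates the feature map so corrupted features contribute little, and non-overlapping spatial blocks each produce an independent expression prediction that is then fused.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: an 8x8 feature map with 16 channels, 7 expression classes.
H = W = 8
C, NUM_CLASSES = 16, 7
features = rng.standard_normal((C, H, W))

# Attention branch (sketch): one occlusion score per spatial location.
# In the paper this is landmark-guided; here it is random for illustration.
attention = rng.uniform(size=(H, W))          # ~1 = visible, ~0 = occluded
gated = features * attention                  # down-weight corrupted features

# Region branch (sketch): partition into non-overlapping 4x4 blocks and
# let each block predict the expression independently.
classifier = rng.standard_normal((C, NUM_CLASSES)) * 0.1  # shared toy classifier
block_logits = []
for i in range(0, H, 4):
    for j in range(0, W, 4):
        block = gated[:, i:i + 4, j:j + 4].mean(axis=(1, 2))  # pooled block feature
        block_logits.append(block @ classifier)

# Fuse the per-block predictions (here: simple averaging).
logits = np.mean(block_logits, axis=0)
pred = int(np.argmax(logits))
```

Because each block votes independently, a heavily occluded block can be outvoted by visible ones, which is the intuition behind the claimed robustness.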
Related papers
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z) - Attention-based Partial Face Recognition [6.815997591230765]
We propose a novel approach to partial face recognition capable of recognizing faces with different occluded areas.
We achieve this by combining attentional pooling of a ResNet's intermediate feature maps with a separate aggregation module.
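A minimal sketch of attentional pooling, under assumed shapes and a random scoring projection (neither is from the paper): a learned 1x1 projection scores each spatial location of an intermediate feature map, a softmax over locations turns the scores into pooling weights, and the weighted sum replaces plain average pooling so visible regions can dominate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical intermediate feature map (channels x height x width).
C, H, W = 32, 7, 7
feat = rng.standard_normal((C, H, W))

# Scoring projection (sketch): a learned per-channel weight vector
# collapses the channel axis into one score per spatial location.
score_w = rng.standard_normal(C) * 0.1
scores = np.einsum('c,chw->hw', score_w, feat)

# Softmax over all H*W locations yields the pooling weights.
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Attention-weighted pooling: a weighted sum instead of a plain average.
pooled = np.einsum('chw,hw->c', feat, weights)
```

With uniform weights this reduces to average pooling; learned weights let the network discount occluded locations when forming the face descriptor.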
Our thorough analysis demonstrates that we outperform all baselines under multiple benchmark protocols.
arXiv Detail & Related papers (2021-06-11T14:16:06Z) - Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z) - Face Hallucination via Split-Attention in Split-Attention Network [58.30436379218425]
Convolutional neural networks (CNNs) have been widely employed for face hallucination.
We propose a novel external-internal split attention group (ESAG) to take into account the overall facial profile and fine texture details simultaneously.
By fusing the features from these two paths, the consistency of facial structure and the fidelity of facial details are strengthened.
arXiv Detail & Related papers (2020-10-22T10:09:31Z) - A survey of face recognition techniques under occlusion [4.10247419557141]
Handling occlusion is imperative to exploiting the full potential of face recognition in real-world applications.
We present how existing face recognition methods cope with the occlusion problem and classify them into three categories.
We analyze the motivations, innovations, pros and cons, and the performance of representative approaches for comparison.
arXiv Detail & Related papers (2020-06-19T20:44:02Z) - SD-GAN: Structural and Denoising GAN reveals facial parts under occlusion [7.284661356980246]
We propose a generative model to reconstruct the missing parts of the face which are under occlusion.
A novel adversarial training algorithm has been designed for a bimodal mutually exclusive Generative Adversarial Network (GAN) model.
Our proposed technique outperforms competing methods by a considerable margin and also boosts face recognition performance.
arXiv Detail & Related papers (2020-02-19T21:12:49Z) - Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z) - Lossless Attention in Convolutional Networks for Facial Expression Recognition in the Wild [26.10189921938026]
We propose a Lossless Attention Model (LLAM) for convolutional neural networks (CNN) to extract attention-aware features from faces.
We participate in the seven basic expression classification sub-challenge of the FG-2020 Affective Behavior Analysis in-the-wild Challenge and validate our method on the Aff-Wild2 dataset released by the Challenge.
arXiv Detail & Related papers (2020-01-31T14:38:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.