AU-Expression Knowledge Constrained Representation Learning for Facial
Expression Recognition
- URL: http://arxiv.org/abs/2012.14587v2
- Date: Fri, 2 Apr 2021 07:00:09 GMT
- Title: AU-Expression Knowledge Constrained Representation Learning for Facial
Expression Recognition
- Authors: Tao Pu, Tianshui Chen, Yuan Xie, Hefeng Wu, and Liang Lin
- Abstract summary: We propose an AU-Expression Knowledge Constrained Representation Learning (AUE-CRL) framework that learns AU representations without AU annotations and adaptively uses them to facilitate facial expression recognition.
We conduct experiments on challenging uncontrolled datasets to demonstrate the superiority of the proposed framework over current state-of-the-art methods.
- Score: 79.8779790682205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatically recognizing human emotions and expressions is an
expected capability of intelligent robots, as it can promote better
communication and cooperation with humans. Current deep-learning-based
algorithms may achieve impressive performance in lab-controlled environments,
but they often fail to recognize expressions accurately in uncontrolled,
in-the-wild situations. Fortunately, facial action units (AUs) describe subtle
facial behaviors, and they can help distinguish uncertain and ambiguous
expressions.
In this work, we explore the correlations among action units and facial
expressions, and devise an AU-Expression Knowledge Constrained Representation
Learning (AUE-CRL) framework to learn AU representations without AU
annotations and adaptively use these representations to facilitate facial
expression recognition. Specifically, it leverages AU-expression correlations
to guide the learning of the AU classifiers, and thus it can obtain AU
representations without requiring any AU annotations. Then, it introduces a
knowledge-guided attention mechanism that mines useful AU representations
under the constraint of AU-expression correlations. In this way, the framework
can capture locally discriminative and complementary features to enhance the
facial representation for facial expression recognition. We conduct
experiments on challenging uncontrolled datasets to demonstrate the
superiority of the proposed framework over current state-of-the-art methods.
Code and trained models are available at
https://github.com/HCPLab-SYSU/AUE-CRL.
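To make the knowledge-guided attention idea concrete, the following is a minimal PyTorch sketch in which per-AU features are reweighted by an expression-AU correlation prior. The tensor shapes, the multiplicative gating, and the name knowledge_guided_attention are illustrative assumptions, not the authors' released implementation; see the repository above for the actual code.

    import torch
    import torch.nn.functional as F

    NUM_EXPR, NUM_AU, FEAT_DIM = 7, 12, 256  # assumed sizes

    def knowledge_guided_attention(au_feats, au_probs, prior, expr_logits):
        """Fuse per-AU features, weighting each AU by its relevance to the
        currently predicted expression under a correlation prior.

        au_feats:    (B, NUM_AU, FEAT_DIM) local features, one per AU region
        au_probs:    (B, NUM_AU) activations from weakly supervised AU classifiers
        prior:       (NUM_EXPR, NUM_AU) expression-AU correlation prior
        expr_logits: (B, NUM_EXPR) current expression predictions
        """
        expr_post = F.softmax(expr_logits, dim=1)          # (B, NUM_EXPR)
        relevance = expr_post @ prior                      # (B, NUM_AU) expected AU relevance
        attn = F.softmax(au_probs * relevance, dim=1)      # prior-constrained attention
        return (attn.unsqueeze(-1) * au_feats).sum(dim=1)  # (B, FEAT_DIM) fused feature

    # Toy usage with random tensors.
    B = 4
    fused = knowledge_guided_attention(
        torch.randn(B, NUM_AU, FEAT_DIM),
        torch.rand(B, NUM_AU),
        torch.rand(NUM_EXPR, NUM_AU),
        torch.randn(B, NUM_EXPR),
    )
    print(fused.shape)  # torch.Size([4, 256])

The multiplicative gating is one simple way to realize the constraint: attention mass can only concentrate on AUs that both fire and are correlated with the predicted expression.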
Related papers
- Spatial Action Unit Cues for Interpretable Deep Facial Expression Recognition [55.97779732051921]
State-of-the-art classifiers for facial expression recognition (FER) lack interpretability, an important feature for end-users.
A new learning strategy is proposed to explicitly incorporate AU cues into classifier training, enabling the training of deep interpretable models.
Our new strategy is generic, and can be applied to any deep CNN- or transformer-based classifier without requiring any architectural change or significant additional training time.
arXiv Detail & Related papers (2024-10-01T10:42:55Z)
- Towards End-to-End Explainable Facial Action Unit Recognition via Vision-Language Joint Learning [48.70249675019288]
We propose an end-to-end Vision-Language joint learning network for explainable facial action unit (AU) recognition.
The proposed approach achieves superior performance over the state-of-the-art methods on most metrics.
arXiv Detail & Related papers (2024-08-01T15:35:44Z)
- Emotic Masked Autoencoder with Attention Fusion for Facial Expression Recognition [1.4374467687356276]
This paper presents an approach that integrates the MAE-Face self-supervised learning (SSL) method with a multi-view Fusion Attention mechanism for expression classification.
We suggest easy-to-implement, training-free frameworks aimed at highlighting key facial features to determine whether such features can serve as guides for the model.
The efficacy of this method is validated by improvements in model performance on the Aff-wild2 dataset.
arXiv Detail & Related papers (2024-03-19T16:21:47Z)
- Contrastive Learning of Person-independent Representations for Facial Action Unit Detection [70.60587475492065]
We formulate the self-supervised AU representation learning signals in two ways.
We contrastively learn AU representations within a video clip and devise a cross-identity reconstruction mechanism to learn person-independent representations (a rough sketch of a within-clip contrastive loss appears after this list).
Our method outperforms other contrastive learning methods and significantly closes the performance gap between the self-supervised and supervised AU detection approaches.
arXiv Detail & Related papers (2024-03-06T01:49:28Z)
- Guided Interpretable Facial Expression Recognition via Spatial Action Unit Cues [55.97779732051921]
A new learning strategy is proposed to explicitly incorporate AU cues into classifier training.
We show that our strategy can improve layer-wise interpretability without degrading classification performance.
arXiv Detail & Related papers (2024-02-01T02:13:49Z)
- Global-to-local Expression-aware Embeddings for Facial Action Unit Detection [18.629509376315752]
We propose a novel fine-grained Global Expression representation to capture subtle and continuous facial movements.
It consists of an AU feature map extractor and a corresponding AU mask extractor.
Our method outperforms previous works and achieves state-of-the-art performance on widely used face datasets.
arXiv Detail & Related papers (2022-10-27T04:00:04Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance on six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Exploring Adversarial Learning for Deep Semi-Supervised Facial Action Unit Recognition [38.589141957375226]
We propose a deep semi-supervised framework for facial action unit recognition from partially AU-labeled facial images.
The proposed approach successfully captures AU distributions through adversarial learning and outperforms state-of-the-art AU recognition work.
arXiv Detail & Related papers (2021-06-04T04:50:00Z)
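For intuition about the within-clip contrastive objective mentioned in the entry above, here is a minimal, self-contained InfoNCE-style sketch. It is not taken from any of the listed papers; the batch layout (one positive pair per clip), the temperature value, and the function name clip_info_nce are assumptions for illustration.

    import torch
    import torch.nn.functional as F

    def clip_info_nce(anchor, positive, temperature=0.1):
        # anchor[i] and positive[i] are embeddings of two frames from the same
        # clip; every other row of the batch serves as a negative (assumed setup).
        a = F.normalize(anchor, dim=1)
        p = F.normalize(positive, dim=1)
        logits = a @ p.t() / temperature   # (B, B) scaled cosine similarities
        targets = torch.arange(a.size(0))  # the matching index is the positive
        return F.cross_entropy(logits, targets)

    # Toy usage: embeddings of two frames from each of 8 clips.
    loss = clip_info_nce(torch.randn(8, 128), torch.randn(8, 128))
    print(loss.item())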