AU-Expression Knowledge Constrained Representation Learning for Facial
Expression Recognition
- URL: http://arxiv.org/abs/2012.14587v2
- Date: Fri, 2 Apr 2021 07:00:09 GMT
- Title: AU-Expression Knowledge Constrained Representation Learning for Facial
Expression Recognition
- Authors: Tao Pu, Tianshui Chen, Yuan Xie, Hefeng Wu, and Liang Lin
- Abstract summary: We propose an AU-Expression Knowledge Constrained Representation Learning (AUE-CRL) framework to learn the AU representations without AU annotations and adaptively use representations to facilitate facial expression recognition.
We conduct experiments on the challenging uncontrolled datasets to demonstrate the superiority of the proposed framework over current state-of-the-art methods.
- Score: 79.8779790682205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recognizing human emotions/expressions automatically is a
much-desired capability for intelligent robotics, as it can promote better
communication and cooperation with humans. Current deep-learning-based
algorithms may achieve impressive performance in lab-controlled environments,
but they often fail to recognize expressions accurately in uncontrolled,
in-the-wild situations. Fortunately, facial action units (AUs) describe subtle
facial behaviors, and they can help distinguish uncertain and ambiguous expressions.
In this work, we explore the correlations among the action units and facial
expressions, and devise an AU-Expression Knowledge Constrained Representation
Learning (AUE-CRL) framework to learn the AU representations without AU
annotations and adaptively use representations to facilitate facial expression
recognition. Specifically, it leverages AU-expression correlations to guide the
learning of the AU classifiers, and thus it can obtain AU representations
without incurring any AU annotations. Then, it introduces a knowledge-guided
attention mechanism that mines useful AU representations under the constraint
of AU-expression correlations. In this way, the framework can capture local
discriminative and complementary features to enhance facial representation for
facial expression recognition. We conduct experiments on the challenging
uncontrolled datasets to demonstrate the superiority of the proposed framework
over current state-of-the-art methods. Codes and trained models are available
at https://github.com/HCPLab-SYSU/AUE-CRL.
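The knowledge-guided attention step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; all names (`au_feats`, `au_scores`, `prior`, `expr_logits`) and the exact gating form are assumptions made for illustration: a data-driven attention over per-AU features is modulated by an assumed AU-expression correlation prior, weighted by the current expression prediction.

```python
import numpy as np

# Hypothetical sketch of knowledge-guided attention (not the authors' code).
# Assumptions: `au_feats` holds one feature vector per AU, `prior` holds an
# assumed AU-expression correlation matrix in [0, 1], and `expr_logits` are
# the model's current expression prediction logits.

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def knowledge_guided_attention(au_feats, au_scores, prior, expr_logits):
    """Weight AU features by learned relevance, constrained by the prior.

    au_feats:    (num_aus, dim) per-AU feature vectors
    au_scores:   (num_aus,) data-driven attention logits for each AU
    prior:       (num_aus, num_exprs) assumed AU-expression correlations
    expr_logits: (num_exprs,) current expression prediction logits
    """
    expr_probs = softmax(expr_logits)        # (num_exprs,)
    # Expected relevance of each AU under the predicted expression distribution.
    knowledge_gate = prior @ expr_probs      # (num_aus,)
    # Combine data-driven attention with the knowledge constraint.
    attn = softmax(au_scores) * knowledge_gate
    attn = attn / (attn.sum() + 1e-8)
    # Pool AU features into a single expression-aware representation.
    return attn @ au_feats                   # (dim,)

rng = np.random.default_rng(0)
pooled = knowledge_guided_attention(
    au_feats=rng.normal(size=(12, 64)),
    au_scores=rng.normal(size=12),
    prior=rng.uniform(size=(12, 7)),
    expr_logits=rng.normal(size=7),
)
print(pooled.shape)  # (64,)
```

The design intuition, as the abstract describes it, is that the prior suppresses AUs that are uncorrelated with the likely expression, so the pooled feature emphasizes local, discriminative facial regions.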
Related papers
- Open-Set Video-based Facial Expression Recognition with Human Expression-sensitive Prompting [28.673734895558322]
We introduce a challenging Open-set Video-based Facial Expression Recognition task, aiming at identifying unknown human facial expressions.
Existing approaches address open-set recognition by leveraging large-scale vision-language models like CLIP to identify unseen classes.
We propose a novel Human Expression-Sensitive Prompting (HESP) mechanism to significantly enhance CLIP's ability to model video-based facial expression details effectively.
arXiv Detail & Related papers (2024-04-26T01:21:08Z) - Emotic Masked Autoencoder with Attention Fusion for Facial Expression Recognition [1.4374467687356276]
This paper presents an innovative approach integrating the MAE-Face self-supervised learning (SSL) method and multi-view Fusion Attention mechanism for expression classification.
We propose easy-to-implement, training-free frameworks aimed at highlighting key facial features to determine whether such features can serve as guides for the model.
The efficacy of this method is validated by improvements in model performance on the Aff-wild2 dataset.
arXiv Detail & Related papers (2024-03-19T16:21:47Z) - Contrastive Learning of Person-independent Representations for Facial
Action Unit Detection [70.60587475492065]
We formulate the self-supervised AU representation learning signals in two ways.
We contrastively learn the AU representation within a video clip and devise a cross-identity reconstruction mechanism to learn person-independent representations.
Our method outperforms other contrastive learning methods and significantly closes the performance gap between the self-supervised and supervised AU detection approaches.
arXiv Detail & Related papers (2024-03-06T01:49:28Z) - Guided Interpretable Facial Expression Recognition via Spatial Action Unit Cues [55.97779732051921]
A new learning strategy is proposed to explicitly incorporate AU cues into classifier training.
We show that our strategy can improve layer-wise interpretability without degrading classification performance.
arXiv Detail & Related papers (2024-02-01T02:13:49Z) - Global-to-local Expression-aware Embeddings for Facial Action Unit
Detection [18.629509376315752]
We propose a novel fine-grained Global Expression representation to capture subtle and continuous facial movements.
It consists of an AU feature map extractor and a corresponding AU mask extractor.
Our method outperforms previous works and achieves state-of-the-art performance on widely used face datasets.
arXiv Detail & Related papers (2022-10-27T04:00:04Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - An Attribute-Aligned Strategy for Learning Speech Representation [57.891727280493015]
We propose an attribute-aligned learning strategy to derive speech representations that can flexibly address these issues via an attribute-selection mechanism.
Specifically, we propose a layered-representation variational autoencoder (LR-VAE), which factorizes speech representation into attribute-sensitive nodes.
Our proposed method achieves competitive performance on identity-free SER and better performance on emotionless SV.
arXiv Detail & Related papers (2021-06-05T06:19:14Z) - Exploring Adversarial Learning for Deep Semi-Supervised Facial Action
Unit Recognition [38.589141957375226]
We propose a deep semi-supervised framework for facial action unit recognition from partially AU-labeled facial images.
The proposed approach successfully captures AU distributions through adversarial learning and outperforms state-of-the-art AU recognition work.
arXiv Detail & Related papers (2021-06-04T04:50:00Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences.