Learning Diversified Feature Representations for Facial Expression
Recognition in the Wild
- URL: http://arxiv.org/abs/2210.09381v1
- Date: Mon, 17 Oct 2022 19:25:28 GMT
- Title: Learning Diversified Feature Representations for Facial Expression
Recognition in the Wild
- Authors: Negar Heidari, Alexandros Iosifidis
- Abstract summary: We propose a mechanism to diversify the features extracted by CNN layers of state-of-the-art facial expression recognition architectures.
Experimental results on three well-known facial expression recognition in-the-wild datasets, AffectNet, FER+, and RAF-DB, show the effectiveness of our method.
- Score: 97.14064057840089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diversity of the features extracted by deep neural networks is
important for enhancing the model's generalization ability and, accordingly,
its performance in different learning tasks. Facial expression recognition in
the wild has attracted interest in recent years due to the challenge of
extracting discriminative and informative features from occluded images in
real-world scenarios. In this paper, we propose a mechanism to diversify the
features extracted by the CNN layers of state-of-the-art facial expression
recognition architectures, enhancing the model's capacity to learn
discriminative features. To evaluate the effectiveness of the proposed
approach, we incorporate this mechanism into two state-of-the-art models to
(i) diversify local/global features in an attention-based model and (ii)
diversify the features extracted by different learners in an ensemble-based
model. Experimental results on three well-known in-the-wild facial expression
recognition datasets, AffectNet, FER+, and RAF-DB, show the effectiveness of
our method, achieving state-of-the-art performance of 89.99% on RAF-DB and
89.34% on FER+, and a competitive accuracy of 60.02% on the AffectNet
dataset.
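The abstract does not give the exact form of the diversification mechanism. As a hedged sketch, one common way to encourage diverse features is to penalize pairwise cosine similarity between branch or learner features; all names and the loss weight below are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def diversity_loss(features: torch.Tensor) -> torch.Tensor:
    """Penalize pairwise cosine similarity between K feature vectors.

    features: (K, D) tensor, one row per branch/learner/attention head,
    K >= 2. Returns a scalar; 0 means all pairs are mutually orthogonal.
    """
    f = F.normalize(features, dim=1)           # unit-norm rows
    sim = f @ f.t()                            # (K, K) cosine similarities
    k = sim.size(0)
    off_diag = sim - torch.eye(k, device=sim.device)
    return off_diag.pow(2).sum() / (k * (k - 1))

# Hypothetical usage: add to the task loss with a small weight.
# loss = ce_loss + 0.1 * diversity_loss(branch_features)
```

Such a term would be combined with the usual classification objective; the weight controls how strongly branches are pushed apart.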
Related papers
- Emotic Masked Autoencoder with Attention Fusion for Facial Expression Recognition [1.4374467687356276]
This paper presents an innovative approach integrating the MAE-Face self-supervised learning (SSL) method and a multi-view Fusion Attention mechanism for expression classification.
We suggest easy-to-implement, training-free frameworks aimed at highlighting key facial features, to determine whether such features can guide the model.
The efficacy of this method is validated by improvements in model performance on the Aff-wild2 dataset.
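The abstract gives no equations for the fusion; the snippet below is one plausible, hedged reading of "multi-view Fusion Attention" as a learned weighted combination of feature views (class name and shapes are assumptions):

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse V feature views with learned softmax attention weights."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one scalar score per view

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, V, dim)
        w = torch.softmax(self.score(views), dim=1)  # (batch, V, 1)
        return (w * views).sum(dim=1)                # (batch, dim)

# fused = AttentionFusion(512)(torch.randn(8, 3, 512))  # e.g. 3 views
```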
arXiv Detail & Related papers (2024-03-19T16:21:47Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance over six different datasets with distinct affective representations.
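The exact CIAO objective is not stated here; as a generic illustration (not the paper's formulation), a classic pairwise contrastive loss could be applied to embeddings from the adapted last layer while the rest of the encoder stays frozen:

```python
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(z, labels, margin=1.0):
    """Classic pairwise contrastive loss over a batch of embeddings.

    z: (N, D) embeddings from the adapted last layer; labels: (N,).
    Same-class pairs are pulled together; different-class pairs are
    pushed at least `margin` apart. In a CIAO-like setup, only the
    encoder's last layer would receive gradients (an assumption here).
    """
    dist = torch.cdist(z, z)                        # (N, N) distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    pos = same * dist.pow(2)
    neg = (1 - same) * F.relu(margin - dist).pow(2)
    n = z.size(0)
    mask = 1 - torch.eye(n, device=z.device)        # ignore self-pairs
    return ((pos + neg) * mask).sum() / (n * (n - 1))
```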
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Multi-Branch Deep Radial Basis Function Networks for Facial Emotion Recognition [80.35852245488043]
We propose a CNN based architecture enhanced with multiple branches formed by radial basis function (RBF) units.
RBF units capture local patterns shared by similar instances using an intermediate representation.
We show that it is the incorporation of local information that makes the proposed model competitive.
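Gaussian RBF units have a standard form, phi(x) = exp(-||x - c||^2 / (2 sigma^2)); a minimal PyTorch version follows (the multi-branch wiring around the units is not specified in the summary and is omitted):

```python
import torch
import torch.nn as nn

class RBFUnit(nn.Module):
    """Gaussian radial basis function units over an input feature vector."""
    def __init__(self, in_dim: int, num_units: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_units, in_dim))
        self.log_sigma = nn.Parameter(torch.zeros(num_units))  # per-unit width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> activations: (batch, num_units)
        d2 = torch.cdist(x, self.centers).pow(2)    # squared distances to centers
        return torch.exp(-d2 / (2 * self.log_sigma.exp().pow(2)))
```

Each unit responds most strongly to inputs near its center, which is how RBF units capture local patterns shared by similar instances.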
arXiv Detail & Related papers (2021-09-07T21:05:56Z)
- Progressive Spatio-Temporal Bilinear Network with Monte Carlo Dropout for Landmark-based Facial Expression Recognition with Uncertainty Estimation [93.73198973454944]
The performance of our method is evaluated on three widely used datasets.
It is comparable to that of video-based state-of-the-art methods while having much lower complexity.
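The summary reports results only; for context, Monte Carlo dropout, named in the title, is typically implemented by keeping dropout active at test time and averaging several stochastic forward passes. A minimal sketch (function name and pass count are assumptions):

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, passes: int = 20):
    """Monte Carlo dropout: average T stochastic forward passes.

    Keeps dropout active at inference; the per-class variance across
    passes serves as an uncertainty estimate. Note: model.train() also
    affects batch-norm layers, which would need handling in practice.
    """
    model.train()  # enable dropout at inference time
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(passes)])
    return probs.mean(dim=0), probs.var(dim=0)
```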
arXiv Detail & Related papers (2021-06-08T13:40:30Z)
- Exploiting Emotional Dependencies with Graph Convolutional Networks for Facial Expression Recognition [31.40575057347465]
This paper proposes a novel multi-task learning framework to recognize facial expressions in-the-wild.
A shared feature representation is learned for both discrete and continuous recognition in an MTL setting.
The results of our experiments show that our method outperforms the current state-of-the-art methods on discrete FER.
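A minimal sketch of such a shared-representation multi-task setup, omitting the paper's graph-convolutional component (all module names and the loss weighting are assumptions):

```python
import torch.nn as nn

class SharedFERModel(nn.Module):
    """Shared backbone with a discrete-expression head and a
    continuous valence/arousal head, trained jointly (illustrative)."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.cls_head = nn.Linear(feat_dim, num_classes)  # discrete FER
        self.va_head = nn.Linear(feat_dim, 2)             # valence/arousal

    def forward(self, x):
        f = self.backbone(x)
        return self.cls_head(f), self.va_head(f)

# Hypothetical joint objective:
# loss = ce(logits, y) + lambda_va * mse(va, va_target)
```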
arXiv Detail & Related papers (2021-06-07T10:20:05Z)
- Feature Decomposition and Reconstruction Learning for Effective Facial Expression Recognition [80.17419621762866]
We propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition.
FDRL consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN).
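The abstract does not detail the two networks; the skeleton below only illustrates the decompose-then-reconstruct idea and is not the paper's FDN/FRN design:

```python
import torch
import torch.nn as nn

class FDRLSketch(nn.Module):
    """Illustrative skeleton only: decompose backbone features into K
    latent components, then reconstruct an expression feature from them.
    The actual FDN/FRN internals are more elaborate than shown here."""
    def __init__(self, feat_dim: int, num_latents: int):
        super().__init__()
        self.decompose = nn.ModuleList(
            [nn.Linear(feat_dim, feat_dim) for _ in range(num_latents)])
        self.reconstruct = nn.Linear(num_latents * feat_dim, feat_dim)

    def forward(self, f):                     # f: (batch, feat_dim)
        latents = [torch.relu(d(f)) for d in self.decompose]
        return self.reconstruct(torch.cat(latents, dim=1))
```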
arXiv Detail & Related papers (2021-04-12T02:22:45Z)
- A Multi-resolution Approach to Expression Recognition in the Wild [9.118706387430883]
We propose a multi-resolution approach to solve the Facial Expression Recognition task.
We ground our intuition on the observation that face images are often acquired at different resolutions.
To this end, we use a ResNet-like architecture, equipped with Squeeze-and-Excitation blocks, trained on the Affect-in-the-Wild 2 dataset.
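Squeeze-and-Excitation blocks are a standard component (Hu et al., 2018); a minimal PyTorch version for reference:

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard Squeeze-and-Excitation block: global pooling to get
    per-channel context, then a small MLP to rescale each channel."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                               # excitation: rescale channels
```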
arXiv Detail & Related papers (2021-03-09T21:21:02Z)
- The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a light-weight neural network that has far fewer parameters than common deep neural networks.
We demonstrate how our model achieves a comparable, if not better, performance to the current state-of-the-art in FER.
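The FaceChannel topology itself is given in the paper, not here; as a hedged illustration of the parameter-budget argument, one can compare trainable-parameter counts, with the small CNN below serving only as a placeholder:

```python
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    """Count trainable parameters of a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# A placeholder small CNN, not the actual FaceChannel topology.
tiny = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 8),                  # e.g. 8 expression classes
)
print(f"{count_params(tiny):,} trainable parameters")
```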
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
- Two-Level Adversarial Visual-Semantic Coupling for Generalized Zero-shot Learning [21.89909688056478]
We propose a new two-level joint idea to augment the generative network with an inference network during training.
This provides strong cross-modal interaction for effective transfer of knowledge between visual and semantic domains.
We evaluate our approach on four benchmark datasets against several state-of-the-art methods and report its performance.
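The abstract leaves the objectives unspecified; the skeleton below shows only the general shape of coupling a feature generator with an inference network, under assumed dimensions, and is not this paper's exact design:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a class semantic embedding plus noise to a visual feature."""
    def __init__(self, sem_dim, noise_dim, feat_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, feat_dim))

    def forward(self, sem, noise):
        return self.net(torch.cat([sem, noise], dim=1))

class InferenceNet(nn.Module):
    """Maps a visual feature back to the semantic space, coupling domains."""
    def __init__(self, feat_dim, sem_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, sem_dim))

    def forward(self, feat):
        return self.net(feat)

# Training would couple these adversarially (e.g. via a discriminator)
# with a consistency term between sem and InferenceNet(Generator(sem, z)).
```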
arXiv Detail & Related papers (2020-07-15T15:34:09Z)