CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition
- URL: http://arxiv.org/abs/2208.07221v1
- Date: Wed, 10 Aug 2022 15:46:05 GMT
- Title: CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition
- Authors: Pablo Barros, Alessandra Sciutti
- Abstract summary: We propose Contrastive Inhibitory AdaptatiOn (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets with distinct affective representations.
- Score: 80.07590100872548
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Current facial expression recognition systems demand an expensive re-training
routine when deployed to different scenarios than they were trained for.
Biasing them towards learning specific facial characteristics, instead of
performing typical transfer learning methods, might help these systems to
maintain high performance in different tasks, but with a reduced training
effort. In this paper, we propose Contrastive Inhibitory AdaptatiOn (CIAO), a
mechanism that adapts the last layer of facial encoders to depict specific
affective characteristics on different datasets. CIAO improves facial
expression recognition performance across six datasets with distinct affective
representations, in particular when compared with state-of-the-art models. In
our discussion, we provide an in-depth analysis of
how the learned high-level facial features are represented, and how they
contribute to each individual dataset's characteristics. We conclude by
discussing how CIAO positions itself within recent findings on non-universal
facial expression perception, and its impact on facial expression recognition
research.
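The abstract does not spell out how the inhibitory adaptation layer is built. As a rough illustration only, the sketch below assumes a shunting-style inhibitory convolution in the spirit of the authors' earlier FaceChannel work, attached to a frozen pre-trained encoder so that only the adapter is re-trained per dataset; `InhibitoryAdapter` and all hyperparameters are hypothetical, not the paper's implementation.

```python
import torch
import torch.nn as nn

class InhibitoryAdapter(nn.Module):
    """Illustrative last-layer adapter: an excitatory conv whose output is
    modulated (inhibited) by a parallel learned inhibitory conv. This mirrors
    shunting-inhibition layers from the FaceChannel line of work; the exact
    CIAO formulation may differ."""
    def __init__(self, channels: int):
        super().__init__()
        self.excitatory = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.inhibitory = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        exc = torch.relu(self.excitatory(x))
        inh = torch.sigmoid(self.inhibitory(x))  # gate in (0, 1)
        return exc * (1.0 - inh)                 # suppress inhibited features

# Usage: freeze a pre-trained encoder and train only the adapter per dataset.
encoder = nn.Sequential(nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU())
for p in encoder.parameters():
    p.requires_grad = False
adapter = InhibitoryAdapter(64)
features = adapter(encoder(torch.randn(8, 3, 96, 96)))
print(features.shape)  # torch.Size([8, 64, 48, 48])
```

In CIAO's setting, such an adapter would presumably be trained with a contrastive objective (see the NT-Xent sketch further below) rather than plain supervision.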
Related papers
- Emotic Masked Autoencoder with Attention Fusion for Facial Expression Recognition [1.4374467687356276]
This paper presents an innovative approach integrating the MAE-Face self-supervised learning (SSL) method and a multi-view Fusion Attention mechanism for expression classification.
We suggest easy-to-implement, training-free frameworks that highlight key facial features, to determine whether such features can serve as guides for the model.
The efficacy of this method is validated by improvements in model performance on the Aff-wild2 dataset.
arXiv Detail & Related papers (2024-03-19T16:21:47Z)
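The summary names a multi-view Fusion Attention mechanism without giving its form; a common pattern is to score each view's feature vector with a learned projection and fuse the views by a softmax-weighted sum. A minimal sketch under that assumption (`AttentionFusion` is a hypothetical module, not the paper's):

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Weight N per-view feature vectors with a learned scoring head and
    fuse them into one representation via a softmax-weighted sum."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (B, N, D) stacked per-view features
        weights = torch.softmax(self.score(views), dim=1)  # (B, N, 1)
        return (weights * views).sum(dim=1)                # (B, D)

fused = AttentionFusion(256)(torch.randn(8, 3, 256))
print(fused.shape)  # torch.Size([8, 256])
```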
- Contrastive Learning of View-Invariant Representations for Facial Expressions Recognition [27.75143621836449]
We propose ViewFX, a novel view-invariant FER framework based on contrastive learning.
We test the proposed framework on two public multi-view facial expression recognition datasets.
arXiv Detail & Related papers (2023-11-12T14:05:09Z)
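The summary only says ViewFX is based on contrastive learning. One standard instantiation treats embeddings of the same expression captured from two viewpoints as a positive pair under an NT-Xent loss; the sketch below follows that generic recipe, and the pairing scheme and temperature are assumptions rather than ViewFX's published details:

```python
import torch
import torch.nn.functional as F

def nt_xent(z_view1: torch.Tensor, z_view2: torch.Tensor,
            temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent over a batch where z_view1[i] and z_view2[i] are embeddings of
    the same expression seen from two viewpoints (the positive pair); all
    other samples in the batch act as negatives."""
    z = F.normalize(torch.cat([z_view1, z_view2], dim=0), dim=1)
    sim = z @ z.t() / temperature                      # cosine similarities
    n = z_view1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf'))         # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```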
- Learning Diversified Feature Representations for Facial Expression Recognition in the Wild [97.14064057840089]
We propose a mechanism to diversify the features extracted by CNN layers of state-of-the-art facial expression recognition architectures.
Experimental results on three well-known in-the-wild facial expression recognition datasets, AffectNet, FER+, and RAF-DB, show the effectiveness of our method.
arXiv Detail & Related papers (2022-10-17T19:25:28Z)
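The diversification objective is not specified in the summary. One plausible reading is a regularizer that penalizes correlation between channel activations, added to the usual classification loss; the sketch below implements that assumption, which may differ from the paper's actual mechanism:

```python
import torch

def diversity_penalty(feats: torch.Tensor) -> torch.Tensor:
    """Penalize off-diagonal correlation between channels of a (B, C, H, W)
    feature map, pushing channels toward distinct responses. Illustrative
    only; the paper's actual diversification mechanism may differ."""
    b, c, h, w = feats.shape
    flat = feats.reshape(b, c, h * w)
    flat = flat - flat.mean(dim=2, keepdim=True)
    flat = flat / (flat.norm(dim=2, keepdim=True) + 1e-8)
    corr = flat @ flat.transpose(1, 2)                 # (B, C, C)
    off_diag = corr - torch.diag_embed(torch.diagonal(corr, dim1=1, dim2=2))
    return off_diag.pow(2).mean()

# Added to the task loss: total = ce_loss + lambda * diversity_penalty(feats)
penalty = diversity_penalty(torch.randn(4, 32, 7, 7))
print(penalty.item())
```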
- Deep Collaborative Multi-Modal Learning for Unsupervised Kinship Estimation [53.62256887837659]
Kinship verification is a long-standing research challenge in computer vision.
We propose a novel deep collaborative multi-modal learning (DCML) to integrate the underlying information presented in facial properties.
Our DCML method consistently outperforms state-of-the-art kinship verification methods.
arXiv Detail & Related papers (2021-09-07T01:34:51Z)
- Exploiting Emotional Dependencies with Graph Convolutional Networks for Facial Expression Recognition [31.40575057347465]
This paper proposes a novel multi-task learning framework to recognize facial expressions in-the-wild.
A shared feature representation is learned for both discrete and continuous recognition in an MTL setting.
The results of our experiments show that our method outperforms the current state-of-the-art methods on discrete FER.
arXiv Detail & Related papers (2021-06-07T10:20:05Z)
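A shared representation serving both discrete and continuous recognition usually amounts to one backbone feeding two heads whose losses are summed. A stripped-down sketch of that MTL pattern (the paper's graph-convolutional modeling of emotion dependencies is deliberately omitted):

```python
import torch
import torch.nn as nn

class MultiTaskFERHead(nn.Module):
    """One shared representation, two tasks: discrete expression classes and
    continuous valence/arousal. A simplification of the paper's MTL setup;
    its graph-convolutional dependency modeling is not reproduced here."""
    def __init__(self, feat_dim: int = 512, num_classes: int = 7):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)  # discrete head
        self.regressor = nn.Linear(feat_dim, 2)             # valence, arousal

    def forward(self, shared_feats):
        return self.classifier(shared_feats), self.regressor(shared_feats)

head = MultiTaskFERHead()
logits, va = head(torch.randn(8, 512))
loss = nn.functional.cross_entropy(logits, torch.randint(0, 7, (8,))) \
     + nn.functional.mse_loss(va, torch.rand(8, 2) * 2 - 1)
print(loss.item())
```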
- I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts to recognizing facial expressions from people wearing masks.
We also perform feature-level visualizations to demonstrate how the FaceChannel's ability to learn and combine facial features changes in a constrained social-interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z)
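In the spirit of this study, mask robustness can be probed by occluding the lower half of each face crop before inference and comparing predictions; the helper below is a hypothetical simplification, not the paper's evaluation protocol:

```python
import torch

def occlude_lower_face(batch: torch.Tensor, fraction: float = 0.5) -> torch.Tensor:
    """Zero out the bottom `fraction` of each (B, C, H, W) face crop to
    simulate a surgical mask covering the mouth and nose."""
    occluded = batch.clone()
    h = batch.size(2)
    occluded[:, :, int(h * (1 - fraction)):, :] = 0.0
    return occluded

masked = occlude_lower_face(torch.rand(4, 3, 96, 96))
print(masked[:, :, -1, :].abs().sum().item())  # 0.0: bottom rows zeroed
```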
- A Multi-resolution Approach to Expression Recognition in the Wild [9.118706387430883]
We propose a multi-resolution approach to solve the Facial Expression Recognition task.
We ground our intuition on the observation that face images are often acquired at different resolutions.
To this end, we use a ResNet-like architecture equipped with Squeeze-and-Excitation blocks, trained on the Affect-in-the-Wild 2 dataset.
arXiv Detail & Related papers (2021-03-09T21:21:02Z)
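Squeeze-and-Excitation is a standard published block (Hu et al., 2018), so the sketch below follows its canonical form; only its placement inside the paper's ResNet-like architecture is assumed:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard Squeeze-and-Excitation: global-average-pool each channel
    ("squeeze"), pass through a small bottleneck MLP, and rescale channels
    by the resulting sigmoid gates ("excitation")."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        return x * weights

out = SEBlock(64)(torch.randn(2, 64, 14, 14))
print(out.shape)  # torch.Size([2, 64, 14, 14])
```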
- Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
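Auditing such a vulnerability typically means stratifying a recognition metric by the expression shown in each probe image; the helper below sketches that bucketing and is far simpler than the paper's full analysis:

```python
from collections import defaultdict

def accuracy_by_expression(preds, labels, expressions):
    """Group recognition correctness by the expression shown in each probe
    image to expose expression-dependent performance gaps."""
    hits, totals = defaultdict(int), defaultdict(int)
    for p, y, e in zip(preds, labels, expressions):
        totals[e] += 1
        hits[e] += int(p == y)
    return {e: hits[e] / totals[e] for e in totals}

print(accuracy_by_expression([1, 1, 0, 1], [1, 0, 0, 1],
                             ['happy', 'happy', 'neutral', 'fear']))
```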
- Introducing Representations of Facial Affect in Automated Multimodal Deception Detection [18.16596562087374]
Automated deception detection systems can enhance health, justice, and security in society.
This paper presents a novel analysis of the power of dimensional representations of facial affect for automated deception detection.
We used a video dataset of people communicating truthfully or deceptively in real-world, high-stakes courtroom situations.
arXiv Detail & Related papers (2020-08-31T05:12:57Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
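The summary mentions a self-attention-based generator that integrates local features with their long-range dependencies; the SAGAN-style self-attention block (Zhang et al., 2019) is the usual realization, sketched below, though whether DA-GAN uses this exact form is an assumption:

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over spatial positions, letting each
    location attend to (long-range) features everywhere else in the map."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                     # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)            # (B, HW, HW)
        v = self.value(x).flatten(2)                   # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection

out = SelfAttention2d(64)(torch.randn(2, 64, 16, 16))
print(out.shape)  # torch.Size([2, 64, 16, 16])
```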