Revisiting Self-Supervised Contrastive Learning for Facial Expression
Recognition
- URL: http://arxiv.org/abs/2210.03853v1
- Date: Sat, 8 Oct 2022 00:04:27 GMT
- Title: Revisiting Self-Supervised Contrastive Learning for Facial Expression
Recognition
- Authors: Yuxuan Shu and Xiao Gu and Guang-Zhong Yang and Benny Lo
- Abstract summary: We revisit the use of self-supervised contrastive learning and explore three core strategies to enforce expression-specific representations.
Experimental results show that our proposed method outperforms the current state-of-the-art self-supervised learning methods.
- Score: 39.647301516599505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of most advanced facial expression recognition works relies
heavily on large-scale annotated datasets. However, acquiring clean and
consistent annotations for facial expression datasets poses great challenges.
On the other hand, self-supervised contrastive learning has gained great
popularity due to its simple yet effective instance discrimination training
strategy, which can potentially circumvent the annotation issue. Nevertheless,
there remain inherent disadvantages of instance-level discrimination, which are
even more challenging when faced with complicated facial representations. In
this paper, we revisit the use of self-supervised contrastive learning and
explore three core strategies to enforce expression-specific representations
and to minimize the interference from other facial attributes, such as identity
and face styling. Experimental results show that our proposed method
outperforms the current state-of-the-art self-supervised learning methods, in
terms of both categorical and dimensional facial expression recognition tasks.
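The instance-discrimination strategy the abstract refers to is commonly instantiated as an NT-Xent (InfoNCE) objective, as popularized by SimCLR: two augmented views of the same image are pulled together while all other samples in the batch are pushed apart. The sketch below is a generic NumPy illustration of that objective, not the authors' exact formulation; the function name and temperature value are assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Each sample's positive is its other view; the remaining 2N - 2
    samples in the batch act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # Positive index for sample i is its other view: i + N (mod 2N).
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logits = sim - sim.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

Minimizing this loss is what drives instance-level discrimination; the paper's contribution lies in reshaping the positive/negative construction so the learned representation reflects expression rather than identity or styling.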
Related papers
- Emotic Masked Autoencoder with Attention Fusion for Facial Expression Recognition [1.4374467687356276]
This paper presents an innovative approach integrating the MAE-Face self-supervised learning (SSL) method and multi-view Fusion Attention mechanism for expression classification.
We suggest easy-to-implement, training-free frameworks aimed at highlighting key facial features to determine whether such features can serve as guides for the model.
The efficacy of this method is validated by improvements in model performance on the Aff-wild2 dataset.
arXiv Detail & Related papers (2024-03-19T16:21:47Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Weakly-Supervised Text-driven Contrastive Learning for Facial Behavior Understanding [12.509298933267221]
We introduce a two-stage Contrastive Learning with Text-Embedded framework for facial behavior understanding.
The first stage is a weakly-supervised contrastive learning method that learns representations from positive-negative pairs constructed using coarse-grained activity information.
The second stage trains recognition of facial expressions or facial action units by maximizing the similarity between images and their corresponding text label names.
arXiv Detail & Related papers (2023-03-31T18:21:09Z)
- Pose-disentangled Contrastive Learning for Self-supervised Facial Representation [12.677909048435408]
We propose a novel Pose-disentangled Contrastive Learning (PCL) method for general self-supervised facial representation.
Our PCL first devises a pose-disentangled decoder (PDD), which disentangles the pose-related features from the face-aware features.
We then introduce a pose-related contrastive learning scheme that learns pose-related information based on data augmentation of the same image.
arXiv Detail & Related papers (2022-11-24T09:30:51Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations [183.03278932562438]
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach yields more efficient visual representations, offering a key insight for future research on self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z)
- Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
- Deep Multi-task Learning for Facial Expression Recognition and Synthesis Based on Selective Feature Sharing [28.178390846446938]
We propose a novel selective feature-sharing method, and establish a multi-task network for facial expression recognition and facial expression synthesis.
The proposed method can effectively transfer beneficial features between different tasks, while filtering out useless and harmful information.
Experimental results show that the proposed method achieves state-of-the-art performance on those commonly used facial expression recognition benchmarks.
arXiv Detail & Related papers (2020-07-09T02:29:34Z)
- Fairness by Learning Orthogonal Disentangled Representations [50.82638766862974]
We propose a novel disentanglement approach to the invariant representation problem.
We enforce the meaningful representation to be agnostic to sensitive information via an entropy-based objective.
The proposed approach is evaluated on five publicly available datasets.
arXiv Detail & Related papers (2020-03-12T11:09:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.