RAF-AU Database: In-the-Wild Facial Expressions with Subjective Emotion
Judgement and Objective AU Annotations
- URL: http://arxiv.org/abs/2008.05196v3
- Date: Mon, 28 Sep 2020 07:20:14 GMT
- Title: RAF-AU Database: In-the-Wild Facial Expressions with Subjective Emotion
Judgement and Objective AU Annotations
- Authors: Wenjing Yan, Shan Li, Chengtao Que, JiQuan Pei, Weihong Deng
- Abstract summary: We develop the RAF-AU database, which employs a sign-based (i.e., AUs) and judgement-based (i.e., perceived emotion) approach to annotating blended facial expressions in the wild.
We also conduct a preliminary investigation of which key AUs contribute most to a perceived emotion, and the relationship between AUs and facial expressions.
- Score: 36.93475723886278
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Much of the work on automatic facial expression recognition relies on
databases containing a fixed set of emotion classes and their exaggerated
facial configurations (typically the six prototypical facial expressions), based on
Ekman's Basic Emotion Theory. However, recent studies have shown that facial
expressions in everyday life often blend multiple basic emotions, and the emotion
labels for these in-the-wild facial expressions cannot easily be assigned from
pre-defined AU patterns alone. How to analyze the action units (AUs) for such
complex expressions remains an open question. To address this issue, we develop
the RAF-AU database, which employs a sign-based (i.e., AU) and judgement-based
(i.e., perceived emotion) approach to annotating blended facial expressions in
the wild. We first reviewed the annotation methods used in existing databases and
identified crowdsourcing as a promising strategy for labeling in-the-wild facial
expressions. RAF-AU was then finely annotated by experienced coders, and we
conducted a preliminary investigation of which key AUs contribute most to a
perceived emotion and of the relationship between AUs and facial expressions.
Finally, we provide a baseline for AU recognition on RAF-AU using popular
features and multi-label learning methods.
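The baseline is only described at a high level in the abstract. As a minimal sketch of how AU recognition can be cast as a multi-label problem, the snippet below assumes HOG features and independent per-AU binary classifiers; the feature choice, AU subset, and stand-in data are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative multi-label AU recognition sketch; NOT the RAF-AU baseline itself.
# Assumes aligned grayscale face crops and binary AU-occurrence labels; random
# stand-in data is used so the snippet runs without the actual dataset.
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
AUS = ["AU1", "AU2", "AU4", "AU6", "AU12", "AU25"]  # hypothetical subset of coded AUs

# Stand-in for loading RAF-AU: 200 fake 96x96 face crops with random AU labels.
images = rng.random((200, 96, 96))
labels = rng.integers(0, 2, size=(200, len(AUS)))

# "Popular features": one HOG descriptor per face crop.
features = np.array([
    hog(img, pixels_per_cell=(16, 16), cells_per_block=(2, 2)) for img in images
])

# Multi-label learning: one independent binary classifier per AU.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
split = 150
clf.fit(features[:split], labels[:split])
pred = clf.predict(features[split:])
print("macro F1 over AUs:", f1_score(labels[split:], pred, average="macro"))
```

In practice the features, classifiers, and evaluation protocol would follow whatever the paper specifies; the sketch only shows the multi-label formulation in which each AU is predicted independently.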
Related papers
- ExpLLM: Towards Chain of Thought for Facial Expression Recognition [61.49849866937758]
We propose a novel method called ExpLLM to generate an accurate chain of thought (CoT) for facial expression recognition.
Specifically, we have designed the CoT mechanism from three key perspectives: key observations, overall emotional interpretation, and conclusion.
In experiments on the RAF-DB and AffectNet datasets, ExpLLM outperforms current state-of-the-art FER methods.
arXiv Detail & Related papers (2024-09-04T15:50:16Z) - How Do You Perceive My Face? Recognizing Facial Expressions in Multi-Modal Context by Modeling Mental Representations [5.895694050664867]
We introduce a novel approach for facial expression classification that goes beyond simple classification tasks.
Our model accurately classifies a perceived face and synthesizes the corresponding mental representation perceived by a human when observing a face in context.
We evaluate synthesized expressions in a human study, showing that our model effectively produces approximations of human mental representations.
arXiv Detail & Related papers (2024-09-04T09:32:40Z) - Interpretable Explainability in Facial Emotion Recognition and
Gamification for Data Collection [0.0]
Training facial emotion recognition models requires large sets of data and costly annotation processes.
We developed a gamified method of acquiring annotated facial emotion data without an explicit labeling effort by humans.
We observed significant improvements in the facial emotion perception and expression skills of the players through repeated game play.
arXiv Detail & Related papers (2022-11-09T09:53:48Z) - MAFW: A Large-scale, Multi-modal, Compound Affective Database for
Dynamic Facial Expression Recognition in the Wild [56.61912265155151]
We propose MAFW, a large-scale compound affective database with 10,045 video-audio clips in the wild.
Each clip is annotated with a compound emotional category and a couple of sentences that describe the subjects' affective behaviors in the clip.
For the compound emotion annotation, each clip is categorized into one or more of the 11 widely-used emotions, i.e., anger, disgust, fear, happiness, neutral, sadness, surprise, contempt, anxiety, helplessness, and disappointment.
arXiv Detail & Related papers (2022-08-01T13:34:33Z) - Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers [57.1091606948826]
We propose a novel FER model, named Poker Face Vision Transformer or PF-ViT, to address these challenges.
PF-ViT aims to separate and recognize the disturbance-agnostic emotion from a static facial image via generating its corresponding poker face.
PF-ViT utilizes vanilla Vision Transformers, and its components are pre-trained as Masked Autoencoders on a large facial expression dataset.
arXiv Detail & Related papers (2022-07-22T13:39:06Z) - The Role of Facial Expressions and Emotion in ASL [4.686078698204789]
We find many relationships between emotionality and the face in American Sign Language.
A simple classifier can predict the broad emotional category of what someone is saying just by looking at the face.
arXiv Detail & Related papers (2022-01-19T23:11:48Z) - When Facial Expression Recognition Meets Few-Shot Learning: A Joint and
Alternate Learning Framework [60.51225419301642]
We propose an Emotion Guided Similarity Network (EGS-Net) to address the diversity of human emotions in practical scenarios.
EGS-Net consists of an emotion branch and a similarity branch, based on a two-stage learning framework.
Experimental results on both in-the-lab and in-the-wild compound expression datasets demonstrate the superiority of our proposed method against several state-of-the-art methods.
arXiv Detail & Related papers (2022-01-18T07:24:12Z) - AU-Expression Knowledge Constrained Representation Learning for Facial
Expression Recognition [79.8779790682205]
We propose an AU-Expression Knowledge Constrained Representation Learning (AUE-CRL) framework to learn the AU representations without AU annotations and adaptively use representations to facilitate facial expression recognition.
We conduct experiments on the challenging uncontrolled datasets to demonstrate the superiority of the proposed framework over current state-of-the-art methods.
arXiv Detail & Related papers (2020-12-29T03:42:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.