RAF-AU Database: In-the-Wild Facial Expressions with Subjective Emotion
Judgement and Objective AU Annotations
- URL: http://arxiv.org/abs/2008.05196v3
- Date: Mon, 28 Sep 2020 07:20:14 GMT
- Title: RAF-AU Database: In-the-Wild Facial Expressions with Subjective Emotion
Judgement and Objective AU Annotations
- Authors: Wenjing Yan, Shan Li, Chengtao Que, JiQuan Pei, Weihong Deng
- Abstract summary: We develop the RAF-AU database, which employs a sign-based (i.e., AUs) and a judgement-based (i.e., perceived emotion) approach to annotating blended facial expressions in the wild.
We also conduct a preliminary investigation of which key AUs contribute most to a perceived emotion, and the relationship between AUs and facial expressions.
- Score: 36.93475723886278
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Much of the work on automatic facial expression recognition relies on
databases containing a fixed set of emotion classes and their exaggerated
facial configurations (generally the six prototypical facial expressions),
based on Ekman's Basic Emotion Theory. However, recent studies have revealed
that facial expressions in everyday life are often blends of multiple basic
emotions, and the emotion labels for such in-the-wild facial expressions cannot
easily be derived from pre-defined AU patterns alone. How to analyze the action
units for such complex expressions is still an open question. To address this
issue, we developed the RAF-AU database, which combines a sign-based (i.e., AU)
and a judgement-based (i.e., perceived emotion) approach to annotating blended
facial expressions in the wild. We first reviewed the annotation methods used
in existing databases and identified crowdsourcing as a promising strategy for
labeling in-the-wild facial expressions. RAF-AU was then carefully annotated by
experienced coders, and we conducted a preliminary investigation of which key
AUs contribute most to a perceived emotion, as well as the relationship between
AUs and facial expressions. Finally, we provide a baseline for AU recognition
in RAF-AU using popular features and multi-label learning methods.
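
To make the kind of baseline described above concrete, here is a minimal sketch of AU recognition framed as multi-label classification, in the spirit of the "popular features and multi-label learning methods" the abstract mentions. It is illustrative only, not the authors' exact pipeline: the feature extractor, classifier choice, and array sizes are placeholder assumptions.

```python
# Illustrative multi-label AU recognition baseline (not the paper's exact
# setup). X stands in for per-image features (e.g. HOG or CNN embeddings);
# Y is a multi-hot matrix with one binary column per action unit.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
n_samples, n_features, n_aus = 1000, 512, 26              # hypothetical sizes
X = rng.normal(size=(n_samples, n_features))              # placeholder features
Y = (rng.random((n_samples, n_aus)) < 0.15).astype(int)   # placeholder AU labels

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Simplest multi-label formulation: one independent binary classifier per AU.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_tr, Y_tr)

Y_pred = clf.predict(X_te)
print("macro F1 over AUs:", f1_score(Y_te, Y_pred, average="macro"))
```

Treating each AU as an independent binary target ignores AU co-occurrence (e.g., AU6 and AU12 in smiles), which is exactly the structure that dedicated multi-label methods try to exploit.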
Related papers
- Interpretable Explainability in Facial Emotion Recognition and
Gamification for Data Collection [0.0]
Training facial emotion recognition models requires large sets of data and costly annotation processes.
We developed a gamified method of acquiring annotated facial emotion data without an explicit labeling effort by humans.
We observed significant improvements in the facial emotion perception and expression skills of the players through repeated game play.
arXiv Detail & Related papers (2022-11-09T09:53:48Z)
- MAFW: A Large-scale, Multi-modal, Compound Affective Database for
Dynamic Facial Expression Recognition in the Wild [56.61912265155151]
We propose MAFW, a large-scale compound affective database with 10,045 video-audio clips in the wild.
Each clip is annotated with a compound emotional category and a couple of sentences that describe the subjects' affective behaviors in the clip.
For the compound emotion annotation, each clip is categorized into one or more of the 11 widely-used emotions, i.e., anger, disgust, fear, happiness, neutral, sadness, surprise, contempt, anxiety, helplessness, and disappointment.
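
Because a clip may carry several of these 11 categories at once, compound labels are naturally represented as multi-hot vectors. A minimal sketch of such an encoding follows; the category list is taken from the summary above, but the encoding format itself is an assumption, not MAFW's released annotation format.

```python
# Multi-hot encoding for compound emotion labels; the 11 category names come
# from the MAFW summary above, but this format is illustrative, not official.
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness",
            "surprise", "contempt", "anxiety", "helplessness", "disappointment"]
INDEX = {name: i for i, name in enumerate(EMOTIONS)}

def encode(labels):
    """Map a compound annotation such as ['sadness', 'anxiety'] to a 0/1 vector."""
    vec = np.zeros(len(EMOTIONS), dtype=np.int8)
    for name in labels:
        vec[INDEX[name]] = 1
    return vec

print(encode(["sadness", "anxiety"]))  # 1s at the sadness and anxiety slots
```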
arXiv Detail & Related papers (2022-08-01T13:34:33Z)
- Emotion Separation and Recognition from a Facial Expression by
Generating the Poker Face with Vision Transformers [57.67586172996843]
We propose a novel FER model, called Poker Face Vision Transformer (PF-ViT), to separate and recognize the disturbance-agnostic emotion in a static facial image.
PF-ViT generates the corresponding poker face for an input image without requiring paired training images.
arXiv Detail & Related papers (2022-07-22T13:39:06Z)
- The Role of Facial Expressions and Emotion in ASL [4.686078698204789]
We find many relationships between emotionality and the face in American Sign Language.
A simple classifier can predict the broad emotional category of what someone is signing from the face alone.
arXiv Detail & Related papers (2022-01-19T23:11:48Z)
- When Facial Expression Recognition Meets Few-Shot Learning: A Joint and
Alternate Learning Framework [60.51225419301642]
We propose an Emotion Guided Similarity Network (EGS-Net) to address the diversity of human emotions in practical scenarios.
EGS-Net consists of an emotion branch and a similarity branch, based on a two-stage learning framework.
Experimental results on both in-the-lab and in-the-wild compound expression datasets demonstrate the superiority of our proposed method against several state-of-the-art methods.
arXiv Detail & Related papers (2022-01-18T07:24:12Z)
- Multi-Cue Adaptive Emotion Recognition Network [4.570705738465714]
We propose a new deep learning approach for emotion recognition based on adaptive multi-cues.
We compare the proposed approach with state-of-the-art approaches on the CAER-S dataset.
arXiv Detail & Related papers (2021-11-03T15:08:55Z)
- AU-Expression Knowledge Constrained Representation Learning for Facial
Expression Recognition [79.8779790682205]
We propose an AU-Expression Knowledge Constrained Representation Learning (AUE-CRL) framework that learns AU representations without AU annotations and adaptively uses them to facilitate facial expression recognition.
We conduct experiments on the challenging uncontrolled datasets to demonstrate the superiority of the proposed framework over current state-of-the-art methods.
arXiv Detail & Related papers (2020-12-29T03:42:04Z)
- Learning to Augment Expressions for Few-shot Fine-grained Facial
Expression Recognition [98.83578105374535]
We present a novel Fine-grained Facial Expression Database - F2ED.
It includes more than 200k images with 54 facial expressions from 119 persons.
Since uneven data distribution and a lack of samples are common in real-world scenarios, we evaluate several few-shot expression learning tasks.
We propose a unified task-driven framework - Compositional Generative Adversarial Network (Comp-GAN) learning to synthesize facial images.
arXiv Detail & Related papers (2020-01-17T03:26:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.