The Role of Facial Expressions and Emotion in ASL
- URL: http://arxiv.org/abs/2201.07906v1
- Date: Wed, 19 Jan 2022 23:11:48 GMT
- Title: The Role of Facial Expressions and Emotion in ASL
- Authors: Lee Kezar, Pei Zhou
- Abstract summary: We find many relationships between emotionality and the face in American Sign Language, and show that a simple classifier can predict what someone is saying, in terms of broad emotional categories, from the face alone.
- Score: 4.686078698204789
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is little prior work on quantifying the relationships between facial
expressions and emotionality in American Sign Language. In this final report,
we provide two methods for studying these relationships through probability and
prediction. Using a large corpus of natural signing manually annotated with
facial features paired with lexical emotion datasets, we find that there exist
many relationships between emotionality and the face, and that a simple
classifier can predict what someone is saying in terms of broad emotional
categories only by looking at the face.
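
As a rough illustration of the prediction setup, the sketch below trains a simple classifier on binary facial-feature vectors paired with broad emotion labels. The feature names, label set, and synthetic data are placeholders, not the paper's actual corpus or annotation scheme.

```python
# A minimal sketch of the paper's prediction setup, assuming annotations
# arrive as binary facial-feature vectors. Feature names, emotion labels,
# and the synthetic data are placeholders, not the actual corpus.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

FACE_FEATURES = ["brows_raised", "brows_furrowed", "eyes_wide",
                 "eyes_squinted", "mouth_open", "lips_pursed", "head_tilt"]
EMOTIONS = ["joy", "anger", "sadness", "surprise"]

# Stand-in for the annotated corpus: one binary facial-feature vector per
# signed utterance, labeled with a broad emotion category.
X = rng.integers(0, 2, size=(500, len(FACE_FEATURES)))
y = rng.choice(EMOTIONS, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```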
Related papers
- Face Emotion Recognization Using Dataset Augmentation Based on Neural Network [0.0]
Facial expressions are among the most direct external indications of a person's feelings and emotions.
They play an important role in coordinating interpersonal relationships.
As a branch of sentiment analysis, facial expression recognition offers broad application prospects.
arXiv Detail & Related papers (2022-10-23T10:21:45Z)
- When Facial Expression Recognition Meets Few-Shot Learning: A Joint and Alternate Learning Framework [60.51225419301642]
We propose an Emotion Guided Similarity Network (EGS-Net) to address the diversity of human emotions in practical scenarios.
EGS-Net consists of an emotion branch and a similarity branch, based on a two-stage learning framework.
Experimental results on both in-the-lab and in-the-wild compound expression datasets demonstrate the superiority of our proposed method against several state-of-the-art methods.
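
A minimal structural sketch of a two-branch design in the spirit of EGS-Net follows: a shared encoder feeds an emotion-classification branch and a similarity (embedding) branch. The backbone, layer sizes, and heads are assumptions, and the paper's two-stage joint and alternate training is not reproduced here.

```python
# Structural sketch of a two-branch network: a shared encoder feeding an
# emotion branch and a similarity branch. Shapes are illustrative only.
import torch
import torch.nn as nn

class TwoBranchFER(nn.Module):
    def __init__(self, feat_dim=128, n_emotions=7):
        super().__init__()
        self.encoder = nn.Sequential(  # stand-in for a CNN backbone
            nn.Flatten(), nn.Linear(64 * 64, feat_dim), nn.ReLU())
        self.emotion_head = nn.Linear(feat_dim, n_emotions)  # emotion branch
        self.embed_head = nn.Linear(feat_dim, feat_dim)      # similarity branch

    def forward(self, x):
        h = self.encoder(x)
        logits = self.emotion_head(h)  # class scores for base emotions
        # normalized embeddings for few-shot similarity matching
        emb = nn.functional.normalize(self.embed_head(h), dim=-1)
        return logits, emb

model = TwoBranchFER()
logits, emb = model(torch.randn(4, 1, 64, 64))
print(logits.shape, emb.shape)  # torch.Size([4, 7]) torch.Size([4, 128])
```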
arXiv Detail & Related papers (2022-01-18T07:24:12Z)
- Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
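
The survey covers many fusion strategies; as one concrete fundamental, the sketch below shows decision-level (late) fusion, where per-modality class probabilities are combined with reliability weights. The weights and probabilities are invented for illustration.

```python
# Decision-level (late) fusion sketch: combine per-modality class
# probabilities with hand-picked reliability weights. All values invented.
import numpy as np

EMOTIONS = ["joy", "anger", "sadness", "neutral"]

def late_fusion(modality_probs: dict[str, np.ndarray],
                weights: dict[str, float]) -> str:
    total = sum(weights[m] * p for m, p in modality_probs.items())
    return EMOTIONS[int(np.argmax(total))]

probs = {
    "face":  np.array([0.6, 0.1, 0.1, 0.2]),
    "voice": np.array([0.3, 0.2, 0.1, 0.4]),
    "text":  np.array([0.5, 0.2, 0.2, 0.1]),
}
weights = {"face": 0.5, "voice": 0.2, "text": 0.3}
print(late_fusion(probs, weights))  # "joy"
```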
arXiv Detail & Related papers (2021-08-18T21:55:20Z)
- Emotion Recognition under Consideration of the Emotion Component Process Model [9.595357496779394]
We use the emotion component process model (CPM) by Scherer (2005) to explain emotion communication.
The CPM describes emotions as a coordinated process of several subcomponents arising in reaction to an event: the subjective feeling, the cognitive appraisal, the expression, a physiological bodily reaction, and a motivational action tendency.
We find that emotions on Twitter are predominantly expressed by event descriptions or subjective reports of the feeling, while in literature, authors prefer to describe what characters do, and leave the interpretation to the reader.
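
To make the five subcomponents concrete, the sketch below encodes them as an enumeration and tags sentences with a toy keyword matcher. The keyword lists are invented and are not the paper's annotation guidelines.

```python
# The five CPM subcomponents as an enumeration, with a toy keyword tagger
# to illustrate the annotation task. Keyword lists are invented.
from enum import Enum

class CPMComponent(Enum):
    FEELING = "subjective feeling"
    APPRAISAL = "cognitive appraisal"
    EXPRESSION = "expression"
    PHYSIOLOGY = "physiological bodily reaction"
    MOTIVATION = "motivational action tendency"

KEYWORDS = {
    CPMComponent.FEELING: ["feel", "felt", "happy", "sad"],
    CPMComponent.APPRAISAL: ["unfair", "unexpected", "deserved"],
    CPMComponent.EXPRESSION: ["smiled", "frowned", "shouted"],
    CPMComponent.PHYSIOLOGY: ["heart raced", "trembled", "sweating"],
    CPMComponent.MOTIVATION: ["wanted to run", "had to leave"],
}

def tag_components(text: str) -> list[CPMComponent]:
    text = text.lower()
    return [c for c, words in KEYWORDS.items()
            if any(w in text for w in words)]

print(tag_components("She smiled, but her heart raced and she wanted to run."))
# [CPMComponent.EXPRESSION, CPMComponent.PHYSIOLOGY, CPMComponent.MOTIVATION]
```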
arXiv Detail & Related papers (2021-07-27T15:53:25Z)
- A Circular-Structured Representation for Visual Emotion Distribution Learning [82.89776298753661]
We propose a well-grounded circular-structured representation that utilizes prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
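
The abstract summary does not enumerate the three attributes, so the sketch below takes one plausible reading: emotion type as an angle on the circle, intensity as vector length, and polarity as the hemisphere. Treat the layout as illustrative, not the paper's definition.

```python
# Sketch of a circular emotion representation: each emotion state becomes
# a 2-D vector in a unit circle. The attribute layout (type as angle,
# intensity as length, polarity as hemisphere) is an illustrative guess.
import math

# Hypothetical layout: positive emotions on the upper half-circle,
# negative emotions on the lower half.
EMOTION_ANGLES = {"joy": 60, "surprise": 120, "sadness": 240, "anger": 300}

def emotion_vector(emotion: str, intensity: float) -> tuple[float, float]:
    """Map (emotion type, intensity in [0, 1]) to a point in the circle."""
    theta = math.radians(EMOTION_ANGLES[emotion])
    return (intensity * math.cos(theta), intensity * math.sin(theta))

def polarity(vec: tuple[float, float]) -> str:
    return "positive" if vec[1] > 0 else "negative"

v = emotion_vector("joy", 0.8)
print(v, polarity(v))
```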
arXiv Detail & Related papers (2021-06-23T14:53:27Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of facial muscle movement.
Using a functional F-test, we determine whether there are time-related differences in expressions among emotional groups.
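
As a simplified stand-in for the functional F-test, the sketch below runs a pointwise one-way ANOVA at each time step of synthetic facial-movement curves from two emotional groups; a true functional test aggregates evidence over the whole curve rather than testing each time point separately.

```python
# Pointwise approximation of a functional F-test: at each time step,
# test whether the group means of (synthetic) facial-movement curves
# differ, then report where the groups diverge most.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
T = 50   # time points per video
n = 30   # videos per emotional group

# Synthetic landmark-displacement curves; the second group peaks later.
t = np.linspace(0, 1, T)
group_a = np.sin(np.pi * t) + rng.normal(0, 0.3, (n, T))
group_b = np.sin(np.pi * t**2) + rng.normal(0, 0.3, (n, T))

F = np.array([f_oneway(group_a[:, i], group_b[:, i]).statistic
              for i in range(T)])
print("time of largest group difference:", t[np.argmax(F)])
```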
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to the facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
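
The summary does not say how blindness is achieved; one standard recipe for removing an attribute from a representation (not necessarily this paper's method) is adversarial training with a gradient-reversal layer, sketched below.

```python
# Sketch of adversarial "blinding" via gradient reversal: an emotion probe
# is trained on the representation while reversed gradients push the
# encoder to discard expression information. Shapes and heads are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()

    @staticmethod
    def backward(ctx, grad):
        return -grad  # reverse the gradient flowing back into the encoder

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
id_head = nn.Linear(128, 100)  # identity recognition (information to keep)
emo_head = nn.Linear(128, 7)   # emotion probe (information to remove)

x = torch.randn(8, 1, 64, 64)
feat = encoder(x)
id_loss = F.cross_entropy(id_head(feat), torch.randint(0, 100, (8,)))
emo_loss = F.cross_entropy(emo_head(GradReverse.apply(feat)),
                           torch.randint(0, 7, (8,)))
(id_loss + emo_loss).backward()  # the encoder now maximizes the emotion loss
```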
arXiv Detail & Related papers (2020-09-18T09:24:10Z)
- RAF-AU Database: In-the-Wild Facial Expressions with Subjective Emotion Judgement and Objective AU Annotations [36.93475723886278]
We develop the RAF-AU database, which employs a sign-based (i.e., AU) and judgement-based (i.e., perceived emotion) approach to annotating blended facial expressions in the wild.
We also conduct a preliminary investigation of which key AUs contribute most to a perceived emotion, and the relationship between AUs and facial expressions.
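
The kind of AU-to-emotion analysis described can be approximated with simple co-occurrence statistics; the sketch below estimates, from invented annotations, which AUs appear most often with each perceived emotion.

```python
# Estimate which AUs co-occur most with each perceived emotion from
# per-image annotations. The annotation data here is invented.
from collections import Counter, defaultdict

annotations = [  # (active AUs, perceived emotion)
    ({"AU6", "AU12"}, "happiness"),
    ({"AU12"}, "happiness"),
    ({"AU4", "AU7"}, "anger"),
    ({"AU1", "AU4", "AU15"}, "sadness"),
    ({"AU6", "AU12", "AU25"}, "happiness"),
]

emotion_counts = Counter(e for _, e in annotations)
au_given_emotion = defaultdict(Counter)
for aus, emotion in annotations:
    au_given_emotion[emotion].update(aus)

for emotion, counter in au_given_emotion.items():
    top = [(au, round(c / emotion_counts[emotion], 2))
           for au, c in counter.most_common(2)]
    print(emotion, top)  # e.g. happiness [('AU12', 1.0), ('AU6', 0.67)]
```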
arXiv Detail & Related papers (2020-08-12T09:29:16Z)
- Comprehensive Facial Expression Synthesis using Human-Interpretable Language [33.11402372756348]
We propose a new facial expression synthesis model driven by language-based descriptions of facial expressions.
Our method can synthesize facial images with detailed expressions.
In addition, by effectively embedding language features onto facial features, our method can use individual words to control each part of the facial movement (see the sketch below).
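
As a toy illustration of word-level control (the paper learns this mapping with neural embeddings instead), the sketch below maps description words to deltas over named facial parameters in a blendshape-style rig; the vocabulary and parameters are invented.

```python
# Toy word-level control of facial movement: each description word maps
# to a delta over named facial parameters. Vocabulary is invented.
PARAMS = ["brow_raise", "eye_open", "mouth_corner_up", "jaw_drop"]

WORD_TO_DELTAS = {
    "raised":    {"brow_raise": 0.8},
    "smiling":   {"mouth_corner_up": 0.9},
    "surprised": {"brow_raise": 0.7, "eye_open": 0.8, "jaw_drop": 0.6},
}

def synthesize(description: str) -> dict[str, float]:
    face = {p: 0.0 for p in PARAMS}
    scale = 1.0
    for word in description.lower().split():
        if word == "slightly":   # modifier: damp the next word's effect
            scale = 0.5
            continue
        for p, v in WORD_TO_DELTAS.get(word, {}).items():
            face[p] = min(1.0, face[p] + scale * v)
        scale = 1.0
    return face

print(synthesize("slightly smiling"))  # mouth_corner_up: 0.45, rest 0.0
```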
arXiv Detail & Related papers (2020-07-16T07:28:25Z)
- Facial Expression Editing with Continuous Emotion Labels [76.36392210528105]
Deep generative models have achieved impressive results in the field of automated facial expression editing.
We propose a model that can be used to manipulate facial expressions in facial images according to continuous two-dimensional emotion labels.
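
The sketch below shows one way to condition an editing network on a continuous two-dimensional emotion label; reading the two axes as valence and arousal is an assumption, and the architecture is a stand-in, not the paper's model.

```python
# Conditioning an image-editing network on a continuous 2-D emotion label.
# The valence/arousal reading of the axes and the architecture are assumed.
import torch
import torch.nn as nn

class EmotionEditor(nn.Module):
    def __init__(self, img_dim=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh())

    def forward(self, img, emotion):
        # emotion: (batch, 2) continuous label, e.g. valence/arousal in [-1, 1]
        flat = img.flatten(1)
        out = self.net(torch.cat([flat, emotion], dim=1))
        return out.view_as(img)

editor = EmotionEditor()
img = torch.rand(4, 1, 64, 64)
target = torch.tensor([[0.9, 0.3]] * 4)  # high valence, mild arousal
edited = editor(img, target)
print(edited.shape)  # torch.Size([4, 1, 64, 64])
```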
arXiv Detail & Related papers (2020-06-22T13:03:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.