Facial Expression Editing with Continuous Emotion Labels
- URL: http://arxiv.org/abs/2006.12210v1
- Date: Mon, 22 Jun 2020 13:03:02 GMT
- Title: Facial Expression Editing with Continuous Emotion Labels
- Authors: Alexandra Lindt, Pablo Barros, Henrique Siqueira and Stefan Wermter
- Abstract summary: Deep generative models have achieved impressive results in the field of automated facial expression editing.
We propose a model that can be used to manipulate facial expressions in facial images according to continuous two-dimensional emotion labels.
- Score: 76.36392210528105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep generative models have achieved impressive results in the field of automated facial expression editing. However, the approaches presented so far presume a discrete representation of human emotions and are therefore limited in the modelling of non-discrete emotional expressions. To overcome this limitation, we explore how continuous emotion representations can be used to control automated expression editing. We propose a deep generative model that can be used to manipulate facial expressions in facial images according to continuous two-dimensional emotion labels. One dimension represents an emotion's valence, the other represents its degree of arousal. We demonstrate the functionality of our model with a quantitative analysis using classifier networks as well as with a qualitative analysis.
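The abstract does not specify the architecture, but the central idea of conditioning an image-to-image generator on a continuous two-dimensional (valence, arousal) label can be sketched as follows. This is a minimal illustrative PyTorch sketch under assumed design choices (class name, layer sizes, and label injection via channel-wise concatenation are all assumptions), not the authors' implementation.

```python
import torch
import torch.nn as nn

class EmotionConditionedGenerator(nn.Module):
    """Minimal sketch: edit a face image conditioned on a continuous
    (valence, arousal) label in [-1, 1]^2. Architecture details are
    illustrative assumptions, not the paper's actual model."""

    def __init__(self, img_channels: int = 3, base: int = 64):
        super().__init__()
        # Encoder: image plus 2 broadcast label channels -> feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels + 2, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: feature map -> edited image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, img_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, img: torch.Tensor, emotion: torch.Tensor) -> torch.Tensor:
        # emotion: (batch, 2) = (valence, arousal); broadcast to spatial maps
        b, _, h, w = img.shape
        label_maps = emotion.view(b, 2, 1, 1).expand(b, 2, h, w)
        x = torch.cat([img, label_maps], dim=1)
        return self.decoder(self.encoder(x))

# Usage: push a face toward high valence and moderate arousal.
gen = EmotionConditionedGenerator()
face = torch.randn(1, 3, 128, 128)    # placeholder input image
target = torch.tensor([[0.8, 0.3]])   # (valence, arousal)
edited = gen(face, target)            # (1, 3, 128, 128)
```

Because the labels are continuous, intermediate expressions can be produced by simply interpolating the (valence, arousal) vector, which is what distinguishes this setup from models conditioned on discrete emotion categories.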
Related papers
- Emotional Images: Assessing Emotions in Images and Potential Biases in Generative Models [0.0]
This paper examines potential biases and inconsistencies in emotional evocation of images produced by generative artificial intelligence (AI) models.
We compare the emotions evoked by AI-produced images to the emotions evoked by the prompts used to create them.
Findings indicate that AI-generated images frequently lean toward negative emotional content, regardless of the original prompt.
arXiv Detail & Related papers (2024-11-08T21:42:50Z)
- Towards Localized Fine-Grained Control for Facial Expression Generation [54.82883891478555]
Humans, particularly their faces, are central to content generation due to their ability to convey rich expressions and intent.
Current generative models mostly generate flat neutral expressions and characterless smiles without authenticity.
We propose the use of AUs (action units) for facial expression control in face generation.
arXiv Detail & Related papers (2024-07-25T18:29:48Z)
- A Unified and Interpretable Emotion Representation and Expression Generation [38.321248253111776]
We propose an interpretable and unified emotion model, referred to as C2A2.
We show that our generated images are rich and capture subtle expressions.
arXiv Detail & Related papers (2024-04-01T17:03:29Z)
- EmoTalker: Emotionally Editable Talking Face Generation via Diffusion Model [39.14430238946951]
EmoTalker is an emotionally editable portrait animation approach based on a diffusion model.
An Emotion Intensity Block is introduced to analyze fine-grained emotions and their strengths derived from prompts.
Experiments show the effectiveness of EmoTalker in generating high-quality, emotionally customizable facial expressions.
arXiv Detail & Related papers (2024-01-16T02:02:44Z)
- Fine-Grained Emotion Prediction by Modeling Emotion Definitions [26.098917459551167]
We propose a new framework for fine-grained emotion prediction in text through emotion definition modeling.
Our models outperform the existing state-of-the-art on the fine-grained emotion dataset GoEmotions.
arXiv Detail & Related papers (2021-07-26T12:11:18Z)
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and achieves state-of-the-art results in classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of facial muscle movements.
We determine whether there are time-related differences in expressions among emotional groups using a functional F-test.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
- LEED: Label-Free Expression Editing via Disentanglement [57.09545215087179]
The LEED framework is capable of editing the expression of both frontal and profile facial images without requiring any expression label.
Two novel losses are designed for optimal expression disentanglement and consistent synthesis.
arXiv Detail & Related papers (2020-07-17T13:36:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides (including all information) and is not responsible for any consequences arising from its use.