Comprehensive Facial Expression Synthesis using Human-Interpretable
Language
- URL: http://arxiv.org/abs/2007.08154v1
- Date: Thu, 16 Jul 2020 07:28:25 GMT
- Title: Comprehensive Facial Expression Synthesis using Human-Interpretable
Language
- Authors: Joanna Hong, Jung Uk Kim, Sangmin Lee, and Yong Man Ro
- Abstract summary: We propose a new facial expression synthesis model driven by language-based facial expression descriptions.
Our method can synthesize facial images with detailed expressions.
In addition, by effectively embedding language features onto facial features, our method can control individual words to handle each part of the facial movement.
- Score: 33.11402372756348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in facial expression synthesis have shown promising results
using diverse expression representations, including facial action units. For
elaborate facial expression synthesis, facial action units need to be
represented intuitively for human comprehension, rather than as a numeric
categorization. To address this issue, we adopt a human-friendly approach: the
use of natural language, which helps humans grasp conceptual contexts. In this
paper, therefore, we propose a new facial expression synthesis model driven by
language-based facial expression descriptions. Our method can synthesize facial
images with detailed expressions. In addition, by effectively embedding
language features onto facial features, our method can control individual words
to handle each part of the facial movement. Extensive qualitative and
quantitative evaluations were conducted to verify the effectiveness of the
natural language descriptions.
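As a minimal illustrative sketch (hypothetical, not the authors' published architecture), per-word control can be thought of as an attention step in which each spatial location of a facial feature map attends over the word embeddings of the description, so that different words modulate different facial regions:

```python
import numpy as np

def word_attention_conditioning(word_emb, face_feat):
    """Modulate a facial feature map with per-word attention (toy sketch).

    word_emb:  (T, d)   one embedding per word in the description
    face_feat: (H*W, d) flattened spatial facial features
    Returns conditioned features of shape (H*W, d).
    """
    # Each spatial location scores every word of the description.
    scores = face_feat @ word_emb.T                  # (H*W, T)
    scores -= scores.max(axis=1, keepdims=True)      # numeric stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # softmax over words
    word_context = attn @ word_emb                   # (H*W, d)
    return face_feat + word_context                  # residual conditioning

rng = np.random.default_rng(0)
words = rng.standard_normal((5, 16))    # e.g. a 5-word expression description
feats = rng.standard_normal((64, 16))   # an 8x8 feature map, flattened
out = word_attention_conditioning(words, feats)
print(out.shape)  # (64, 16)
```

Under this assumed design, changing one word's embedding shifts only the attention mass directed at it, which is one plausible mechanism for word-level control of individual facial movements.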
Related papers
- Speech2UnifiedExpressions: Synchronous Synthesis of Co-Speech Affective Face and Body Expressions from Affordable Inputs [67.27840327499625]
We present a multimodal learning-based method to simultaneously synthesize co-speech facial expressions and upper-body gestures for digital characters.
Our approach learns from sparse face landmarks and upper-body joints, estimated directly from video data, to generate plausible emotive character motions.
arXiv Detail & Related papers (2024-06-26T04:53:11Z) - GaFET: Learning Geometry-aware Facial Expression Translation from
In-The-Wild Images [55.431697263581626]
We introduce a novel Geometry-aware Facial Expression Translation framework, which is based on parametric 3D facial representations and can stably decouple expressions.
We achieve higher-quality and more accurate facial expression transfer results compared to state-of-the-art methods, and demonstrate applicability to various poses and complex textures.
arXiv Detail & Related papers (2023-08-07T09:03:35Z) - Emotion Recognition for Challenged People Facial Appearance in Social
using Neural Network [0.0]
A CNN uses facial expression features to categorize the acquired picture into different emotion categories.
This paper proposes an approach for face- and illumination-invariant recognition of facial expressions from images.
arXiv Detail & Related papers (2023-05-11T14:38:27Z) - Imitator: Personalized Speech-driven 3D Facial Animation [63.57811510502906]
State-of-the-art methods deform the face topology of the target actor to sync the input audio without considering the identity-specific speaking style and facial idiosyncrasies of the target actor.
We present Imitator, a speech-driven facial expression synthesis method, which learns identity-specific details from a short input video.
We show that our approach produces temporally coherent facial expressions from input audio while preserving the speaking style of the target actors.
arXiv Detail & Related papers (2022-12-30T19:00:02Z) - Interpretable Explainability in Facial Emotion Recognition and
Gamification for Data Collection [0.0]
Training facial emotion recognition models requires large sets of data and costly annotation processes.
We developed a gamified method of acquiring annotated facial emotion data without an explicit labeling effort by humans.
We observed significant improvements in the facial emotion perception and expression skills of the players through repeated game play.
arXiv Detail & Related papers (2022-11-09T09:53:48Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six different datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - Explore the Expression: Facial Expression Generation using Auxiliary
Classifier Generative Adversarial Network [0.0]
We propose a generative model architecture which robustly generates a set of facial expressions for multiple character identities.
We explore the possibilities of generating complex expressions by combining the simple ones.
arXiv Detail & Related papers (2022-01-22T14:37:13Z) - Learning Facial Representations from the Cycle-consistency of Face [23.23272327438177]
We introduce cycle-consistency in facial characteristics as a free supervisory signal to learn facial representations from unlabeled facial images.
The learning is realized by superimposing the facial motion cycle-consistency and identity cycle-consistency constraints.
Our approach is competitive with those of existing methods, demonstrating the rich and unique information embedded in the disentangled representations.
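The core idea of cycle-consistency as free supervision can be sketched in a few lines (a hypothetical toy example, not the paper's model): map features forward, map them back, and penalize any deviation from the starting point, so no labels are needed:

```python
import numpy as np

def forward_map(x, w):
    """Toy 'motion' transform standing in for a learned forward mapping."""
    return x @ w

def backward_map(y, w):
    """Toy inverse transform that closes the cycle."""
    return y @ np.linalg.inv(w)

def cycle_loss(x, w):
    """Mean squared error between the input and its round-trip reconstruction."""
    x_rec = backward_map(forward_map(x, w), w)
    return np.mean((x - x_rec) ** 2)

rng = np.random.default_rng(1)
x = rng.standard_normal((10, 4))            # unlabeled feature vectors
w = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # well-conditioned transform
loss = cycle_loss(x, w)
```

With an exact inverse the loss is near zero; in a learned setting both maps are networks, and minimizing this loss supervises them without any labels.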
arXiv Detail & Related papers (2021-08-07T11:30:35Z) - LandmarkGAN: Synthesizing Faces from Landmarks [43.53204737135101]
We describe a new method, namely LandmarkGAN, to synthesize faces based on facial landmarks as input.
Our method is able to transform a set of facial landmarks into new faces of different subjects, while retaining the same facial expression and orientation.
arXiv Detail & Related papers (2020-10-31T13:27:21Z) - LEED: Label-Free Expression Editing via Disentanglement [57.09545215087179]
The LEED framework is capable of editing the expression of both frontal and profile facial images without requiring any expression labels.
Two novel losses are designed for optimal expression disentanglement and consistent synthesis.
arXiv Detail & Related papers (2020-07-17T13:36:15Z) - Learning to Augment Expressions for Few-shot Fine-grained Facial
Expression Recognition [98.83578105374535]
We present a novel Fine-grained Facial Expression Database - F2ED.
It includes more than 200k images with 54 facial expressions from 119 persons.
Considering that uneven data distribution and a lack of samples are common in real-world scenarios, we evaluate several few-shot expression learning tasks.
We propose a unified task-driven framework - Compositional Generative Adversarial Network (Comp-GAN) learning to synthesize facial images.
arXiv Detail & Related papers (2020-01-17T03:26:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.