Explore the Expression: Facial Expression Generation using Auxiliary
Classifier Generative Adversarial Network
- URL: http://arxiv.org/abs/2201.09061v1
- Date: Sat, 22 Jan 2022 14:37:13 GMT
- Authors: J. Rafid Siddiqui
- Abstract summary: We propose a generative model architecture which robustly generates a set of facial expressions for multiple character identities.
We explore the possibilities of generating complex expressions by combining the simple ones.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Facial expressions are a form of non-verbal communication that humans
perform seamlessly for the meaningful transfer of information. Most of the
literature addresses the facial expression recognition aspect; however, with the
advent of generative models, it has become possible to explore the affect space
beyond mere classification of a fixed set of expressions. In this article, we
propose a generative model architecture that robustly generates a set of facial
expressions for multiple character identities and explores the possibility of
generating complex expressions by combining simple ones.
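The architecture in the title is an Auxiliary Classifier GAN (AC-GAN), whose defining feature is a discriminator with two heads: a source head that scores real vs. fake, and an auxiliary head that classifies the conditioning label (here, the expression). The sketch below illustrates that two-head structure and the combined loss with toy dense layers in NumPy; all sizes, layer shapes, and names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper).
LATENT_DIM = 16   # noise vector z
N_CLASSES = 7     # e.g. a set of basic facial expressions
IMG_DIM = 64      # flattened toy "image"

def one_hot(labels, n):
    out = np.zeros((labels.size, n))
    out[np.arange(labels.size), labels] = 1.0
    return out

# Generator: maps (z, expression label) -> image. A single linear layer
# stands in for the convolutional stack a real AC-GAN would use.
W_g = rng.normal(0, 0.1, (LATENT_DIM + N_CLASSES, IMG_DIM))

def generator(z, labels):
    x = np.concatenate([z, one_hot(labels, N_CLASSES)], axis=1)
    return np.tanh(x @ W_g)

# Discriminator: shared trunk with the two AC-GAN heads:
#   - source head: probability the image is real
#   - auxiliary head: expression-class logits
W_trunk = rng.normal(0, 0.1, (IMG_DIM, 32))
w_src = rng.normal(0, 0.1, (32, 1))
W_aux = rng.normal(0, 0.1, (32, N_CLASSES))

def discriminator(imgs):
    h = np.maximum(0.0, imgs @ W_trunk)          # ReLU trunk
    p_real = 1.0 / (1.0 + np.exp(-(h @ w_src)))  # sigmoid source head
    class_logits = h @ W_aux                     # auxiliary classifier head
    return p_real, class_logits

batch = 8
z = rng.normal(size=(batch, LATENT_DIM))
labels = rng.integers(0, N_CLASSES, size=batch)

fake = generator(z, labels)
p_real, logits = discriminator(fake)

# The AC-GAN objective combines the adversarial (source) loss with an
# auxiliary cross-entropy loss over the conditioning labels.
adv_loss = -np.mean(np.log(1.0 - p_real + 1e-8))          # discriminator vs. fakes
log_probs = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
aux_loss = -np.mean(log_probs[np.arange(batch), labels])  # classification term

print(fake.shape, p_real.shape, logits.shape)
```

Because the auxiliary head is trained on both real and generated images, the generator is pushed to produce samples that are not only realistic but also recognizably belong to the requested expression class, which is what makes label-conditioned expression generation work in this family of models.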
Related papers
- Towards a Simultaneous and Granular Identity-Expression Control in Personalized Face Generation [34.72612800373437]
In human-centric content generation, pre-trained text-to-image models struggle to produce the portrait images users actually want.
We propose a novel multi-modal face generation framework, capable of simultaneous identity-expression control and more fine-grained expression synthesis.
arXiv Detail & Related papers (2024-01-02T13:28:39Z) - ExpCLIP: Bridging Text and Facial Expressions via Semantic Alignment [5.516575655881858]
We introduce a technique that enables the control of arbitrary styles by leveraging natural language as emotion prompts.
Our method accomplishes expressive facial animation generation and offers enhanced flexibility in effectively conveying the desired style.
arXiv Detail & Related papers (2023-08-28T09:35:13Z) - GaFET: Learning Geometry-aware Facial Expression Translation from
In-The-Wild Images [55.431697263581626]
We introduce a novel Geometry-aware Facial Expression Translation framework, which is based on parametric 3D facial representations and can stably decouple expressions.
We achieve higher-quality and more accurate facial expression transfer results compared to state-of-the-art methods, and demonstrate applicability of various poses and complex textures.
arXiv Detail & Related papers (2023-08-07T09:03:35Z) - Face Generation from Textual Features using Conditionally Trained Inputs
to Generative Adversarial Networks [0.0]
We use the power of state-of-the-art natural language processing models to convert face descriptions into learnable latent vectors.
The same approach can be tailored to generate any image based on fine grained textual features.
arXiv Detail & Related papers (2023-01-22T13:27:12Z) - Emotion Separation and Recognition from a Facial Expression by
Generating the Poker Face with Vision Transformers [57.67586172996843]
We propose a novel FER model, called Poker Face Vision Transformer or PF-ViT, to separate and recognize the disturbance-agnostic emotion from a static facial image.
PF-ViT generates its corresponding poker face without the need for paired images.
arXiv Detail & Related papers (2022-07-22T13:39:06Z) - LEED: Label-Free Expression Editing via Disentanglement [57.09545215087179]
The LEED framework is capable of editing the expression of both frontal and profile facial images without requiring any expression label.
Two novel losses are designed for optimal expression disentanglement and consistent synthesis.
arXiv Detail & Related papers (2020-07-17T13:36:15Z) - Comprehensive Facial Expression Synthesis using Human-Interpretable
Language [33.11402372756348]
We propose a new facial expression synthesis model driven by language-based facial expression descriptions.
Our method can synthesize facial images with detailed expressions.
In addition, by effectively embedding language features onto facial features, our method can control individual words to handle each part of facial movement.
arXiv Detail & Related papers (2020-07-16T07:28:25Z) - Facial Expression Editing with Continuous Emotion Labels [76.36392210528105]
Deep generative models have achieved impressive results in the field of automated facial expression editing.
We propose a model that can be used to manipulate facial expressions in facial images according to continuous two-dimensional emotion labels.
arXiv Detail & Related papers (2020-06-22T13:03:02Z) - InterFaceGAN: Interpreting the Disentangled Face Representation Learned
by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection.
arXiv Detail & Related papers (2020-05-18T18:01:22Z) - Learning to Augment Expressions for Few-shot Fine-grained Facial
Expression Recognition [98.83578105374535]
We present a novel Fine-grained Facial Expression Database - F2ED.
It includes more than 200k images with 54 facial expressions from 119 persons.
Considering that uneven data distribution and a lack of samples are common in real-world scenarios, we evaluate several few-shot expression learning tasks.
We propose a unified task-driven framework - Compositional Generative Adversarial Network (Comp-GAN) learning to synthesize facial images.
arXiv Detail & Related papers (2020-01-17T03:26:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.