Explore the Expression: Facial Expression Generation using Auxiliary
Classifier Generative Adversarial Network
- URL: http://arxiv.org/abs/2201.09061v1
- Date: Sat, 22 Jan 2022 14:37:13 GMT
- Authors: J. Rafid Siddiqui
- Abstract summary: We propose a generative model architecture which robustly generates a set of facial expressions for multiple character identities.
We explore the possibilities of generating complex expressions by combining the simple ones.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Facial expressions are a form of non-verbal communication that humans perform
seamlessly for meaningful transfer of information. Most of the literature
addresses the facial expression recognition aspect; however, with the advent of
Generative Models, it has become possible to explore the affect space in
addition to mere classification of a set of expressions. In this article, we
propose a generative model architecture which robustly generates a set of
facial expressions for multiple character identities and explores the
possibilities of generating complex expressions by combining the simple ones.
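The auxiliary classifier GAN named in the title pairs a class-conditioned generator with a discriminator that carries an auxiliary classification head, so the discriminator both scores realism and predicts the expression class. A minimal sketch of that setup follows (PyTorch); the layer sizes, class count, and flat image dimension are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

NOISE_DIM, NUM_CLASSES, IMG_DIM = 16, 8, 64  # illustrative sizes, not from the paper

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, NOISE_DIM)
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM * 2, 128), nn.ReLU(),
            nn.Linear(128, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z, labels):
        # condition the noise on the target expression class
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(IMG_DIM, 128), nn.ReLU())
        self.adv = nn.Linear(128, 1)            # real/fake score
        self.cls = nn.Linear(128, NUM_CLASSES)  # auxiliary classifier head

    def forward(self, x):
        h = self.body(x)
        return self.adv(h), self.cls(h)

# one sketched discriminator step on fake samples
G, D = Generator(), Discriminator()
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
z = torch.randn(4, NOISE_DIM)
labels = torch.randint(0, NUM_CLASSES, (4,))
fake = G(z, labels)
adv_out, cls_out = D(fake)
# AC-GAN objective: adversarial term plus auxiliary classification term
d_loss = bce(adv_out, torch.zeros_like(adv_out)) + ce(cls_out, labels)
```

Because the generator is conditioned on a class embedding, interpolating or mixing class conditions is one route to the "complex expressions from simple ones" idea the abstract describes.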
Related papers
- Knowledge-Enhanced Facial Expression Recognition with Emotional-to-Neutral Transformation [66.53435569574135]
Existing facial expression recognition methods typically fine-tune a pre-trained visual encoder using discrete labels.
We observe that the rich knowledge in text embeddings, generated by vision-language models, is a promising alternative for learning discriminative facial expression representations.
We propose a novel knowledge-enhanced FER method with an emotional-to-neutral transformation.
arXiv Detail & Related papers (2024-09-13T07:28:57Z)
- Towards Localized Fine-Grained Control for Facial Expression Generation [54.82883891478555]
Humans, particularly their faces, are central to content generation due to their ability to convey rich expressions and intent.
Current generative models mostly generate flat neutral expressions and characterless smiles without authenticity.
We propose the use of AUs (action units) for facial expression control in face generation.
arXiv Detail & Related papers (2024-07-25T18:29:48Z)
- Towards a Simultaneous and Granular Identity-Expression Control in Personalized Face Generation [34.72612800373437]
In human-centric content generation, pre-trained text-to-image models struggle to produce the portrait images users want.
We propose a novel multi-modal face generation framework, capable of simultaneous identity-expression control and more fine-grained expression synthesis.
arXiv Detail & Related papers (2024-01-02T13:28:39Z)
- GaFET: Learning Geometry-aware Facial Expression Translation from In-The-Wild Images [55.431697263581626]
We introduce a novel Geometry-aware Facial Expression Translation framework, which is based on parametric 3D facial representations and can stably decouple expressions.
We achieve higher-quality and more accurate facial expression transfer results than state-of-the-art methods, and demonstrate applicability to various poses and complex textures.
arXiv Detail & Related papers (2023-08-07T09:03:35Z)
- Face Generation from Textual Features using Conditionally Trained Inputs to Generative Adversarial Networks [0.0]
We use the power of state-of-the-art natural language processing models to convert face descriptions into learnable latent vectors.
The same approach can be tailored to generate any image based on fine grained textual features.
arXiv Detail & Related papers (2023-01-22T13:27:12Z)
- Comprehensive Facial Expression Synthesis using Human-Interpretable Language [33.11402372756348]
We propose a new facial expression synthesis model from language-based facial expression description.
Our method can synthesize the facial image with detailed expressions.
In addition, by effectively embedding language features onto facial features, our method can control individual words to manipulate each part of the facial movement.
arXiv Detail & Related papers (2020-07-16T07:28:25Z)
- Facial Expression Editing with Continuous Emotion Labels [76.36392210528105]
Deep generative models have achieved impressive results in the field of automated facial expression editing.
We propose a model that can be used to manipulate facial expressions in facial images according to continuous two-dimensional emotion labels.
arXiv Detail & Related papers (2020-06-22T13:03:02Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection.
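The subspace-projection idea mentioned above can be sketched simply: if two semantics (say, expression and another attribute) each correspond to a linear direction in latent space, projecting one direction onto the orthogonal complement of the other yields an edit direction that leaves the second attribute unchanged. A NumPy sketch, where the "smile" and "age" direction vectors are hypothetical placeholders:

```python
import numpy as np

def project_out(primal, condition):
    """Remove the component of `primal` along `condition`."""
    condition = condition / np.linalg.norm(condition)
    return primal - np.dot(primal, condition) * condition

smile = np.array([1.0, 1.0, 0.0])  # hypothetical "smile" boundary normal
age = np.array([0.0, 1.0, 0.0])    # hypothetical "age" boundary normal

smile_only = project_out(smile, age)
# the projected direction is now orthogonal to the age direction,
# so moving along it should not change the age semantic
assert abs(np.dot(smile_only, age)) < 1e-9
```

Moving a latent code along `smile_only` then edits the expression semantic while, to first order, holding the other attribute fixed.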
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
- Learning to Augment Expressions for Few-shot Fine-grained Facial Expression Recognition [98.83578105374535]
We present a novel Fine-grained Facial Expression Database - F2ED.
It includes more than 200k images with 54 facial expressions from 119 persons.
Considering that uneven data distribution and a lack of samples are common in real-world scenarios, we evaluate several few-shot expression learning tasks.
We propose a unified task-driven framework - Compositional Generative Adversarial Network (Comp-GAN) learning to synthesize facial images.
arXiv Detail & Related papers (2020-01-17T03:26:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.