Face Generation from Textual Features using Conditionally Trained Inputs to Generative Adversarial Networks
- URL: http://arxiv.org/abs/2301.09123v1
- Date: Sun, 22 Jan 2023 13:27:12 GMT
- Title: Face Generation from Textual Features using Conditionally Trained Inputs to Generative Adversarial Networks
- Authors: Sandeep Shinde, Tejas Pradhan, Aniket Ghorpade, Mihir Tale
- Abstract summary: We use the power of state-of-the-art natural language processing models to convert face descriptions into learnable latent vectors.
The same approach can be tailored to generate any image based on fine-grained textual features.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative networks have proved extremely effective for image restoration and reconstruction in recent years. Generating faces from textual descriptions is one application where the power of generative algorithms can be put to use; the ability to generate faces is useful for applications such as finding missing persons or identifying criminals. This paper discusses a novel approach to generating human faces from a textual description of their facial features. We use state-of-the-art natural language processing models to convert face descriptions into learnable latent vectors, which are then fed to a generative adversarial network that generates faces corresponding to those features. While this paper focuses only on high-level descriptions of faces, the same approach can be tailored to generate any image from fine-grained textual features.
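The pipeline the abstract describes (text description -> sentence embedding -> latent vector -> GAN generator) can be sketched in a few lines. The following is a minimal PyTorch illustration under assumed choices, not the authors' implementation: the 768-dimensional sentence embedding stands in for the pooled output of an NLP model such as BERT, and the layer sizes and 64x64 output resolution are arbitrary.

```python
# Minimal sketch of a text-to-face pipeline: description embedding -> latent -> face.
# All dimensions here are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class TextToLatent(nn.Module):
    """Projects a sentence embedding (e.g. from BERT) into a GAN latent vector."""
    def __init__(self, text_dim=768, latent_dim=128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(text_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, text_emb):
        return self.proj(text_emb)

class Generator(nn.Module):
    """DCGAN-style generator mapping a latent vector to a 64x64 RGB image."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        # Reshape the latent vector into a 1x1 feature map before upsampling.
        return self.net(z.view(z.size(0), -1, 1, 1))

# Stand-in for a real sentence embedding; in practice this would be the pooled
# output of a language model run on the textual face description.
text_emb = torch.randn(1, 768)
latent = TextToLatent()(text_emb)   # description embedding -> learnable latent vector
face = Generator()(latent)          # latent vector -> generated face tensor
print(face.shape)                   # torch.Size([1, 3, 64, 64])
```

During training, the latent vector produced from the description would serve as the generator's conditional input while a discriminator scores outputs against real face images; the sketch above shows only the forward path.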
Related papers
- G2Face: High-Fidelity Reversible Face Anonymization via Generative and Geometric Priors [71.69161292330504]
Reversible face anonymization seeks to replace sensitive identity information in facial images with synthesized alternatives.
This paper introduces G$^2$Face, which leverages both generative and geometric priors to enhance identity manipulation.
Our method outperforms existing state-of-the-art techniques in face anonymization and recovery, while preserving high data utility.
arXiv Detail & Related papers (2024-08-18T12:36:47Z)
- Towards Localized Fine-Grained Control for Facial Expression Generation [54.82883891478555]
Humans, particularly their faces, are central to content generation due to their ability to convey rich expressions and intent.
Current generative models mostly produce flat, neutral expressions or characterless smiles that lack authenticity.
We propose the use of AUs (action units) for facial expression control in face generation.
arXiv Detail & Related papers (2024-07-25T18:29:48Z)
- When StyleGAN Meets Stable Diffusion: a $\mathscr{W}_+$ Adapter for Personalized Image Generation [60.305112612629465]
Text-to-image diffusion models have excelled in producing diverse, high-quality, and photo-realistic images.
We present a novel use of the extended StyleGAN embedding space $\mathcal{W}_+$ to achieve enhanced identity preservation and disentanglement for diffusion models.
Our method adeptly generates personalized text-to-image outputs that are not only compatible with prompt descriptions but also amenable to common StyleGAN editing directions.
arXiv Detail & Related papers (2023-11-29T09:05:14Z)
- DreamIdentity: Improved Editability for Efficient Face-identity Preserved Image Generation [69.16517915592063]
We propose a novel face-identity encoder to learn an accurate representation of human faces.
We also propose self-augmented editability learning to enhance the editability of models.
Our methods can generate identity-preserved images under different scenes at a much faster speed.
arXiv Detail & Related papers (2023-07-01T11:01:17Z)
- Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z)
- StyleT2F: Generating Human Faces from Textual Description Using StyleGAN2 [0.0]
StyleT2F is a method of controlling the output of StyleGAN2 using text.
Our method proves to capture the required features correctly and shows consistency between the input text and the output images.
arXiv Detail & Related papers (2022-04-17T04:51:30Z)
- Semantic Text-to-Face GAN -ST^2FG [0.7919810878571298]
We present a novel approach to generate facial images from semantic text descriptions.
For security and criminal identification, a GAN-based system that works like a sketch artist would be incredibly useful.
arXiv Detail & Related papers (2021-07-22T15:42:25Z)
- One-shot Face Reenactment Using Appearance Adaptive Normalization [30.615671641713945]
The paper proposes a novel generative adversarial network for one-shot face reenactment.
It can animate a single face image to a different pose-and-expression while keeping its original appearance.
arXiv Detail & Related papers (2021-02-08T03:36:30Z)
- Faces à la Carte: Text-to-Face Generation via Attribute Disentanglement [9.10088750358281]
Text-to-Face (TTF) is a challenging task with great potential for diverse computer vision applications.
We propose a Text-to-Face model that produces images in high resolution (1024x1024) with text-to-image consistency.
We refer to our model as TTF-HD. Experimental results show that TTF-HD generates high-quality faces with state-of-the-art performance.
arXiv Detail & Related papers (2020-06-13T10:24:31Z)
- Exploiting Semantics for Face Image Deblurring [121.44928934662063]
We propose an effective and efficient face deblurring algorithm by exploiting semantic cues via deep convolutional neural networks.
We incorporate face semantic labels as input priors and propose an adaptive structural loss to regularize facial local structures.
The proposed method restores sharp images with more accurate facial features and details.
arXiv Detail & Related papers (2020-01-19T13:06:27Z)