Text-to-Face Generation with StyleGAN2
- URL: http://arxiv.org/abs/2205.12512v1
- Date: Wed, 25 May 2022 06:02:01 GMT
- Title: Text-to-Face Generation with StyleGAN2
- Authors: D. M. A. Ayanthi and Sarasi Munasinghe
- Abstract summary: We propose a novel framework to generate facial images that are well-aligned with the input descriptions.
Our framework utilizes the high-resolution face generator, StyleGAN2, and explores the possibility of using it in T2F.
The generated images exhibit 57% similarity to the ground-truth images, with a face semantic distance of 0.92, outperforming state-of-the-art work.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Synthesizing images from text descriptions has become an active research area
with the advent of Generative Adversarial Networks. The main goal here is to
generate photo-realistic images that are aligned with the input descriptions.
Text-to-Face generation (T2F) is a sub-domain of Text-to-Image generation (T2I)
that is more challenging due to the complexity and variation of facial
attributes. It has a number of applications mainly in the domain of public
safety. Even though several models are available for T2F, there is still the
need to improve the image quality and the semantic alignment. In this research,
we propose a novel framework to generate facial images that are well-aligned
with the input descriptions. Our framework utilizes the high-resolution face
generator, StyleGAN2, and explores the possibility of using it in T2F. Here, we
embed text in the input latent space of StyleGAN2 using BERT embeddings and
guide the generation of facial images with the text descriptions. We trained our
framework on attribute-based descriptions to generate images at 1024x1024
resolution. The generated images exhibit 57% similarity to the ground-truth
images, with a face semantic distance of 0.92, outperforming state-of-the-art
work. The generated images have an FID score of 118.097, and the experimental
results show that our model generates promising images.
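The conditioning step described in the abstract (embedding the description with BERT and feeding it into StyleGAN2's input latent space) can be sketched as follows. This is a minimal illustration, not the authors' released code: the mapping-network architecture, the use of the [CLS] token as the sentence embedding, the example description, and the commented-out `generator` call are all assumptions made for clarity.

```python
# Minimal sketch: project a BERT sentence embedding into a StyleGAN2-style
# 512-d latent code. The pretrained face generator itself is treated as a
# black box (placeholder) here.
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel

class TextToLatent(nn.Module):
    """Maps a 768-d BERT embedding to a 512-d latent code (assumed architecture)."""
    def __init__(self, text_dim: int = 768, latent_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(text_dim, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, latent_dim),
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        return self.mlp(text_emb)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()
text_to_latent = TextToLatent()

description = "A young woman with blond hair, arched eyebrows and a slight smile."
tokens = tokenizer(description, return_tensors="pt", truncation=True)
with torch.no_grad():
    # Use the [CLS] token representation as a fixed-size sentence embedding.
    text_emb = bert(**tokens).last_hidden_state[:, 0, :]  # shape: (1, 768)

z = text_to_latent(text_emb)  # shape: (1, 512), fed to the generator's latent input
# image = generator(z)  # placeholder for a pretrained 1024x1024 StyleGAN2 face model
```

In practice the mapping network would be trained (e.g. against attribute-based descriptions paired with face images) so that the resulting latents produce faces matching the described attributes; the sketch above only shows the data flow from text to latent code.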
Related papers
- Visual Text Generation in the Wild [67.37458807253064]
We propose a visual text generator (termed SceneVTG) which can produce high-quality text images in the wild.
The proposed SceneVTG significantly outperforms traditional rendering-based methods and recent diffusion-based methods in terms of fidelity and reasonability.
The generated images provide superior utility for tasks involving text detection and text recognition.
arXiv Detail & Related papers (2024-07-19T09:08:20Z)
- Paragraph-to-Image Generation with Information-Enriched Diffusion Model [67.9265336953134]
ParaDiffusion is an information-enriched diffusion model for paragraph-to-image generation task.
It delves into the transference of the extensive semantic comprehension capabilities of large language models to the task of image generation.
The code and dataset will be released to foster community research on long-text alignment.
arXiv Detail & Related papers (2023-11-24T05:17:01Z)
- Learning to Generate Semantic Layouts for Higher Text-Image Correspondence in Text-to-Image Synthesis [37.32270579534541]
We propose a novel approach for enhancing text-image correspondence by leveraging available semantic layouts.
Our approach achieves higher text-image correspondence than existing text-to-image generation approaches on the Multi-Modal CelebA-HQ and Cityscapes datasets.
arXiv Detail & Related papers (2023-08-16T05:59:33Z)
- LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image Generation [121.45667242282721]
We propose a coarse-to-fine paradigm to achieve layout planning and image generation.
Our proposed method outperforms the state-of-the-art models in terms of photorealistic layout and image generation.
arXiv Detail & Related papers (2023-08-09T17:45:04Z)
- GlyphDiffusion: Text Generation as Image Generation [100.98428068214736]
We propose GlyphDiffusion, a novel diffusion approach for text generation via text-guided image generation.
Our key idea is to render the target text as a glyph image containing visual language content.
Our model also makes significant improvements compared to recent diffusion models.
arXiv Detail & Related papers (2023-04-25T02:14:44Z)
- Scaling Autoregressive Models for Content-Rich Text-to-Image Generation [95.02406834386814]
Parti treats text-to-image generation as a sequence-to-sequence modeling problem.
Parti uses a Transformer-based image tokenizer, ViT-VQGAN, to encode images as sequences of discrete tokens.
PartiPrompts (P2) is a new holistic benchmark of over 1600 English prompts.
arXiv Detail & Related papers (2022-06-22T01:11:29Z)
- Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding [53.170767750244366]
Imagen is a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding.
To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models.
arXiv Detail & Related papers (2022-05-23T17:42:53Z)
- StyleT2F: Generating Human Faces from Textual Description Using StyleGAN2 [0.0]
StyleT2F is a method of controlling the output of StyleGAN2 using text.
Our method proves to capture the required features correctly and shows consistency between the input text and the output images.
arXiv Detail & Related papers (2022-04-17T04:51:30Z)
- OptGAN: Optimizing and Interpreting the Latent Space of the Conditional Text-to-Image GANs [8.26410341981427]
We study how to ensure that generated samples are believable, realistic or natural.
We present a novel algorithm which identifies semantically-understandable directions in the latent space of a conditional text-to-image GAN architecture.
arXiv Detail & Related papers (2022-02-25T20:00:33Z)
- Semantic Text-to-Face GAN -ST^2FG [0.7919810878571298]
We present a novel approach to generate facial images from semantic text descriptions.
For security and criminal identification, the ability to provide a GAN-based system that works like a sketch artist would be incredibly useful.
arXiv Detail & Related papers (2021-07-22T15:42:25Z)
- Text to Image Generation with Semantic-Spatial Aware GAN [41.73685713621705]
A text to image generation (T2I) model aims to generate photo-realistic images which are semantically consistent with the text descriptions.
We propose a novel framework Semantic-Spatial Aware GAN, which is trained in an end-to-end fashion so that the text encoder can exploit better text information.
arXiv Detail & Related papers (2021-04-01T15:48:01Z)