Multi-Attributed and Structured Text-to-Face Synthesis
- URL: http://arxiv.org/abs/2108.11100v1
- Date: Wed, 25 Aug 2021 07:52:21 GMT
- Title: Multi-Attributed and Structured Text-to-Face Synthesis
- Authors: Rohan Wadhawan, Tanuj Drall, Shubham Singh, Shampa Chakraverty
- Abstract summary: Generative Adversarial Networks (GANs) have revolutionized image synthesis through many applications like face generation, photograph editing, and image super-resolution.
This paper empirically proves that increasing the number of facial attributes in each textual description helps GANs generate more diverse and real-looking faces.
- Score: 1.3381749415517017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) have revolutionized image synthesis
through many applications like face generation, photograph editing, and image
super-resolution. Image synthesis using GANs has predominantly been uni-modal,
with few approaches that can synthesize images from text or other data modes.
Text-to-image synthesis, especially text-to-face synthesis, has promising use
cases such as robust face generation from eyewitness accounts and augmenting
the reading experience with visual cues. However, only a couple of datasets
provide consolidated face data and textual descriptions for text-to-face
synthesis. Moreover, these textual annotations are less extensive and
descriptive, which reduces the diversity of the faces generated from them. This paper
empirically proves that increasing the number of facial attributes in each
textual description helps GANs generate more diverse and real-looking faces. To
prove this, we propose a new methodology that focuses on using structured
textual descriptions. We also consolidate a Multi-Attributed and Structured
Text-to-face (MAST) dataset consisting of high-quality images with structured
textual annotations and make it available to researchers to experiment and
build upon. Lastly, we report benchmark Fréchet Inception Distance (FID),
Facial Semantic Similarity (FSS), and Facial Semantic Distance (FSD) scores for
the MAST dataset.
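To make the notion of a multi-attributed, structured textual description concrete, below is a minimal sketch of how such an annotation could be assembled from a set of facial attributes. The attribute names and sentence template are hypothetical illustrations, not the actual MAST annotation schema.

```python
# Hypothetical sketch: composing a structured textual description from facial
# attributes. Attribute names, values, and the template are illustrative only.

def build_description(attributes: dict) -> str:
    """Compose one structured sentence, one clause per annotated attribute."""
    clauses = [f"{value} {name}" for name, value in attributes.items()]
    return "A person with " + ", ".join(clauses) + "."

sample = {
    "hair": "long black",
    "eyebrows": "arched",
    "eyes": "brown",
    "nose": "a pointed",
    "jawline": "a sharp",
}
print(build_description(sample))
# -> "A person with long black hair, arched eyebrows, brown eyes, a pointed nose, a sharp jawline."
```

Adding more attribute clauses per description is exactly the knob the paper argues improves the diversity and realism of the generated faces.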
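Of the reported metrics, FID has a standard closed form over Inception-feature statistics; a minimal sketch follows. FSS and FSD are defined in the paper itself and are not reproduced here, and the placeholder feature arrays stand in for real Inception-v3 activations.

```python
# Minimal sketch of the Fréchet Inception Distance (FID) computed from
# feature statistics of real and generated images.
import numpy as np
from scipy import linalg

def fid(mu_real, cov_real, mu_fake, cov_fake):
    """FID between two Gaussians fitted to Inception activations."""
    diff = mu_real - mu_fake
    # Matrix square root of the product of the covariance matrices.
    covmean, _ = linalg.sqrtm(cov_real @ cov_fake, disp=False)
    if np.iscomplexobj(covmean):  # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov_real + cov_fake - 2.0 * covmean))

# Usage with placeholder statistics (in practice these come from Inception-v3
# activations of real and generated face images).
rng = np.random.default_rng(0)
feats_real = rng.normal(size=(512, 64))
feats_fake = rng.normal(loc=0.1, size=(512, 64))
mu_r, cov_r = feats_real.mean(0), np.cov(feats_real, rowvar=False)
mu_f, cov_f = feats_fake.mean(0), np.cov(feats_fake, rowvar=False)
print(fid(mu_r, cov_r, mu_f, cov_f))
```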
Related papers
- Vision-Language Matching for Text-to-Image Synthesis via Generative Adversarial Networks [13.80433764370972]
Text-to-image synthesis aims to generate a photo-realistic and semantically consistent image from a specific text description.
We propose a novel Vision-Language Matching strategy for text-to-image synthesis, named VLMGAN*.
The proposed dual multi-level vision-language matching strategy can be applied to other text-to-image synthesis methods.
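As a rough illustration of what a vision-language matching score looks like, here is a generic sketch that projects image and text features into a shared space and scores them by cosine similarity. It is not VLMGAN*'s dual multi-level strategy, and the feature dimensions are arbitrary assumptions.

```python
# Generic image-text matching in a shared embedding space; a sketch of the
# basic idea behind vision-language matching, not VLMGAN*'s actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchingHead(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, joint_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, joint_dim)  # project image features
        self.txt_proj = nn.Linear(txt_dim, joint_dim)  # project text features

    def forward(self, img_feats, txt_feats):
        z_img = F.normalize(self.img_proj(img_feats), dim=-1)
        z_txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return (z_img * z_txt).sum(-1)  # cosine similarity per image-caption pair

# Placeholder features standing in for backbone outputs.
head = MatchingHead()
img = torch.randn(4, 2048)  # e.g. pooled CNN features of 4 generated faces
txt = torch.randn(4, 768)   # e.g. sentence embeddings of their captions
print(head(img, txt))       # higher score = better text-image match
```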
arXiv Detail & Related papers (2022-08-20T03:34:04Z)
- StyleT2I: Toward Compositional and High-Fidelity Text-to-Image Synthesis [52.341186561026724]
Lacking compositionality could have severe implications for robustness and fairness.
We introduce a new framework, StyleT2I, to improve the compositionality of text-to-image synthesis.
Results show that StyleT2I outperforms previous approaches in terms of consistency between the input text and synthesized images.
arXiv Detail & Related papers (2022-03-29T17:59:50Z)
- AnyFace: Free-style Text-to-Face Synthesis and Manipulation [41.61972206254537]
This paper proposes the first free-style text-to-face method, namely AnyFace.
AnyFace enables much wider open world applications such as metaverse, social media, cosmetics, forensics, etc.
arXiv Detail & Related papers (2022-03-29T08:27:38Z)
- DAE-GAN: Dynamic Aspect-aware GAN for Text-to-Image Synthesis [55.788772366325105]
We propose a Dynamic Aspect-awarE GAN (DAE-GAN) that represents text information comprehensively from multiple granularities, including sentence-level, word-level, and aspect-level.
Inspired by human learning behaviors, we develop a novel Aspect-aware Dynamic Re-drawer (ADR) for image refinement, in which an Attended Global Refinement (AGR) module and an Aspect-aware Local Refinement (ALR) module are alternately employed.
arXiv Detail & Related papers (2021-08-27T07:20:34Z)
- Cycle-Consistent Inverse GAN for Text-to-Image Synthesis [101.97397967958722]
We propose a novel unified framework of Cycle-consistent Inverse GAN for both text-to-image generation and text-guided image manipulation tasks.
We learn a GAN inversion model to convert the images back to the GAN latent space and obtain the inverted latent codes for each image.
In the text-guided optimization module, we generate images with the desired semantic attributes by optimizing the inverted latent codes.
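The text-guided optimization step described above can be illustrated generically: starting from an inverted latent code, adjust it so the generated image scores higher under a text-image similarity function. The toy generator and similarity function below are placeholders, not the paper's networks.

```python
# Generic sketch of text-guided latent optimization with toy placeholder models.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 3 * 32 * 32), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

def text_image_similarity(image, text_embedding):
    # Placeholder: in practice this would be a pretrained joint embedding model.
    return image.mean(dim=(1, 2, 3)) * text_embedding.mean(dim=-1)

G = ToyGenerator()
z = torch.randn(1, 128, requires_grad=True)  # "inverted" latent code of an image
text_emb = torch.randn(1, 512)               # embedding of the target description
opt = torch.optim.Adam([z], lr=0.05)

for step in range(100):
    opt.zero_grad()
    img = G(z)
    loss = -text_image_similarity(img, text_emb).mean()  # maximize similarity
    loss.backward()
    opt.step()
```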
arXiv Detail & Related papers (2021-08-03T08:38:16Z)
- SynthTIGER: Synthetic Text Image GEneratoR Towards Better Text Recognition Models [9.934446907923725]
We introduce a new synthetic text image generator, SynthTIGER, by analyzing techniques used for text image synthesis and integrating effective ones under a single algorithm.
In our experiments, SynthTIGER achieves better scene text recognition (STR) performance than the combination of existing synthetic datasets.
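For context, the basic primitive such a generator builds on is rendering text onto an image, which real pipelines then extend with sampled fonts, textures, and distortions. A bare-bones sketch with arbitrary font and color choices:

```python
# Bare-bones synthetic text image rendering; real generators layer fonts,
# backgrounds, and geometric effects on top of this primitive.
from PIL import Image, ImageDraw, ImageFont

def render_word(word: str, size=(160, 48)) -> Image.Image:
    img = Image.new("RGB", size, color=(230, 230, 230))  # plain background
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()                       # real pipelines sample many fonts
    draw.text((8, 16), word, fill=(20, 20, 20), font=font)
    return img

render_word("SynthTIGER").save("sample_text_image.png")
```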
arXiv Detail & Related papers (2021-07-20T08:03:45Z)
- TediGAN: Text-Guided Diverse Face Image Generation and Manipulation [52.83401421019309]
TediGAN is a framework for multi-modal image generation and manipulation with textual descriptions.
Its StyleGAN inversion module maps real images to the latent space of a well-trained StyleGAN.
A visual-linguistic similarity module learns text-image matching by mapping the image and text into a common embedding space.
Instance-level optimization is used for identity preservation during manipulation.
arXiv Detail & Related papers (2020-12-06T16:20:19Z)
- DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis [80.54273334640285]
We propose a novel one-stage text-to-image backbone that directly synthesizes high-resolution images without entanglements between different generators.
We also propose a novel Target-Aware Discriminator composed of Matching-Aware Gradient Penalty and One-Way Output.
Compared with current state-of-the-art methods, our proposed DF-GAN is simpler yet more efficient at synthesizing realistic and text-matching images.
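A hedged sketch of the matching-aware gradient penalty idea follows: penalize the discriminator's gradient on real images paired with their matching sentence embeddings. The toy discriminator, exponent, and weight are illustrative assumptions rather than the exact DF-GAN formulation.

```python
# Sketch of a matching-aware gradient penalty on (real image, matching text) pairs.
import torch
import torch.nn as nn

class ToyDiscriminator(nn.Module):
    def __init__(self, img_pixels=3 * 32 * 32, sent_dim=256):
        super().__init__()
        self.score = nn.Linear(img_pixels + sent_dim, 1)

    def forward(self, img, sent):
        return self.score(torch.cat([img.flatten(1), sent], dim=1))

def matching_aware_gp(D, real_img, sent_emb, exponent=6, weight=2.0):
    # Gradients are taken w.r.t. both the real image and its matching sentence embedding.
    real_img = real_img.clone().requires_grad_(True)
    sent_emb = sent_emb.clone().requires_grad_(True)
    out = D(real_img, sent_emb)
    grads = torch.autograd.grad(out.sum(), [real_img, sent_emb], create_graph=True)
    grad_norm = torch.cat([g.flatten(1) for g in grads], dim=1).norm(2, dim=1)
    return weight * grad_norm.pow(exponent).mean()  # exponent/weight are illustrative

D = ToyDiscriminator()
imgs = torch.randn(4, 3, 32, 32)
sents = torch.randn(4, 256)
print(matching_aware_gp(D, imgs, sents))
```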
arXiv Detail & Related papers (2020-08-13T12:51:17Z)
- Image-to-Image Translation with Text Guidance [139.41321867508722]
The goal of this paper is to embed controllable factors, i.e., natural language descriptions, into image-to-image translation with generative adversarial networks.
We propose several key components: (1) part-of-speech tagging to filter out non-semantic words in the given description, (2) an affine combination module to effectively fuse text and image features from different modalities, and (3) a novel refined multi-stage architecture to strengthen the differential ability of discriminators and the rectification ability of generators.
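Component (1) can be illustrated with off-the-shelf part-of-speech tagging that keeps only descriptive words such as nouns and adjectives; the tag whitelist below is an assumption for illustration, not the paper's exact filter.

```python
# Keep only the semantically descriptive words of a description via POS tagging.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)
# Note: newer NLTK releases may instead require the "punkt_tab" /
# "averaged_perceptron_tagger_eng" resources.

KEEP_TAGS = {"NN", "NNS", "NNP", "JJ", "JJR", "JJS"}  # nouns and adjectives

def filter_semantic_words(description: str) -> list[str]:
    tokens = nltk.word_tokenize(description)
    return [word for word, tag in nltk.pos_tag(tokens) if tag in KEEP_TAGS]

print(filter_semantic_words("a young woman with long wavy blonde hair and a bright smile"))
# e.g. ['young', 'woman', 'long', 'wavy', 'blonde', 'hair', 'bright', 'smile']
```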
arXiv Detail & Related papers (2020-02-12T21:09:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.