Multi-Tailed, Multi-Headed, Spatial Dynamic Memory refined Text-to-Image Synthesis
- URL: http://arxiv.org/abs/2110.08143v1
- Date: Fri, 15 Oct 2021 15:16:58 GMT
- Title: Multi-Tailed, Multi-Headed, Spatial Dynamic Memory refined Text-to-Image Synthesis
- Authors: Amrit Diggavi Seshadri, Balaraman Ravindran
- Abstract summary: Current methods synthesize images from text in a multi-stage manner, typically by first generating a rough initial image and then refining image details at subsequent stages.
Our proposed method introduces three novel components to address shortcomings of this multi-stage paradigm.
Experimental results demonstrate that our Multi-Headed Spatial Dynamic Memory image refinement with our Multi-Tailed Word-level Initial Generation (MSMT-GAN) performs favourably against the previous state of the art on the CUB and COCO datasets.
- Score: 21.673771194165276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesizing high-quality, realistic images from text descriptions is a
challenging task, and current methods synthesize images from text in a
multi-stage manner, typically by first generating a rough initial image and
then refining image details at subsequent stages. However, existing methods
that follow this paradigm suffer from three important limitations. Firstly,
they synthesize initial images without attempting to separate image attributes
at a word-level. As a result, object attributes of initial images (that provide
a basis for subsequent refinement) are inherently entangled and ambiguous in
nature. Secondly, by using common text-representations for all regions, current
methods prevent us from interpreting text in fundamentally different ways at
different parts of an image. Different image regions are therefore only allowed
to assimilate the same type of information from text at each refinement stage.
Finally, current methods generate refinement features only once at each
refinement stage and attempt to address all image aspects in a single shot.
This single-shot refinement limits the precision with which each refinement
stage can learn to improve the prior image. Our proposed method introduces
three novel components to address these shortcomings: (1) An initial generation
stage that explicitly generates separate sets of image features for each word
n-gram. (2) A spatial dynamic memory module for refinement of images. (3) An
iterative multi-headed mechanism to make it easier to improve upon multiple
image aspects. Experimental results demonstrate that our Multi-Headed Spatial
Dynamic Memory image refinement with our Multi-Tailed Word-level Initial
Generation (MSMT-GAN) performs favourably against the previous state of the art
on the CUB and COCO datasets.
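The abstract names these three components but stops short of equations or layer shapes, so a rough illustration may help. Below is a minimal PyTorch sketch of one plausible reading: a separate generator "tail" per word n-gram for the initial features, and an iterated per-location memory read for refinement. All class names, tensor dimensions, the dot-product memory read, and the 1x1 fusion are assumptions made for illustration; this is not the authors' MSMT-GAN implementation.

```python
# Minimal sketch of the abstract's three components as read above.
# Every design detail here is an illustrative assumption.
import torch
import torch.nn as nn


class MultiTailedInitialStage(nn.Module):
    """Component (1): one 'tail' per word n-gram, so initial image
    features are generated separately per n-gram rather than from a
    single pooled sentence vector, then fused with a 1x1 convolution."""

    def __init__(self, n_tails: int, txt_dim: int, feat_dim: int):
        super().__init__()
        self.tails = nn.ModuleList(
            [nn.Linear(txt_dim, feat_dim * 4 * 4) for _ in range(n_tails)]
        )
        self.fuse = nn.Conv2d(n_tails * feat_dim, feat_dim, kernel_size=1)

    def forward(self, ngram_emb: torch.Tensor) -> torch.Tensor:
        # ngram_emb: (B, n_tails, txt_dim) -> features: (B, feat_dim, 4, 4)
        feats = [tail(ngram_emb[:, i]).view(ngram_emb.size(0), -1, 4, 4)
                 for i, tail in enumerate(self.tails)]
        return self.fuse(torch.cat(feats, dim=1))


class MultiHeadedSpatialMemory(nn.Module):
    """Components (2) and (3): each spatial location issues its own query
    against a word-keyed memory, and several iterative 'heads' each apply
    one residual update instead of refining in a single shot."""

    def __init__(self, feat_dim: int, txt_dim: int, n_heads: int = 3):
        super().__init__()
        self.n_heads = n_heads
        self.query = nn.Conv2d(feat_dim, txt_dim, kernel_size=1)
        self.update = nn.Conv2d(txt_dim + feat_dim, feat_dim, 3, padding=1)

    def forward(self, img_feat: torch.Tensor, word_mem: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, feat_dim, H, W); word_mem: (B, T, txt_dim)
        for _ in range(self.n_heads):
            q = self.query(img_feat)                   # (B, txt_dim, H, W)
            b, d, h, w = q.shape
            q = q.flatten(2).transpose(1, 2)           # (B, HW, txt_dim)
            attn = torch.softmax(q @ word_mem.transpose(1, 2), dim=-1)
            read = (attn @ word_mem).transpose(1, 2).reshape(b, d, h, w)
            img_feat = img_feat + self.update(torch.cat([read, img_feat], dim=1))
        return img_feat


# Toy shapes only: 8 n-grams with 256-d embeddings, 64-channel features.
init = MultiTailedInitialStage(n_tails=8, txt_dim=256, feat_dim=64)
refine = MultiHeadedSpatialMemory(feat_dim=64, txt_dim=256, n_heads=3)
x = init(torch.randn(2, 8, 256))        # (2, 64, 4, 4) initial features
x = refine(x, torch.randn(2, 8, 256))   # one refinement stage
```

The toy tensors at the end only verify that the shapes compose; an actual pipeline would wrap such modules in generators with upsampling and adversarial losses, as in the multi-stage GANs the abstract describes.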
Related papers
- HumanDiffusion: a Coarse-to-Fine Alignment Diffusion Framework for Controllable Text-Driven Person Image Generation [73.3790833537313]
  Controllable person image generation promotes a wide range of applications such as digital human interaction and virtual try-on.
  We propose HumanDiffusion, a coarse-to-fine alignment diffusion framework, for text-driven person image generation.
  arXiv Detail & Related papers (2022-11-11T14:30:34Z)
- Adma-GAN: Attribute-Driven Memory Augmented GANs for Text-to-Image Generation [18.36261166580862]
  Text-to-image generation aims to generate photo-realistic and semantically consistent images according to the given text descriptions.
  Existing methods mainly extract text information from only one sentence to represent an image.
  We propose an effective text representation method that complements the text with attribute information.
  arXiv Detail & Related papers (2022-09-28T12:28:54Z)
- DSE-GAN: Dynamic Semantic Evolution Generative Adversarial Network for Text-to-Image Generation [71.87682778102236]
  We propose a novel Dynamic Semantic Evolution GAN (DSE-GAN) to re-compose each stage's text features under a novel single adversarial multi-stage architecture.
  DSE-GAN achieves 7.48% and 37.8% relative FID improvements on two widely used benchmarks.
  arXiv Detail & Related papers (2022-09-03T06:13:26Z)
- Fully Context-Aware Image Inpainting with a Learned Semantic Pyramid [102.24539566851809]
  Restoring reasonable and realistic content for arbitrary missing regions in images is an important yet challenging task.
  Recent image inpainting models have made significant progress in generating vivid visual details, but they can still lead to texture blurring or structural distortions.
  We propose the Semantic Pyramid Network (SPN), motivated by the idea that learning multi-scale semantic priors can greatly benefit the recovery of locally missing content in images.
  arXiv Detail & Related papers (2021-12-08T04:33:33Z)
- DAE-GAN: Dynamic Aspect-aware GAN for Text-to-Image Synthesis [55.788772366325105]
  We propose a Dynamic Aspect-awarE GAN (DAE-GAN) that represents text information comprehensively from multiple granularities, including sentence-level, word-level, and aspect-level.
  Inspired by human learning behaviors, we develop a novel Aspect-aware Dynamic Re-drawer (ADR) for image refinement, in which an Attended Global Refinement (AGR) module and an Aspect-aware Local Refinement (ALR) module are alternately employed.
  arXiv Detail & Related papers (2021-08-27T07:20:34Z)
- TediGAN: Text-Guided Diverse Face Image Generation and Manipulation [52.83401421019309]
  TediGAN is a framework for multi-modal image generation and manipulation with textual descriptions.
  Its StyleGAN inversion module maps real images to the latent space of a well-trained StyleGAN.
  Its visual-linguistic similarity module learns text-image matching by mapping images and text into a common embedding space.
  Its instance-level optimization preserves identity during manipulation.
  arXiv Detail & Related papers (2020-12-06T16:20:19Z)
- DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis [80.54273334640285]
  We propose a novel one-stage text-to-image backbone that directly synthesizes high-resolution images without entanglements between different generators.
  We also propose a novel Target-Aware Discriminator composed of a Matching-Aware Gradient Penalty and a One-Way Output.
  Compared with current state-of-the-art methods, our proposed DF-GAN is simpler yet more efficient at synthesizing realistic, text-matching images.
  arXiv Detail & Related papers (2020-08-13T12:51:17Z)
- PerceptionGAN: Real-world Image Construction from Provided Text through Perceptual Understanding [11.985768957782641]
  We propose a method to improve generated images by incorporating perceptual understanding into the discriminator module.
  We show that the perceptual information captured in the initial image is improved while the image distribution is modeled at multiple stages.
  More importantly, the proposed method can be integrated into the pipeline of other state-of-the-art text-based image-generation models.
  arXiv Detail & Related papers (2020-07-02T09:23:08Z)