MontageGAN: Generation and Assembly of Multiple Components by GANs
- URL: http://arxiv.org/abs/2205.15577v1
- Date: Tue, 31 May 2022 07:34:19 GMT
- Title: MontageGAN: Generation and Assembly of Multiple Components by GANs
- Authors: Chean Fei Shee, Seiichi Uchida
- Abstract summary: We propose MontageGAN, a Generative Adversarial Network (GAN) framework for generating multi-layer images.
Our method uses a two-step approach consisting of local GANs and a global GAN.
- Score: 11.117357750374035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A multi-layer image is more valuable than a single-layer image from a graphic
designer's perspective. However, most of the proposed image generation methods
so far focus on single-layer images. In this paper, we propose MontageGAN,
which is a Generative Adversarial Network (GAN) framework for generating
multi-layer images. Our method uses a two-step approach consisting of local
GANs and a global GAN. Each local GAN learns to generate a specific image layer,
and the global GAN learns the placement of each generated image layer. Through
our experiments, we show the ability of our method to generate multi-layer
images and to estimate the placement of the generated image layers.
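The abstract fixes the data flow but not the implementation. The following is a minimal PyTorch sketch of that two-step idea: per-layer local generators emit RGBA layers, and a global placement network predicts one affine transform per layer before alpha compositing. The module names, the affine parameterization, and the compositing order are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalGenerator(nn.Module):
    """Stand-in for one local GAN that emits a single RGBA layer."""
    def __init__(self, z_dim=128, size=64):
        super().__init__()
        self.size = size
        self.net = nn.Sequential(nn.Linear(z_dim, 4 * size * size), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 4, self.size, self.size)

class GlobalPlacement(nn.Module):
    """Generator side of the global GAN: predicts an affine placement
    for each generated layer (assumed parameterization)."""
    def __init__(self, n_layers, z_dim=128):
        super().__init__()
        self.head = nn.Linear(z_dim, n_layers * 6)
        # Start at the identity transform so layers begin unmoved.
        nn.init.zeros_(self.head.weight)
        self.head.bias.data = torch.tensor(
            [1., 0., 0., 0., 1., 0.]).repeat(n_layers)

    def forward(self, z, layers):
        theta = self.head(z).view(-1, len(layers), 2, 3)
        placed = []
        for i, layer in enumerate(layers):
            grid = F.affine_grid(theta[:, i], layer.shape, align_corners=False)
            placed.append(F.grid_sample(layer, grid, align_corners=False))
        return alpha_composite(placed)

def alpha_composite(layers):
    """Back-to-front 'over' compositing of RGBA layers."""
    out = layers[0]
    for layer in layers[1:]:
        a = (layer[:, 3:4] + 1) / 2   # map tanh output to [0, 1]
        out = layer * a + out * (1 - a)
    return out
```

A discriminator on the composited result would then drive the placement network, consistent with the description that the global GAN learns the placement.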
Related papers
- LayerDiff: Exploring Text-guided Multi-layered Composable Image Synthesis via Layer-Collaborative Diffusion Model [70.14953942532621]
A layer-collaborative diffusion model, named LayerDiff, is designed for text-guided, multi-layered, composable image synthesis.
Our model can generate high-quality multi-layered images with performance comparable to conventional whole-image generation methods.
LayerDiff enables a broader range of controllable generative applications, including layer-specific image editing and style transfer.
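The summary says the layers are synthesized collaboratively but not how. Purely as a loose illustration of one plausible mechanism, the block below lets per-layer diffusion latents attend to each other across the layer axis during denoising; this is a guess at the flavor of layer collaboration, not LayerDiff's actual architecture.

```python
import torch.nn as nn

class CrossLayerBlock(nn.Module):
    """Hypothetical denoising block: at every spatial position, the tokens
    of all layers attend to each other so layers stay mutually consistent."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                 nn.Linear(dim, dim))

    def forward(self, x):              # x: (B, L, N, D) per-layer tokens
        b, l, n, d = x.shape
        t = x.permute(0, 2, 1, 3).reshape(b * n, l, d)  # layers as sequence
        t = t + self.attn(t, t, t, need_weights=False)[0]
        t = t + self.mlp(t)
        return t.reshape(b, n, l, d).permute(0, 2, 1, 3)
```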
arXiv Detail & Related papers (2024-03-18T16:28:28Z)
- Text2Layer: Layered Image Generation using Latent Diffusion Model [12.902259486204898]
We propose to approach image generation from a layered-image perspective, generating the layers rather than a single flat image.
To achieve layered image generation, we train an autoencoder that is able to reconstruct layered images.
Experimental results show that the proposed method is able to generate high-quality layered images.
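The summary's central component, an autoencoder that reconstructs layered images so that latent diffusion can operate in its latent space, is easy to sketch. Stacking the RGBA layers channel-wise, and all sizes below, are assumptions for illustration.

```python
import torch.nn as nn

class LayeredAutoencoder(nn.Module):
    """Toy autoencoder over a layer stack: n RGBA layers are concatenated on
    the channel axis and compressed into one shared latent, which is the
    space a latent diffusion model would later sample from."""
    def __init__(self, n_layers=2, ch=32, z_ch=8):
        super().__init__()
        c = 4 * n_layers   # RGBA per layer
        self.enc = nn.Sequential(
            nn.Conv2d(c, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch * 2, z_ch, 3, padding=1))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(z_ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, c, 3, padding=1))

    def forward(self, x):              # x: (B, 4*n_layers, H, W)
        z = self.enc(x)
        return self.dec(z), z
```

Training would minimize a reconstruction loss over all layers jointly, e.g. a mean-squared error between the input stack and the decoded stack.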
arXiv Detail & Related papers (2023-07-19T06:56:07Z)
- Using a Conditional Generative Adversarial Network to Control the Statistical Characteristics of Generated Images for IACT Data Analysis [55.41644538483948]
We divide images into several classes according to the value of some property of the image, and then specify the required class when generating new images.
In the case of images from Imaging Atmospheric Cherenkov Telescopes (IACTs), an important property is the total brightness of all image pixels (image size).
We used a cGAN technique to generate images similar to those obtained in the TAIGA-IACT experiment.
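The described conditioning, binning the training images by a property such as total brightness and requesting a bin at sampling time, is standard cGAN practice. In the sketch below, the bin count, network sizes, and resolution are arbitrary placeholders.

```python
import torch
import torch.nn as nn

N_CLASSES = 5   # brightness ("image size") bins -- illustrative choice

class ConditionalGenerator(nn.Module):
    """Generator conditioned on a size-class label, cGAN style: the class
    embedding is concatenated to the noise vector."""
    def __init__(self, z_dim=100, emb_dim=16, out_pixels=32 * 32):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim + emb_dim, 256), nn.ReLU(),
            nn.Linear(256, out_pixels), nn.Tanh())

    def forward(self, z, size_class):
        h = torch.cat([z, self.embed(size_class)], dim=1)
        return self.net(h).view(-1, 1, 32, 32)

# Request a specific brightness bin at sampling time:
g = ConditionalGenerator()
z = torch.randn(8, 100)
imgs = g(z, torch.full((8,), 3, dtype=torch.long))   # eight class-3 images
```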
arXiv Detail & Related papers (2022-11-28T22:30:33Z)
- InsetGAN for Full-Body Image Generation [90.71033704904629]
We propose a novel method to combine multiple pretrained GANs.
One GAN generates a global canvas (e.g., a human body), while a set of specialized GANs, or insets, focuses on different parts.
We demonstrate the setup by combining a full-body GAN with a dedicated high-quality face GAN to produce plausible-looking humans.
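One common reading of this setup is joint latent optimization: refine the canvas latent and the inset latent until the two generators agree on the overlap region. The sketch below assumes generator callables that map a latent to an image, a fixed face bounding box, and a plain L1 agreement loss; the paper's actual objective contains more terms.

```python
import torch
import torch.nn.functional as F

def fit_inset(body_gan, face_gan, w_body, w_face, face_box, steps=200):
    """Jointly refine both latents so the face GAN's output matches the
    face region of the body GAN's canvas (hypothetical interface/loss)."""
    w_body = w_body.detach().clone().requires_grad_(True)
    w_face = w_face.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([w_body, w_face], lr=0.01)
    x0, y0, x1, y1 = face_box
    for _ in range(steps):
        body = body_gan(w_body)               # (1, 3, H, W) global canvas
        face = face_gan(w_face)               # (1, 3, h, w) inset
        crop = body[:, :, y0:y1, x0:x1]
        crop = F.interpolate(crop, size=face.shape[-2:], mode='bilinear',
                             align_corners=False)
        loss = F.l1_loss(crop, face)          # make canvas and inset agree
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w_body, w_face
```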
arXiv Detail & Related papers (2022-03-14T17:01:46Z)
- Local and Global GANs with Semantic-Aware Upsampling for Image Generation [201.39323496042527]
We consider generating images using local context.
We propose a class-specific generative network using semantic maps as guidance.
Lastly, we propose a novel semantic-aware upsampling method.
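The summary names semantic-aware upsampling without defining it. Purely to convey the idea that semantics should steer interpolation, here is a toy operator that avoids blending features across class boundaries; it is not the paper's actual operator.

```python
import torch.nn.functional as F

def semantic_aware_upsample(feat_lo, sem_lo, sem_hi):
    """Illustrative only: keep crisp nearest-neighbor features where the
    upsampled label agrees with the target semantics, and fall back to
    smooth bilinear features elsewhere, so features are not blended across
    class boundaries. feat_lo: (B, C, h, w); sem_lo: (B, h, w) int labels;
    sem_hi: (B, H, W) int labels at the target resolution."""
    H, W = sem_hi.shape[-2:]
    up = F.interpolate(feat_lo, size=(H, W), mode='bilinear',
                       align_corners=False)
    nn_feat = F.interpolate(feat_lo, size=(H, W), mode='nearest')
    sem_up = F.interpolate(sem_lo.unsqueeze(1).float(), size=(H, W),
                           mode='nearest').squeeze(1).long()
    keep = (sem_up == sem_hi).unsqueeze(1).float()
    return keep * nn_feat + (1 - keep) * up
```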
arXiv Detail & Related papers (2022-02-28T19:24:25Z)
- Collaging Class-specific GANs for Semantic Image Synthesis [68.87294033259417]
We propose a new approach for high-resolution semantic image synthesis.
It consists of one base image generator and multiple class-specific generators.
Experiments show that our approach can generate high-quality images at high resolution.
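The base-plus-class-specific design can be pictured as mask-driven collaging. The sketch below assumes a dict of per-class generators sharing one latent; how the actual model blends seams and conditions each generator is not specified in this summary.

```python
def collage(base_gen, class_gens, z, sem_mask):
    """Paste each class-specific generator's output into the base canvas
    wherever the semantic mask assigns that class (hypothetical interface).
    sem_mask: (B, H, W) integer class map; class_gens: {class_id: generator}.
    """
    canvas = base_gen(z)                       # (B, 3, H, W) base image
    for cls, gen in class_gens.items():
        region = (sem_mask == cls).unsqueeze(1).float()
        canvas = region * gen(z) + (1 - region) * canvas
    return canvas
```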
arXiv Detail & Related papers (2021-10-08T17:46:56Z)
- Detection, Attribution and Localization of GAN Generated Images [24.430919035100317]
We propose a novel approach to detect, attribute, and localize GAN-generated images.
A deep learning network is then trained on extracted features to detect, attribute, and localize these GAN-generated/manipulated images.
A large-scale evaluation of our approach on 5 GAN datasets shows promising results in detecting GAN-generated images.
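The summary mentions extracted features and a trained deep network, but not which features. A minimal two-head network, one head for detection (real vs. generated) and one for attribution (which GAN), might look like the following; the input representation, class count, and architecture are placeholders.

```python
import torch.nn as nn

N_GANS = 5   # number of known GAN sources to attribute to (assumed)

class GanForensicsNet(nn.Module):
    """Small CNN with a detection head and an attribution head; a
    localization head over patches would follow the same pattern."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.detect = nn.Linear(64, 2)           # real vs. GAN-generated
        self.attribute = nn.Linear(64, N_GANS)   # which GAN produced it

    def forward(self, x):
        h = self.backbone(x)
        return self.detect(h), self.attribute(h)
```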
arXiv Detail & Related papers (2020-07-20T20:49:34Z)
- Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation [135.4660201856059]
We consider learning scene generation in a local context, and design a local class-specific generative network with semantic maps as guidance.
To learn more discriminative class-specific feature representations for the local generation, a novel classification module is also proposed.
Experiments on two scene image generation tasks show superior generation performance of the proposed model.
arXiv Detail & Related papers (2019-12-27T16:14:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.