Directional GAN: A Novel Conditioning Strategy for Generative Networks
- URL: http://arxiv.org/abs/2105.05712v2
- Date: Thu, 13 May 2021 22:04:31 GMT
- Title: Directional GAN: A Novel Conditioning Strategy for Generative Networks
- Authors: Shradha Agrawal, Shankar Venkitachalam, Dhanya Raghu, Deepak Pai
- Abstract summary: We propose a simple and novel conditioning strategy which allows generation of images conditioned on given semantic attributes.
Our approach is based on modifying latent vectors, using directional vectors of relevant semantic attributes in latent space.
We show the applicability of our proposed approach, named Directional GAN, on multiple public datasets, with an average accuracy of 86.4%.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Image content is a predominant factor in marketing campaigns, websites and
banners. Today, marketers and designers spend considerable time and money in
generating such professional quality content. We take a step towards
simplifying this process using Generative Adversarial Networks (GANs). We
propose a simple and novel conditioning strategy which allows generation of
images conditioned on given semantic attributes using a generator trained for
an unconditional image generation task. Our approach is based on modifying
latent vectors, using directional vectors of relevant semantic attributes in
latent space. Our method is designed to work with both discrete (binary and
multi-class) and continuous image attributes. We show the applicability of our
proposed approach, named Directional GAN, on multiple public datasets, with an
average accuracy of 86.4% across different attributes.
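The core mechanism in the abstract lends itself to a short illustration. Below is a minimal, hypothetical Python sketch of latent-direction editing in the spirit described: a linear boundary is fit to attribute-labeled latent codes, its unit normal is taken as the attribute's directional vector, and latent vectors are shifted along it before generation. The generator, labels, and step size alpha are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of attribute editing via latent directions (the general
# idea behind Directional GAN-style conditioning, not the authors' code).
# Assumptions: `generator(z)` maps latent codes to images, and we have
# latent codes Z with binary attribute labels y (e.g., smiling or not).
import numpy as np
from sklearn.svm import LinearSVC

def attribute_direction(Z: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fit a linear boundary in latent space; its unit normal serves as
    the directional vector for the attribute."""
    svm = LinearSVC(C=1.0, max_iter=10_000).fit(Z, y)
    d = svm.coef_.ravel()
    return d / np.linalg.norm(d)

def edit_latent(z: np.ndarray, d: np.ndarray, alpha: float) -> np.ndarray:
    """Shift the latent code along the attribute direction; alpha controls
    how strongly the attribute is expressed (its sign flips the attribute)."""
    return z + alpha * d

# Usage with hypothetical shapes: 512-d latents, 10k labeled samples.
rng = np.random.default_rng(0)
Z = rng.standard_normal((10_000, 512))
y = (Z @ rng.standard_normal(512) > 0).astype(int)  # toy stand-in labels
d = attribute_direction(Z, y)
z_edited = edit_latent(rng.standard_normal(512), d, alpha=3.0)
# image = generator(z_edited)  # generator is assumed, e.g., StyleGAN2
```

The same shift-and-generate pattern extends to multi-class attributes (one direction per class boundary) and, with a scaled step size, to continuous attributes, which is the regime the abstract describes.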
Related papers
- SCONE-GAN: Semantic Contrastive learning-based Generative Adversarial Network for an end-to-end image translation [18.93434486338439]
SCONE-GAN is shown to be effective for learning to generate realistic and diverse scenery images.
For more realistic and diverse image generation, we introduce a style reference image.
We validate the proposed algorithm for image-to-image translation and stylizing outdoor images.
arXiv Detail & Related papers (2023-11-07T10:29:16Z)
- Spatial Latent Representations in Generative Adversarial Networks for Image Generation [0.0]
We define a family of spatial latent spaces for StyleGAN2.
We show that our spaces are effective for image manipulation and encode semantic information well.
arXiv Detail & Related papers (2023-03-25T20:01:11Z)
- Hierarchical Semantic Regularization of Latent Spaces in StyleGANs [53.98170188547775]
We propose a Hierarchical Semantic Regularizer (HSR) which aligns the hierarchical representations learnt by the generator to corresponding powerful features learnt by pretrained networks on large amounts of data.
HSR is shown to not only improve generator representations but also the linearity and smoothness of the latent style spaces, leading to the generation of more natural-looking style-edited images.
arXiv Detail & Related papers (2022-08-07T16:23:33Z)
- Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks [54.80435295622583]
One-shot generative domain adaption aims to transfer a pre-trained generator on one domain to a new domain using one reference image only.
We present a novel one-shot generative domain adaption method, i.e., DiFa, for diverse generation and faithful adaptation.
arXiv Detail & Related papers (2022-07-18T16:29:41Z)
- Local and Global GANs with Semantic-Aware Upsampling for Image Generation [201.39323496042527]
We consider generating images using local context.
We propose a class-specific generative network using semantic maps as guidance.
Lastly, we propose a novel semantic-aware upsampling method.
arXiv Detail & Related papers (2022-02-28T19:24:25Z)
- One-Shot Generative Domain Adaptation [39.17324951275831]
This work aims at transferring a Generative Adversarial Network (GAN) pre-trained on one image domain to a new domain using as little as a single target image.
arXiv Detail & Related papers (2021-11-18T18:55:08Z)
- StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators [63.85888518950824]
We present a text-driven method that allows shifting a generative model to new domains.
We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains (a sketch of this style of CLIP-guided loss appears after this list).
arXiv Detail & Related papers (2021-08-02T14:46:46Z)
- SMILE: Semantically-guided Multi-attribute Image and Layout Editing [154.69452301122175]
Attribute image manipulation has been a very active topic since the introduction of Generative Adversarial Networks (GANs).
We present a multimodal representation that handles all attributes, be it guided by random noise or images, while only using the underlying domain information of the target domain.
Our method is capable of adding, removing or changing either fine-grained or coarse attributes by using an image as a reference or by exploring the style distribution space.
arXiv Detail & Related papers (2020-10-05T20:15:21Z)
- TriGAN: Image-to-Image Translation for Multi-Source Domain Adaptation [82.52514546441247]
We propose the first approach for Multi-Source Domain Adaptation (MSDA) based on Generative Adversarial Networks.
Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style and the content.
We test our approach using common MSDA benchmarks, showing that it outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-04-19T05:07:22Z)
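As a companion to the StyleGAN-NADA entry above, here is a hedged sketch of a CLIP-space directional loss of the kind used for text-guided generator adaptation: the shift between two text prompts in CLIP embedding space is matched against the shift between the frozen and trainable generators' outputs. The paired generator images and the prompt pair are assumptions for illustration; consult the paper and its repository for the authors' exact formulation.

```python
# Hedged sketch of a CLIP-space directional loss (in the spirit of
# StyleGAN-NADA). Inputs img_train/img_frozen are image batches from the
# trainable and frozen generators, already preprocessed to CLIP's input
# size (e.g., 224x224, CLIP-normalized); these are assumptions here.
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def clip_direction_loss(img_train, img_frozen, src_text, tgt_text):
    """1 minus cosine similarity between the image-space shift and the
    text-space shift, both measured in CLIP embedding space."""
    with torch.no_grad():
        tokens = clip.tokenize([src_text, tgt_text]).to(device)
        t_src, t_tgt = model.encode_text(tokens).float()
    delta_t = t_tgt - t_src                      # text direction, (512,)
    delta_i = (model.encode_image(img_train).float()
               - model.encode_image(img_frozen).float())  # image shift
    return 1 - torch.nn.functional.cosine_similarity(
        delta_i, delta_t.unsqueeze(0)).mean()

# Usage (hypothetical prompts): steer a face generator toward sketches.
# loss = clip_direction_loss(imgs_a, imgs_b, "photo", "pencil sketch")
```

Minimizing this loss nudges the trainable generator so its outputs move away from the frozen generator's outputs along the direction the prompts define, rather than collapsing onto a single target embedding.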
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.