Total Generate: Cycle in Cycle Generative Adversarial Networks for
Generating Human Faces, Hands, Bodies, and Natural Scenes
- URL: http://arxiv.org/abs/2106.10876v1
- Date: Mon, 21 Jun 2021 06:20:16 GMT
- Title: Total Generate: Cycle in Cycle Generative Adversarial Networks for
Generating Human Faces, Hands, Bodies, and Natural Scenes
- Authors: Hao Tang, Nicu Sebe
- Abstract summary: We propose a Cycle in Cycle Generative Adversarial Network (C2GAN) for generating human faces, hands, bodies, and natural scenes.
Our proposed C2GAN is a cross-modal model exploring the joint exploitation of the input image data and guidance data in an interactive manner.
- Score: 76.83075646527521
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel and unified Cycle in Cycle Generative Adversarial Network
(C2GAN) for generating human faces, hands, bodies, and natural scenes. Our
proposed C2GAN is a cross-modal model exploring the joint exploitation of the
input image data and guidance data in an interactive manner. C2GAN contains two
different generators, i.e., an image-generation generator and a
guidance-generation generator. Both generators are mutually connected and
trained in an end-to-end fashion and explicitly form three cycled subnets,
i.e., one image generation cycle and two guidance generation cycles. Each cycle
aims at reconstructing the input domain and simultaneously produces a useful
output involved in the generation of another cycle. In this way, the cycles
constrain each other implicitly, providing complementary information from both
image and guidance modalities and bringing an extra supervision gradient across
the cycles, facilitating a more robust optimization of the whole model.
Extensive results on four guided image-to-image translation subtasks
demonstrate that the proposed C2GAN is effective in generating more realistic
images compared with state-of-the-art models. The code is available at
https://github.com/Ha0Tang/C2GAN.
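The abstract's core idea, three mutually supervising cycles whose reconstruction losses are summed, can be sketched with toy linear "generators". This is a minimal illustration of the cycled-loss structure only; the function names (g_image, g_guidance) and the linear maps are illustrative assumptions, not the paper's actual networks or code.

```python
import numpy as np

def l1(a, b):
    """L1 reconstruction loss between two arrays."""
    return float(np.mean(np.abs(a - b)))

def g_image(image, guidance, w=0.9):
    """Toy image-generation generator: maps (image, guidance) -> image."""
    return w * image + (1.0 - w) * guidance

def g_guidance(image, w=0.8):
    """Toy guidance-generation generator: maps image -> guidance."""
    return w * image

def total_cycle_loss(x_img, x_guid):
    # Image cycle: generate an output image, then map it back and
    # penalize the reconstruction error against the input image.
    fake_img = g_image(x_img, x_guid)
    rec_img = g_image(fake_img, g_guidance(fake_img))
    loss_img = l1(x_img, rec_img)
    # Two guidance cycles: reconstruct the guidance from the generated
    # image and from the real image, respectively.
    loss_g1 = l1(x_guid, g_guidance(fake_img))
    loss_g2 = l1(x_guid, g_guidance(x_img))
    # Summing the three losses is what couples the cycles: each one
    # contributes a supervision gradient to the shared generators.
    return loss_img + loss_g1 + loss_g2

x_img = np.ones((4, 4))
x_guid = np.zeros((4, 4))
print(total_cycle_loss(x_img, x_guid))
```

In the real model the two generators are deep networks trained adversarially end to end; the point of the sketch is only that one scalar objective ties all three cycles to the same pair of generators.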
Related papers
- SeaDAG: Semi-autoregressive Diffusion for Conditional Directed Acyclic Graph Generation [83.52157311471693]
We introduce SeaDAG, a semi-autoregressive diffusion model for conditional generation of Directed Acyclic Graphs (DAGs).
Unlike conventional autoregressive generation that lacks a global graph structure view, our method maintains a complete graph structure at each diffusion step.
We explicitly train the model to learn graph conditioning with a condition loss, which enhances the diffusion model's capacity to generate realistic DAGs.
arXiv Detail & Related papers (2024-10-21T15:47:03Z)
- Object-Centric Relational Representations for Image Generation [18.069747511100132]
This paper explores a novel method to condition image generation, based on object-centric relational representations.
We show that such architectural biases entail properties that facilitate the manipulation and conditioning of the generative process.
We also propose a novel benchmark for image generation consisting of a synthetic dataset of images paired with their relational representation.
arXiv Detail & Related papers (2023-03-26T11:17:17Z)
- Cross-View Panorama Image Synthesis [68.35351563852335]
We propose PanoGAN, a novel adversarial feedback GAN framework.
PanoGAN enables high-quality panorama image generation with more convincing details than state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-22T15:59:44Z)
- Cycle-Consistent Inverse GAN for Text-to-Image Synthesis [101.97397967958722]
We propose a novel unified framework of Cycle-consistent Inverse GAN for both text-to-image generation and text-guided image manipulation tasks.
We learn a GAN inversion model to convert the images back to the GAN latent space and obtain the inverted latent codes for each image.
In the text-guided optimization module, we generate images with the desired semantic attributes by optimizing the inverted latent codes.
arXiv Detail & Related papers (2021-08-03T08:38:16Z)
- Cycle-free CycleGAN using Invertible Generator for Unsupervised Low-Dose CT Denoising [33.79188588182528]
CycleGAN provides high-performance, ultra-fast denoising for low-dose X-ray computed tomography (CT) images.
CycleGAN requires two generators and two discriminators to enforce cycle consistency.
We present a novel cycle-free CycleGAN architecture, which consists of a single generator and a single discriminator but still guarantees cycle consistency.
arXiv Detail & Related papers (2021-04-17T13:23:36Z)
- Improved Image Generation via Sparse Modeling [27.66648389933265]
We show that generators can be viewed as manifestations of the Convolutional Sparse Coding (CSC) and its Multi-Layered version (ML-CSC) synthesis processes.
We leverage this observation by explicitly enforcing a sparsifying regularization on appropriately chosen activation layers in the generator.
arXiv Detail & Related papers (2021-04-01T13:52:40Z)
- Guiding GANs: How to control non-conditional pre-trained GANs for conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network into the mix to generate the high-dimensional random inputs that are fed to the generator network of a non-conditional GAN.
arXiv Detail & Related papers (2021-01-04T14:03:32Z)
- XingGAN for Person Image Generation [149.54517767056382]
We propose a novel Generative Adversarial Network (XingGAN) for person image generation tasks.
XingGAN consists of two generation branches that model the person's appearance and shape information.
We show that the proposed XingGAN advances the state-of-the-art performance in terms of objective quantitative scores and subjective visual realness.
arXiv Detail & Related papers (2020-07-17T23:40:22Z)
- CDGAN: Cyclic Discriminative Generative Adversarial Networks for Image-to-Image Transformation [17.205434613674104]
We introduce a new Image-to-Image Transformation network named Cyclic Discriminative Generative Adversarial Networks (CDGAN).
The proposed CDGAN generates high quality and more realistic images by incorporating the additional discriminator networks for cycled images.
The quantitative and qualitative results are analyzed and compared with the state-of-the-art methods.
arXiv Detail & Related papers (2020-01-15T05:12:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.