Unsupervised Image Generation with Infinite Generative Adversarial
Networks
- URL: http://arxiv.org/abs/2108.07975v1
- Date: Wed, 18 Aug 2021 05:03:19 GMT
- Title: Unsupervised Image Generation with Infinite Generative Adversarial
Networks
- Authors: Hui Ying, He Wang, Tianjia Shao, Yin Yang, Kun Zhou
- Abstract summary: We propose a new unsupervised non-parametric method named mixture of infinite conditional GANs or MIC-GANs.
We show that MIC-GANs are effective in structuring the latent space and avoiding mode collapse, and outperform state-of-the-art methods.
- Score: 24.41144953504398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image generation has been heavily investigated in computer vision, where one
core research challenge is to generate images from arbitrarily complex
distributions with little supervision. Generative Adversarial Networks (GANs)
as an implicit approach have achieved great successes in this direction and
therefore been employed widely. However, GANs are known to suffer from issues
such as mode collapse, non-structured latent space, being unable to compute
likelihoods, etc. In this paper, we propose a new unsupervised non-parametric
method named mixture of infinite conditional GANs or MIC-GANs, to tackle
several GAN issues together, aiming for image generation with parsimonious
prior knowledge. Through comprehensive evaluations across different datasets,
we show that MIC-GANs are effective in structuring the latent space and
avoiding mode collapse, and outperform state-of-the-art methods. MIC-GANs are
adaptive, versatile, and robust. They offer a promising solution to several
well-known GAN issues. Code available: github.com/yinghdb/MICGANs.
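To give a rough feel for the mixture-of-conditional-GANs idea, the sketch below conditions a single generator on a categorical component label. It is not the authors' MIC-GAN implementation (see github.com/yinghdb/MICGANs): MIC-GANs infer the number of components non-parametrically, whereas the fixed component count K, the network sizes, and the uniform categorical sampling here are purely illustrative assumptions.

```python
# Minimal sketch: a generator conditioned on a mixture-component label.
# MIC-GANs infer the number of components non-parametrically; K is fixed
# here purely for illustration.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim=64, n_components=10, img_dim=28 * 28):
        super().__init__()
        self.embed = nn.Embedding(n_components, z_dim)  # one embedding per component
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),
        )

    def forward(self, z, component):
        # Concatenate the continuous noise with the embedding of the sampled component.
        h = torch.cat([z, self.embed(component)], dim=1)
        return self.net(h)

if __name__ == "__main__":
    G = ConditionalGenerator()
    z = torch.randn(16, 64)          # continuous latent
    k = torch.randint(0, 10, (16,))  # categorical "which component" latent
    print(G(z, k).shape)             # torch.Size([16, 784])
```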
Related papers
- Latent Space is Feature Space: Regularization Term for GANs Training on
Limited Dataset [1.8634083978855898]
I proposed an additional structure and loss function for GANs, called LFM, trained to maximize the feature diversity between the different dimensions of the latent space.
In experiments, this system has been built upon DCGAN and shown to improve the Fréchet Inception Distance (FID) when training from scratch on the CelebA dataset.
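For reference, FID compares the Gaussian statistics of Inception features extracted from real and generated images. The sketch below assumes the two feature matrices have already been extracted (e.g., from an Inception-v3 pooling layer, which is not shown) and only computes the distance itself.

```python
# Minimal FID sketch: assumes `real_feats` and `fake_feats` are (N, D) arrays
# of Inception features already extracted from real and generated images.
import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats, fake_feats):
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):   # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    # ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^(1/2))
    return diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(fid(rng.normal(size=(512, 64)), rng.normal(size=(512, 64))))
```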
arXiv Detail & Related papers (2022-10-28T16:34:48Z)
- GIU-GANs: Global Information Utilization for Generative Adversarial
Networks [3.3945834638760948]
In this paper, we propose new GANs called Involution Generative Adversarial Networks (GIU-GANs).
GIU-GANs leverage a new module called the Global Information Utilization (GIU) module, which integrates Squeeze-and-Excitation Networks (SENet) and involution.
Batch Normalization (BN) inevitably ignores the representation differences among the noise sampled by the generator, and thus degrades the generated image quality.
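For context, the Squeeze-and-Excitation idea that the GIU module builds on reweights feature channels using globally pooled statistics. The sketch below is the standard SE block only, not the GIU module or involution, and the reduction ratio is an illustrative choice.

```python
# Standard Squeeze-and-Excitation (SE) channel attention, shown for context only;
# the GIU module combines this idea with involution, which is not sketched here.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # "squeeze": global average pool
        self.fc = nn.Sequential(                 # "excitation": per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # rescale each channel

if __name__ == "__main__":
    print(SEBlock(64)(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```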
arXiv Detail & Related papers (2022-01-25T17:17:15Z)
- A Method for Evaluating Deep Generative Models of Images via Assessing
the Reproduction of High-order Spatial Context [9.00018232117916]
Generative adversarial networks (GANs) are one kind of deep generative model (DGM) and are widely employed.
In this work, we demonstrate several objective tests of images output by two popular GAN architectures.
We designed several stochastic context models (SCMs) of distinct image features that can be recovered after generation by a trained GAN.
arXiv Detail & Related papers (2021-11-24T15:58:10Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) is used to generate the perturbations.
The adversarial examples generated by SSAE not only make the widely used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- InfinityGAN: Towards Infinite-Resolution Image Synthesis [92.40782797030977]
We present InfinityGAN, a method to generate arbitrary-resolution images.
We show how it trains and infers patch-by-patch seamlessly with low computational resources.
arXiv Detail & Related papers (2021-04-08T17:59:30Z)
- Best-Buddy GANs for Highly Detailed Image Super-Resolution [71.13466303340192]
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input.
Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task.
We propose best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision.
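One way to read the relaxed constraint: instead of forcing each estimated patch to match a single fixed HR patch, each patch is supervised by whichever candidate in a small pool is closest to it. The sketch below is a generic nearest-candidate patch loss written under that reading, not the actual Beby-GAN objective; the patch and candidate shapes are made up for illustration.

```python
# Generic "pick the closest candidate" patch loss, a hedged reading of
# relaxing one-to-one LR-to-HR supervision; not the actual Beby-GAN loss.
import torch

def nearest_candidate_loss(pred_patches, candidate_patches):
    """pred_patches: (N, P); candidate_patches: (N, K, P), K candidates per patch."""
    dists = ((pred_patches.unsqueeze(1) - candidate_patches) ** 2).mean(dim=-1)  # (N, K)
    best = dists.min(dim=1).values   # each estimated patch seeks its best supervision
    return best.mean()

if __name__ == "__main__":
    pred = torch.randn(8, 27)        # 8 predicted 3x3x3 patches, flattened
    cands = torch.randn(8, 5, 27)    # 5 candidate supervision patches per prediction
    print(nearest_candidate_loss(pred, cands))
```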
arXiv Detail & Related papers (2021-03-29T02:58:27Z)
- CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network [0.5437298646956507]
Disentanglement, a critical concern in interpretable machine learning, has also garnered significant attention from the computer vision community.
We propose CoDeGAN, where we relax similarity constraints for disentanglement from the image domain to the feature domain.
We integrate self-supervised pre-training into CoDeGAN to learn semantic representations, significantly facilitating unsupervised disentanglement.
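A common form of feature-domain contrastive objective pulls together the features of samples that share a latent code and pushes apart the rest. The InfoNCE-style sketch below shows that generic formulation only; it is an assumption for illustration, not CoDeGAN's exact constraint.

```python
# Generic InfoNCE-style contrastive loss over extracted features; a common
# feature-domain formulation, not CoDeGAN's exact objective.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    """anchor, positive: (N, D) feature batches; row i of each forms a positive pair."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature      # (N, N) cosine-similarity matrix
    labels = torch.arange(a.size(0))      # diagonal entries are the positives
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    f1, f2 = torch.randn(32, 128), torch.randn(32, 128)
    print(info_nce(f1, f2))
```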
arXiv Detail & Related papers (2021-03-05T12:44:22Z)
- Rethinking conditional GAN training: An approach using geometrically
structured latent manifolds [58.07468272236356]
Conditional GANs (cGAN) suffer from critical drawbacks such as the lack of diversity in generated outputs.
We propose a novel training mechanism that increases both the diversity and the visual quality of a vanilla cGAN.
arXiv Detail & Related papers (2020-11-25T22:54:11Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global content consistency.
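To illustrate why dilated convolutions enlarge the receptive field cheaply, the sketch below stacks 3x3 convolutions with growing dilation rates; the specific rates and channel widths are illustrative assumptions, not the paper's dense combination.

```python
# Stacked 3x3 convolutions with increasing dilation: the receptive field grows
# roughly geometrically while the parameter count stays that of plain 3x3 convs.
# Rates and widths are illustrative only.
import torch
import torch.nn as nn

def dilated_stack(channels=64, rates=(1, 2, 4, 8)):
    layers = []
    for r in rates:
        # padding == dilation keeps the spatial size unchanged for 3x3 kernels
        layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

if __name__ == "__main__":
    x = torch.randn(1, 64, 64, 64)
    print(dilated_stack()(x).shape)   # spatial size preserved: torch.Size([1, 64, 64, 64])
```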
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
- Optimizing Generative Adversarial Networks for Image Super Resolution
via Latent Space Regularization [4.529132742139768]
Generative Adversarial Networks (GANs) try to learn the distribution of the real images in the manifold to generate samples that look real.
We probe for ways to alleviate these problems for supervised GANs in this paper.
arXiv Detail & Related papers (2020-01-22T16:27:20Z)
- Unsupervised Domain Adaptation in Person re-ID via k-Reciprocal
Clustering and Large-Scale Heterogeneous Environment Synthesis [76.46004354572956]
We introduce an unsupervised domain adaptation approach for person re-identification.
Experimental results show that the proposed ktCUDA and SHRED approach achieves an average improvement of +5.7 mAP in re-identification performance.
arXiv Detail & Related papers (2020-01-14T17:43:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.