GIU-GANs: Global Information Utilization for Generative Adversarial
Networks
- URL: http://arxiv.org/abs/2201.10471v1
- Date: Tue, 25 Jan 2022 17:17:15 GMT
- Title: GIU-GANs: Global Information Utilization for Generative Adversarial
Networks
- Authors: Yongqi Tian, Xueyuan Gong, Jialin Tang, Binghua Su, Xiaoxiang Liu,
Xinyuan Zhang
- Abstract summary: In this paper, we propose a new GAN called Involution Generative Adversarial Networks (GIU-GANs).
GIU-GANs leverage a brand-new module called the Global Information Utilization (GIU) module, which integrates Squeeze-and-Excitation Networks (SENet) and involution.
Batch Normalization (BN) inevitably ignores the representation differences among the noise sampled by the generator, and thus degrades the generated image quality.
- Score: 3.3945834638760948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, with the rapid development of artificial intelligence, image
generation based on deep learning has advanced dramatically. Image generation
based on Generative Adversarial Networks (GANs) is a promising line of study. However,
since convolutions are spatial-agnostic and channel-specific, the features
extracted by traditional convolution-based GANs are constrained. As a result,
such GANs cannot capture finer details in each image. On the other
hand, straightforward stacking of convolutions adds many parameters and
layers to GANs, which leads to a high risk of overfitting. To overcome the
aforementioned limitations, in this paper we propose a new GAN called
Involution Generative Adversarial Networks (GIU-GANs). GIU-GANs leverage a
brand-new module called the Global Information Utilization (GIU) module, which
integrates Squeeze-and-Excitation Networks (SENet) and involution to focus on
global information through a channel attention mechanism, leading to higher
quality in the generated images. Meanwhile, Batch Normalization (BN) inevitably
ignores the representation differences among the noise sampled by the
generator, and thus degrades the generated image quality. We therefore
introduce Representative Batch Normalization (RBN) into the GAN architecture
to address this issue. The CIFAR-10 and CelebA datasets are employed to
demonstrate the effectiveness of the proposed model. Extensive experiments
show that our model achieves competitive, state-of-the-art performance.
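The two ingredients the GIU module combines can be illustrated in isolation. The sketch below is not the paper's implementation; it is a minimal NumPy illustration of (a) Squeeze-and-Excitation channel attention, which reweights channels by their global statistics, and (b) involution, whose kernel is generated per spatial position and shared across channels, inverting convolution's spatial-agnostic, channel-specific behavior. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def se_channel_attention(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation: reweight channels by global per-channel statistics.

    x: feature map of shape (C, H, W). w1/w2 are the two FC layers of the
    excitation bottleneck, with shapes (C//r, C) and (C, C//r).
    """
    z = x.mean(axis=(1, 2))                    # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ z + b1, 0.0)           # reduction FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # expansion FC + sigmoid gate -> (C,)
    return x * s[:, None, None]                # excite: rescale each channel

def involution(x, kernel_gen, k=3):
    """Involution: a k*k kernel generated per spatial position, shared across channels.

    kernel_gen maps the (C,) feature vector at a pixel to a flat (k*k,) kernel,
    making the operator spatial-specific and channel-agnostic.
    """
    C, H, W = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.empty_like(x)
    for i in range(H):
        for j in range(W):
            ker = kernel_gen(x[:, i, j]).reshape(k, k)     # position-specific kernel
            patch = xp[:, i:i + k, j:j + k]                # (C, k, k) neighborhood
            out[:, i, j] = (patch * ker).sum(axis=(1, 2))  # same kernel for all channels
    return out
```

In the paper's GIU module these two ideas are integrated (and the kernel generator is learned); the loops above are only for clarity and would be vectorized or replaced by unfold-style operations in practice.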
Related papers
- U-KAN Makes Strong Backbone for Medical Image Segmentation and Generation [48.40120035775506]
Kolmogorov-Arnold Networks (KANs) reshape neural network learning via stacks of non-linear learnable activation functions.
We investigate, modify and re-design the established U-Net pipeline by integrating dedicated KAN layers on the tokenized intermediate representation, termed U-KAN.
We further delve into the potential of U-KAN as an alternative U-Net noise predictor in diffusion models, demonstrating its applicability to generating task-oriented model architectures.
arXiv Detail & Related papers (2024-06-05T04:13:03Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the variance of the low-dimensional representation during autoencoder training and to achieve high diversity in the samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- Intriguing Property and Counterfactual Explanation of GAN for Remote Sensing Image Generation [25.96740500337747]
Generative adversarial networks (GANs) have achieved remarkable progress in the natural image field.
The GAN model is more sensitive to training-data size for remote sensing (RS) image generation than for natural image generation.
We propose two innovative adjustment schemes, namely Uniformity Regularization (UR) and Entropy Regularization (ER), to increase the information learned by the GAN model.
arXiv Detail & Related papers (2023-03-09T13:22:50Z)
- TcGAN: Semantic-Aware and Structure-Preserved GANs with Individual Vision Transformer for Fast Arbitrary One-Shot Image Generation [11.207512995742999]
One-shot image generation (OSG) with generative adversarial networks that learn from the internal patches of a given image has attracted worldwide attention.
We propose TcGAN, a novel structure-preserving method with an individual vision transformer, to overcome the shortcomings of existing one-shot image generation methods.
arXiv Detail & Related papers (2023-02-16T03:05:59Z)
- Latent Multi-Relation Reasoning for GAN-Prior based Image Super-Resolution [61.65012981435095]
LAREN is a graph-based disentanglement method that constructs a superior disentangled latent space via hierarchical multi-relation reasoning.
We show that LAREN achieves superior large-factor image SR and outperforms the state-of-the-art consistently across multiple benchmarks.
arXiv Detail & Related papers (2022-08-04T19:45:21Z)
- Unsupervised Image Generation with Infinite Generative Adversarial Networks [24.41144953504398]
We propose a new unsupervised non-parametric method named mixture of infinite conditional GANs or MIC-GANs.
We show that MIC-GANs are effective in structuring the latent space and avoiding mode collapse, and outperform state-of-the-art methods.
arXiv Detail & Related papers (2021-08-18T05:03:19Z)
- Towards Discovery and Attribution of Open-world GAN Generated Images [18.10496076534083]
We present an iterative algorithm for discovering images generated from previously unseen GANs.
Our algorithm consists of multiple components including network training, out-of-distribution detection, clustering, merge and refine steps.
Our experiments demonstrate the effectiveness of our approach at discovering new GANs, and show that it can be used in an open-world setup.
arXiv Detail & Related papers (2021-05-10T18:00:13Z)
- Best-Buddy GANs for Highly Detailed Image Super-Resolution [71.13466303340192]
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input.
Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task.
We propose best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision.
arXiv Detail & Related papers (2021-03-29T02:58:27Z)
- CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network [0.5437298646956507]
Disentanglement, a critical concern in interpretable machine learning, has also garnered significant attention from the computer vision community.
We propose CoDeGAN, where we relax similarity constraints for disentanglement from the image domain to the feature domain.
We integrate self-supervised pre-training into CoDeGAN to learn semantic representations, significantly facilitating unsupervised disentanglement.
arXiv Detail & Related papers (2021-03-05T12:44:22Z)
- Guiding GANs: How to control non-conditional pre-trained GANs for conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network into the mix to generate the high-dimensional random inputs that are fed to the generator network of a non-conditional GAN.
arXiv Detail & Related papers (2021-01-04T14:03:32Z)
- A U-Net Based Discriminator for Generative Adversarial Networks [86.67102929147592]
We propose an alternative U-Net based discriminator architecture for generative adversarial networks (GANs).
The proposed architecture provides detailed per-pixel feedback to the generator while maintaining the global coherence of synthesized images.
The novel discriminator improves over the state of the art in terms of the standard distribution and image quality metrics.
arXiv Detail & Related papers (2020-02-28T11:16:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.