A Comprehensive Survey on Data-Efficient GANs in Image Generation
- URL: http://arxiv.org/abs/2204.08329v1
- Date: Mon, 18 Apr 2022 14:14:09 GMT
- Title: A Comprehensive Survey on Data-Efficient GANs in Image Generation
- Authors: Ziqiang Li, Xintian Wu, Beihao Xia, Jing Zhang, Chaoyue Wang, Bin Li
- Abstract summary: Generative Adversarial Networks (GANs) have achieved remarkable success in image synthesis.
With limited training data, how to stabilize the training process of GANs and generate realistic images has attracted increasing attention.
The challenges of Data-Efficient GANs (DE-GANs) mainly arise from three aspects: (i) Mismatch Between Training and Target Distributions, (ii) Overfitting of the Discriminator, and (iii) Imbalance Between Latent and Data Spaces.
- Score: 21.03377218098632
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) have achieved remarkable success
in image synthesis. These successes rely on large-scale datasets, which are
costly to collect. With limited training data, how to stabilize the training
process of GANs and generate realistic images has attracted increasing attention.
The challenges of Data-Efficient GANs (DE-GANs) mainly arise from three
aspects: (i) Mismatch Between Training and Target Distributions, (ii)
Overfitting of the Discriminator, and (iii) Imbalance Between Latent and Data
Spaces. Although many augmentation and pre-training strategies have been
proposed to alleviate these issues, a systematic survey summarizing the
properties, challenges, and solutions of DE-GANs is still lacking. In this paper,
we revisit and define DE-GANs from the perspective of distribution
optimization. We summarize and analyze the challenges of DE-GANs and
propose a taxonomy that classifies the existing methods into three
categories: Data Selection, GANs Optimization, and Knowledge Sharing.
Finally, we attempt to highlight the current problems and the future
directions.
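One of the survey's named challenges, overfitting of the discriminator, is commonly mitigated by augmentation strategies that apply the *same* stochastic augmentation to real and generated batches before the discriminator sees them (the DiffAugment/ADA family). A minimal NumPy sketch of this idea; the specific augmentations (horizontal flip, brightness shift) and batch shapes are illustrative assumptions, not the method of any particular paper:

```python
import numpy as np

def augment(batch, rng):
    """Apply the same simple stochastic augmentations (per-image
    horizontal flip and random brightness shift) to a batch of
    images in [0, 1] with shape (N, H, W, C)."""
    out = batch.copy()
    # Random horizontal flip, sampled independently per image.
    flip = rng.random(len(out)) < 0.5
    out[flip] = out[flip, :, ::-1, :]
    # Random per-image brightness shift, clipped back to [0, 1].
    shift = rng.uniform(-0.2, 0.2, size=(len(out), 1, 1, 1))
    return np.clip(out + shift, 0.0, 1.0)

# In a data-efficient GAN training step, both real and generated batches
# pass through the same augmentation pipeline, so the discriminator
# cannot simply memorize the few available real images.
rng = np.random.default_rng(0)
real = rng.random((4, 8, 8, 3))
fake = rng.random((4, 8, 8, 3))
d_input_real = augment(real, rng)
d_input_fake = augment(fake, rng)
```

In the differentiable-augmentation variants, the augmentation is implemented with differentiable ops so generator gradients flow through it; the sketch above only illustrates the shared-augmentation principle.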
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z) - On the Convergence of (Stochastic) Gradient Descent for Kolmogorov--Arnold Networks [56.78271181959529]
Kolmogorov--Arnold Networks (KANs) have gained significant attention in the deep learning community.
Empirical investigations demonstrate that KANs optimized via stochastic gradient descent (SGD) are capable of achieving near-zero training loss.
arXiv Detail & Related papers (2024-10-10T15:34:10Z) - Advanced Data Augmentation Approaches: A Comprehensive Survey and Future
Directions [57.30984060215482]
We provide a background of data augmentation, a novel and comprehensive taxonomy of reviewed data augmentation techniques, and the strengths and weaknesses (wherever possible) of each technique.
We also provide comprehensive results of the data augmentation effect on three popular computer vision tasks, namely image classification, object detection, and semantic segmentation.
arXiv Detail & Related papers (2023-01-07T11:37:32Z) - Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the fairness-promotion process in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z) - Tackling Long-Tailed Category Distribution Under Domain Shifts [50.21255304847395]
Existing approaches cannot handle the scenario where a long-tailed category distribution and domain shifts exist simultaneously.
We design three novel core functional blocks: Distribution Calibrated Classification Loss, Visual-Semantic Mapping, and Semantic-Similarity Guided Augmentation.
Two new datasets, named AWA2-LTS and ImageNet-LTS, are proposed for this problem.
arXiv Detail & Related papers (2022-07-20T19:07:46Z) - FakeCLR: Exploring Contrastive Learning for Solving Latent Discontinuity
in Data-Efficient GANs [24.18718734850797]
Data-Efficient GANs (DE-GANs) aim to learn generative models with a limited amount of training data.
Contrastive learning has shown great potential for increasing the synthesis quality of DE-GANs.
We propose FakeCLR, which only applies contrastive learning on fake samples.
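FakeCLR's core ingredient is an instance-discrimination contrastive objective of the InfoNCE form, computed on generated samples only. A hedged NumPy sketch of such a loss over two augmented views per sample; the feature dimensions, temperature, and pairing scheme here are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def info_nce(features, temperature=0.1):
    """InfoNCE-style contrastive loss over 2N feature rows, where rows
    i and i+N are two augmented views of the same sample.
    features: (2N, D), rows assumed L2-normalized."""
    n = len(features) // 2
    sim = features @ features.T / temperature        # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                   # exclude self-pairs
    # The positive for view i is its counterpart i+N (mod 2N).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logits = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# FakeCLR's key idea: apply this loss to features of *generated* (fake)
# samples only, so latent-space continuity is encouraged without placing
# further demands on the limited real data.
rng = np.random.default_rng(0)
f = rng.normal(size=(8, 16))
f /= np.linalg.norm(f, axis=1, keepdims=True)
loss = info_nce(f)
```

Restricting the contrastive pairs to fake samples gives an effectively unlimited supply of positives and negatives, which is what makes the approach attractive in the data-limited regime.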
arXiv Detail & Related papers (2022-07-18T14:23:38Z) - PriorGAN: Real Data Prior for Generative Adversarial Nets [36.01759301994946]
We propose a novel prior that captures the whole real data distribution for GANs, yielding models we call PriorGANs.
Our experiments demonstrate that PriorGANs outperform the state-of-the-art on the CIFAR-10, FFHQ, LSUN-cat, and LSUN-bird datasets by large margins.
arXiv Detail & Related papers (2020-06-30T17:51:47Z) - Generative Adversarial Networks (GANs Survey): Challenges, Solutions,
and Future Directions [15.839877885431806]
Generative Adversarial Networks (GANs) are a class of deep generative models that has recently gained significant attention.
GANs learn complex, high-dimensional distributions implicitly over images, audio, and other data.
Major challenges remain in the training of GANs, namely mode collapse, non-convergence, and instability.
arXiv Detail & Related papers (2020-04-30T19:26:46Z) - Recommender Systems Based on Generative Adversarial Networks: A
Problem-Driven Perspective [27.11589218811911]
Generative adversarial networks (GANs) have garnered increased interest in many fields, owing to their strong capacity to learn complex real data distributions.
In this paper, we propose a taxonomy of these models, along with their detailed descriptions and advantages.
arXiv Detail & Related papers (2020-03-05T08:05:38Z) - When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability remains a lingering concern for generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that achieves better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
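The triplet loss mentioned in the last entry above has a standard form: pull an anchor embedding toward a positive and push it from a negative by at least a margin. A minimal NumPy sketch; the embedding shapes and margin are assumptions, and the real/fake pairing in the comment reflects the relation-GAN idea rather than the paper's exact construction:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: encourage the anchor to be closer to the
    positive than to the negative by at least `margin` (squared
    Euclidean distance). All inputs: (N, D) embedding batches."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

# In a relation-discriminator setting, anchor/positive could be
# embeddings of real samples and the negative an embedding of a
# generated sample, so the discriminator learns relations between
# samples instead of per-sample real/fake scores.
rng = np.random.default_rng(0)
a, p, n = (rng.normal(size=(4, 8)) for _ in range(3))
loss = triplet_loss(a, p, n)
```

Note that when anchor, positive, and negative coincide, both distances vanish and the loss reduces to the margin itself, which is a handy sanity check for implementations.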
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.