Generative Adversarial Networks (GANs Survey): Challenges, Solutions,
and Future Directions
- URL: http://arxiv.org/abs/2005.00065v4
- Date: Wed, 5 Apr 2023 09:11:18 GMT
- Title: Generative Adversarial Networks (GANs Survey): Challenges, Solutions,
and Future Directions
- Authors: Divya Saxena, Jiannong Cao
- Abstract summary: Generative Adversarial Networks (GANs) are a novel class of deep generative models that has recently gained significant attention.
GANs learn complex, high-dimensional distributions implicitly over images, audio, and other data.
There exist major challenges in training GANs, namely mode collapse, non-convergence, and instability.
- Score: 15.839877885431806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) are a novel class of deep generative
models that has recently gained significant attention. GANs learn complex,
high-dimensional distributions implicitly over images, audio, and other data.
However, there exist major challenges in training GANs, namely mode
collapse, non-convergence, and instability, arising from inappropriate design of
the network architecture, choice of objective function, and selection of optimization
algorithm. Recently, to address these challenges, several solutions for better
design and optimization of GANs have been investigated based on techniques of
re-engineered network architectures, new objective functions and alternative
optimization algorithms. To the best of our knowledge, there is no existing
survey that has particularly focused on broad and systematic developments of
these solutions. In this study, we perform a comprehensive survey of the
advancements in GANs design and optimization solutions proposed to handle GANs
challenges. We first identify key research issues within each design and
optimization technique and then propose a new taxonomy to structure solutions
by key research issues. In accordance with the taxonomy, we provide a detailed
discussion on different GANs variants proposed within each solution and their
relationships. Finally, based on the insights gained, we present the promising
research directions in this rapidly growing field.
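To make these three levers concrete, the sketch below shows a minimal GAN training loop in which each of them is visible: the two toy networks stand in for the re-engineered architectures, the non-saturating binary cross-entropy loss stands in for the objective function, and Adam with two learning rates stands in for the optimization algorithm. PyTorch, the toy data, and all hyperparameters here are illustrative assumptions, not recommendations drawn from the survey.

```python
import torch
from torch import nn

# Toy 2-D generator and discriminator; the survey's "re-engineered
# architectures" (DCGAN, SAGAN, ...) would replace these small MLPs.
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
D = nn.Sequential(nn.Linear(2, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

# "Alternative optimization algorithms": the optimizer choice and the
# two-time-scale learning rates (a common heuristic) are made here.
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.5, 0.999))

# "New objective functions": hinge, least-squares, or Wasserstein losses
# would replace this non-saturating binary cross-entropy criterion.
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, 2) * 0.5 + 2.0        # stand-in for real data
    fake = G(torch.randn(32, 64))                # samples from the generator

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update (non-saturating loss): push D(fake) toward 1.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Swapping the loss, the optimizer, or the network definitions changes only the marked lines; that is exactly the design space the survey's taxonomy organizes.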
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning parameters.
We propose a new framework, which leverages intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
arXiv Detail & Related papers (2024-08-13T07:56:21Z)
- Unraveling the Versatility and Impact of Multi-Objective Optimization: Algorithms, Applications, and Trends for Solving Complex Real-World Problems [4.023511716339818]
Multi-Objective Optimization (MOO) techniques have become increasingly popular in recent years.
This paper examines recently developed MOO-based algorithms.
In real-world case studies, MOO algorithms address complicated decision-making challenges.
arXiv Detail & Related papers (2024-06-29T15:19:46Z)
- Bridging Evolutionary Algorithms and Reinforcement Learning: A Comprehensive Survey on Hybrid Algorithms [50.91348344666895]
Evolutionary Reinforcement Learning (ERL) integrates Evolutionary Algorithms (EAs) and Reinforcement Learning (RL) for optimization.
This survey offers a comprehensive overview of the diverse research branches in ERL.
arXiv Detail & Related papers (2024-01-22T14:06:37Z)
- The Efficiency Spectrum of Large Language Models: An Algorithmic Survey [54.19942426544731]
The rapid growth of Large Language Models (LLMs) has been a driving force in transforming various domains.
This paper examines the multi-faceted dimensions of efficiency essential for the end-to-end algorithmic development of LLMs.
arXiv Detail & Related papers (2023-12-01T16:00:25Z)
- Ten Years of Generative Adversarial Nets (GANs): A survey of the state-of-the-art [0.0]
Generative Adversarial Networks (GANs) have rapidly emerged as powerful tools for generating realistic and diverse data across various domains.
In February 2018, GAN secured the leading spot on the "Top Ten Global Breakthrough Technologies" list.
This survey aims to provide a general overview of GANs, summarizing the latent architecture, validation metrics, and application areas of the most widely recognized variants.
arXiv Detail & Related papers (2023-08-30T20:46:45Z)
- A Comprehensive Survey on Data-Efficient GANs in Image Generation [21.03377218098632]
Generative Adversarial Networks (GANs) have achieved remarkable results in image synthesis.
With limited training data, how to stabilize the training of GANs and generate realistic images has attracted increasing attention.
The challenges of Data-Efficient GANs (DE-GANs) mainly arise from three aspects: (i) Mismatch Between Training and Target Distributions, (ii) Overfitting of the Discriminator, and (iii) Imbalance Between Latent and Data Spaces.
arXiv Detail & Related papers (2022-04-18T14:14:09Z)
- A Design Space Study for LISTA and Beyond [79.76740811464597]
In recent years, great success has been witnessed in building problem-specific deep networks by unrolling iterative algorithms.
This paper revisits the role of unrolling as a design approach for deep networks, asking to what extent the resulting specialized architectures are superior and whether better ones can be found.
Using LISTA for sparse recovery as a representative example, we conduct the first thorough design space study for the unrolled models (see the unrolling sketch after this list).
arXiv Detail & Related papers (2021-04-08T23:01:52Z)
- Exploring the Evolution of GANs through Quality Diversity [0.4588028371034407]
We propose the application of a quality-diversity algorithm in the evolution of GANs.
We compare our proposal with the original COEGAN model and with an alternative version using a global competition approach.
arXiv Detail & Related papers (2020-07-13T08:54:52Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss and, even worse, its discontinuity makes the deep network difficult to optimize.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
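As a concrete illustration of the binarization and gradient-error issues described in the Binary Neural Networks survey above, here is a minimal sketch of sign binarization with a straight-through estimator and a per-channel scaling factor. PyTorch and the specific clipping and scaling choices are assumptions of typical BNN practice, not code from the surveyed papers.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (STE)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Quantize to {-1, +1}; this is where the information loss happens.
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # sign() has zero gradient almost everywhere, so pass the gradient
        # straight through, clipped to inputs in [-1, 1]; this is one of the
        # "reducing the gradient error" techniques mentioned above.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)


def binary_linear(x, latent_weight):
    # A full-precision latent weight is binarized on the fly; the per-row
    # scaling factor alpha = mean(|w|) reduces the quantization error.
    alpha = latent_weight.abs().mean(dim=1, keepdim=True)
    w_bin = BinarizeSTE.apply(latent_weight) * alpha
    return x @ w_bin.t()
```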
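The LISTA design-space entry above asks what is gained by unrolling an iterative algorithm into a deep network; the sketch below shows what an unrolled ISTA (LISTA) model looks like in this setting. PyTorch, the untied per-layer matrices, and the learned thresholds are illustrative assumptions, not details of that paper's search space.

```python
import torch
from torch import nn

class LISTA(nn.Module):
    """Unrolled ISTA for sparse recovery: each layer computes one iteration
    x <- soft_threshold(W y + S_k x, theta_k), with W, S_k, and theta_k
    learned from data instead of derived from a fixed dictionary."""

    def __init__(self, n_measurements, n_atoms, n_layers=8):
        super().__init__()
        self.W = nn.Linear(n_measurements, n_atoms, bias=False)
        self.S = nn.ModuleList([nn.Linear(n_atoms, n_atoms, bias=False)
                                for _ in range(n_layers)])
        self.theta = nn.Parameter(0.1 * torch.ones(n_layers))

    def forward(self, y):
        b = self.W(y)               # input mapping, shared across layers
        x = torch.zeros_like(b)     # start from the zero code
        for k, S_k in enumerate(self.S):
            z = b + S_k(x)
            x = torch.sign(z) * torch.relu(z.abs() - self.theta[k])  # soft threshold
        return x
```

Whether the matrices and thresholds are tied across layers is one natural axis of the design space such a study can vary.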