Regularization Methods for Generative Adversarial Networks: An Overview
of Recent Studies
- URL: http://arxiv.org/abs/2005.09165v1
- Date: Tue, 19 May 2020 01:59:24 GMT
- Title: Regularization Methods for Generative Adversarial Networks: An Overview
of Recent Studies
- Authors: Minhyeok Lee, Junhee Seok
- Abstract summary: Generative Adversarial Network (GAN) has been extensively studied and used for various tasks.
Regularization methods have been proposed to make the training of GAN stable.
- Score: 3.829070379776576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite its short history, Generative Adversarial Network (GAN) has been
extensively studied and used for various tasks, including its original purpose,
i.e., synthetic sample generation. However, applying GAN to different data
types with diverse neural network architectures has been hindered by its
limitation in training, where the model easily diverges. This notoriously
unstable training of GANs is well known and has been addressed in numerous
studies. Consequently, numerous regularization methods have been proposed in
recent years to stabilize GAN training. This paper reviews
the regularization methods that have been recently introduced, most of which
have been published in the last three years. Specifically, we focus on general
methods that can be commonly used regardless of neural network architectures.
To explore the latest research trends in regularization for GANs, the
methods are classified into several groups by their operation principles, and
the differences between the methods are analyzed. Furthermore, to provide
practical knowledge of using these methods, we investigate popular methods that
have been frequently employed in state-of-the-art GANs. In addition, we discuss
the limitations in existing methods and propose future research directions.
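One of the popular methods frequently employed in state-of-the-art GANs is the gradient penalty, which pushes the norm of the discriminator's input gradient toward 1. The sketch below is a minimal, hypothetical NumPy illustration, not the paper's code: it uses a linear critic D(x) = w·x + b so the input gradient is simply w and the penalty has a closed form; all names and values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w, b):
    """Linear critic D(x) = w.x + b (a stand-in for a neural network)."""
    return x @ w + b

def gradient_penalty(real, fake, w, lam=10.0):
    """Penalize deviation of ||grad_x D(x_hat)|| from 1 on interpolates.

    For a linear critic the input gradient is w at every point, so the
    penalty reduces to lam * (||w|| - 1)^2 for each interpolated sample.
    """
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake   # random interpolates
    grad = np.tile(w, (x_hat.shape[0], 1))    # grad_x D(x_hat) == w (linear case)
    norms = np.linalg.norm(grad, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

real = rng.normal(size=(8, 4))
fake = rng.normal(size=(8, 4))
w = np.array([2.0, 0.0, 0.0, 0.0])  # ||w|| = 2, so penalty = 10 * (2 - 1)^2
print(gradient_penalty(real, fake, w))  # -> 10.0
```

With a real neural-network critic the gradient would be obtained by automatic differentiation rather than in closed form, but the regularization term itself is the same.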
Related papers
- A singular Riemannian Geometry Approach to Deep Neural Networks III. Piecewise Differentiable Layers and Random Walks on $n$-dimensional Classes [49.32130498861987]
We study the case of non-differentiable activation functions, such as ReLU.
Two recent works introduced a geometric framework to study neural networks.
We illustrate our findings with some numerical experiments on classification of images and thermodynamic problems.
arXiv Detail & Related papers (2024-04-09T08:11:46Z)
- ODE Discovery for Longitudinal Heterogeneous Treatment Effects Inference [69.24516189971929]
In this paper, we introduce a new type of solution in the longitudinal setting: a closed-form ordinary differential equation (ODE)
While we still rely on continuous optimization to learn an ODE, the resulting inference machine is no longer a neural network.
arXiv Detail & Related papers (2024-03-16T02:07:45Z)
- A Recent Survey of Heterogeneous Transfer Learning [15.830786437956144]
Heterogeneous transfer learning (HTL) has become a vital strategy in various tasks.
We offer an extensive review of over 60 HTL methods, covering both data-based and model-based approaches.
We explore applications in natural language processing, computer vision, multimodal learning, and biomedicine.
arXiv Detail & Related papers (2023-10-12T16:19:58Z)
- Do Deep Neural Networks Contribute to Multivariate Time Series Anomaly Detection? [12.419938668514042]
We study the anomaly detection performance of sixteen conventional, machine learning-based, and deep neural network approaches.
By analyzing and comparing the performance of each of the sixteen methods, we show that no family of methods outperforms the others.
arXiv Detail & Related papers (2022-04-04T16:32:49Z)
- Recent Few-Shot Object Detection Algorithms: A Survey with Performance Comparison [54.357707168883024]
Few-Shot Object Detection (FSOD) mimics humans' ability to learn to learn.
FSOD intelligently transfers generic object knowledge learned from the common heavy-tailed classes to the novel long-tailed object classes.
We give an overview of FSOD, including the problem definition, common datasets, and evaluation protocols.
arXiv Detail & Related papers (2022-03-27T04:11:28Z)
- A Survey of Community Detection Approaches: From Statistical Modeling to Deep Learning [95.27249880156256]
We develop and present a unified architecture of network community-finding methods.
We introduce a new taxonomy that divides the existing methods into two categories, namely probabilistic graphical model and deep learning.
We conclude with discussions of the challenges of the field and suggestions of possible directions for future research.
arXiv Detail & Related papers (2021-01-03T02:32:45Z)
- Continual Learning for Natural Language Generation in Task-oriented Dialog Systems [72.92029584113676]
Natural language generation (NLG) is an essential component of task-oriented dialog systems.
We study NLG in a "continual learning" setting to expand its knowledge to new domains or functionalities incrementally.
The major challenge towards this goal is catastrophic forgetting, meaning that a continually trained model tends to forget the knowledge it has learned before.
arXiv Detail & Related papers (2020-10-02T10:32:29Z)
- A Unifying Review of Deep and Shallow Anomaly Detection [38.202998314502786]
We aim to identify the common underlying principles as well as the assumptions that are often made implicitly by various methods.
We provide an empirical assessment of major existing methods that is enriched by the use of recent explainability techniques.
We outline critical open challenges and identify specific paths for future research in anomaly detection.
arXiv Detail & Related papers (2020-09-24T14:47:54Z)
- A Systematic Survey of Regularization and Normalization in GANs [25.188671290175208]
Generative Adversarial Networks (GANs) have been widely applied in different scenarios thanks to the development of deep neural networks.
It is still unknown whether GANs can fit the target distribution without any prior information.
Regularization and normalization are common methods of introducing prior information to stabilize training and improve discrimination.
arXiv Detail & Related papers (2020-08-19T12:52:10Z)
- A Survey on Generative Adversarial Networks: Variants, Applications, and Training [9.299132423767992]
Generative Adversarial Networks (GAN) have gained considerable attention in the field of unsupervised learning.
Despite GAN's excellent success, there are still obstacles to stable training.
Herein, we survey several training solutions proposed by different researchers to stabilize GAN training.
arXiv Detail & Related papers (2020-06-09T09:04:41Z)
- A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications [154.4832792036163]
Generative adversarial networks (GANs) have recently been a hot research topic.
GANs have been widely studied since 2014, and a large number of algorithms have been proposed.
This paper provides a review on various GANs methods from the perspectives of algorithms, theory, and applications.
arXiv Detail & Related papers (2020-01-20T01:52:05Z)
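Spectral normalization, mentioned across the GAN regularization and normalization surveys above, divides a weight matrix by an estimate of its largest singular value, typically obtained by power iteration. The following is a minimal NumPy sketch of that idea under simplifying assumptions (a fixed iteration count, a dense matrix); it is not any paper's reference implementation, and all names are illustrative.

```python
import numpy as np

def spectral_normalize(W, n_iter=50, eps=1e-12):
    """Return W / sigma_max(W), with sigma_max estimated by power iteration."""
    rng = np.random.default_rng(1)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
        u = W @ v
        u /= np.linalg.norm(u) + eps
    sigma = u @ W @ v  # Rayleigh-quotient estimate of the top singular value
    return W / sigma

W = np.diag([3.0, 1.0, 0.5])       # largest singular value is 3
W_sn = spectral_normalize(W)
# The largest singular value of the normalized matrix is approximately 1,
# which constrains the Lipschitz constant of the corresponding linear layer.
print(np.linalg.svd(W_sn, compute_uv=False)[0])
```

In practice (e.g., in GAN discriminators) the singular-vector estimates are usually carried over between training steps so that a single power iteration per step suffices.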
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.