A Systematic Survey of Regularization and Normalization in GANs
- URL: http://arxiv.org/abs/2008.08930v6
- Date: Sat, 8 Oct 2022 14:49:54 GMT
- Title: A Systematic Survey of Regularization and Normalization in GANs
- Authors: Ziqiang Li, Muhammad Usman, Rentuo Tao, Pengfei Xia, Huanhuan Chen,
Bin Li
- Abstract summary: Generative Adversarial Networks (GANs) have been widely applied in different scenarios thanks to the development of deep neural networks.
It is still unknown whether GANs can fit the target distribution without any prior information.
Regularization and normalization are common methods of introducing prior information to stabilize training and improve discrimination.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) have been widely applied in different
scenarios thanks to the development of deep neural networks. The original GAN
was formulated under the non-parametric assumption that networks have infinite
capacity. However, it is still unknown whether GANs can fit the target
distribution without any prior information. Because of this overly strong
assumption, many issues remain unaddressed in GAN training, such as
non-convergence, mode collapse, and vanishing gradients. Regularization and
normalization are common methods of introducing prior information to stabilize
training and improve discrimination. Although a number of regularization and
normalization methods have been proposed for GANs, to the best of our
knowledge, no comprehensive survey primarily focuses on the objectives and
development of these methods, apart from a few incomplete and limited-scope
studies. In this work, we conduct a
comprehensive survey on the regularization and normalization techniques from
different perspectives of GANs training. First, we systematically describe
different perspectives of GANs training and thus obtain the different
objectives of regularization and normalization. Based on these objectives, we
propose a new taxonomy. Furthermore, we compare the performance of the
mainstream methods on different datasets and investigate the applications of
regularization and normalization techniques that have been frequently employed
in state-of-the-art GANs. Finally, we highlight potential future directions of
research in this domain. Code and studies related to the regularization and
normalization of GANs covered in this work are summarized at
https://github.com/iceli1007/GANs-Regularization-Review.
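To make the kind of regularization discussed above concrete, below is a minimal numpy sketch of a WGAN-GP-style gradient penalty, one of the mainstream methods such surveys cover. The linear toy discriminator D(x) = w @ x, the function name `gradient_penalty`, and the lambda value are illustrative assumptions, not code from the survey; a real implementation would compute the input gradient with automatic differentiation.

```python
import numpy as np

def gradient_penalty(w, x_real, x_fake, rng):
    """WGAN-GP-style penalty: push the norm of grad_x D toward 1 at points
    interpolated between real and fake samples. For the toy linear
    discriminator D(x) = w @ x, the input gradient is exactly w."""
    eps = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = eps * x_real + (1.0 - eps) * x_fake   # random interpolates
    grad = np.broadcast_to(w, x_hat.shape)        # grad_x D(x_hat) = w for a linear D
    norms = np.linalg.norm(grad, axis=1)
    return np.mean((norms - 1.0) ** 2)

rng = np.random.default_rng(0)
w = np.array([3.0, 4.0])              # toy discriminator weights, ||w|| = 5
x_real = rng.standard_normal((8, 2))
x_fake = rng.standard_normal((8, 2))
gp = gradient_penalty(w, x_real, x_fake, rng)  # (5 - 1)^2 = 16.0
d_loss = 0.0 + 10.0 * gp              # penalty added to the critic loss (lambda = 10)
```

The penalty term is simply added to the discriminator loss, which is how most gradient-based regularizers in this family enter training.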
Related papers
- NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z)
- TANGOS: Regularizing Tabular Neural Networks through Gradient Orthogonalization and Specialization [69.80141512683254]
We introduce Tabular Neural Gradient Orthogonalization and Specialization (TANGOS).
TANGOS is a novel framework for regularization in the tabular setting built on latent unit attributions.
We demonstrate that our approach can lead to improved out-of-sample generalization performance, outperforming other popular regularization methods.
arXiv Detail & Related papers (2023-03-09T18:57:13Z)
- Adversarially Adaptive Normalization for Single Domain Generalization [71.80587939738672]
We propose a generic normalization approach, adaptive standardization and rescaling normalization (ASR-Norm).
ASR-Norm learns both the standardization and rescaling statistics via neural networks.
We show that ASR-Norm can bring consistent improvement to the state-of-the-art ADA approaches.
arXiv Detail & Related papers (2021-06-01T23:58:23Z)
- Sparsity Aware Normalization for GANs [32.76828505875087]
Generative adversarial networks (GANs) are known to benefit from regularization or normalization of their critic (discriminator) network during training.
In this paper, we analyze the popular spectral normalization scheme, find a significant drawback and introduce sparsity aware normalization (SAN), a new alternative approach for stabilizing GAN training.
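The spectral normalization scheme this entry analyzes divides each discriminator weight matrix by an estimate of its largest singular value, usually obtained by power iteration. A minimal numpy sketch of that estimate, assuming a fixed random start vector and an iteration count chosen for illustration (the function name is hypothetical, not from the paper):

```python
import numpy as np

def spectral_norm(W, n_iter=50, eps=1e-12):
    """Estimate the largest singular value of W by power iteration,
    as spectral normalization does before dividing the weights by it."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    v = np.zeros(W.shape[1])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v) + eps   # right singular vector estimate
        u = W @ v
        u /= np.linalg.norm(u) + eps   # left singular vector estimate
    return u @ W @ v                   # Rayleigh-quotient estimate of sigma_max

W = np.array([[3.0, 0.0],
              [0.0, 1.0]])
sigma = spectral_norm(W)  # top singular value, 3.0 for this diagonal W
W_sn = W / sigma          # normalized weight: spectral norm ~ 1
```

Dividing by sigma caps the layer's Lipschitz constant at roughly 1, which is the stabilizing effect the scheme relies on.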
arXiv Detail & Related papers (2021-03-03T15:05:18Z)
- Normalization Techniques in Training DNNs: Methodology, Analysis and Application [111.82265258916397]
Normalization techniques are essential for accelerating the training and improving the generalization of deep neural networks (DNNs).
This paper reviews and comments on the past, present and future of normalization methods in the context of training.
arXiv Detail & Related papers (2020-09-27T13:06:52Z)
- Dynamics Generalization via Information Bottleneck in Deep Reinforcement Learning [90.93035276307239]
We propose an information theoretic regularization objective and an annealing-based optimization method to achieve better generalization ability in RL agents.
We demonstrate the extreme generalization benefits of our approach in different domains ranging from maze navigation to robotic tasks.
This work provides a principled way to improve generalization in RL by gradually removing information that is redundant for task-solving.
arXiv Detail & Related papers (2020-08-03T02:24:20Z)
- On Connections between Regularizations for Improving DNN Robustness [67.28077776415724]
This paper analyzes regularization terms proposed recently for improving the adversarial robustness of deep neural networks (DNNs).
We study possible connections between several effective methods, including input-gradient regularization, Jacobian regularization, curvature regularization, and a cross-Lipschitz functional.
arXiv Detail & Related papers (2020-07-04T23:43:32Z)
- Regularization Methods for Generative Adversarial Networks: An Overview of Recent Studies [3.829070379776576]
Generative Adversarial Networks (GANs) have been extensively studied and used for various tasks.
Regularization methods have been proposed to make GAN training stable.
arXiv Detail & Related papers (2020-05-19T01:59:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.