Stabilization of generative adversarial networks via noisy scale-space
- URL: http://arxiv.org/abs/2105.00220v2
- Date: Tue, 4 May 2021 01:12:19 GMT
- Title: Stabilization of generative adversarial networks via noisy scale-space
- Authors: Kensuke Nakamura and Simon Korman and Byung-Woo Hong
- Abstract summary: Generative adversarial networks (GANs) are a framework for generating fake data from given real data.
Injecting noise stabilizes GANs by enlarging the overlap of the real and fake distributions.
Smoothing the data may reduce its dimensionality, but it suppresses the capability of GANs to learn high-frequency information.
- Score: 6.574517227976925
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative adversarial networks (GANs) are a framework for generating fake data from given real data, but their optimization is unstable. Injecting noise stabilizes GANs by enlarging the overlap of the real and fake distributions, at the cost of significant variance. Smoothing the data may reduce its dimensionality, but it suppresses the capability of GANs to learn high-frequency information. Based on these observations, we propose a data representation for GANs, called noisy scale-space, that recursively applies smoothing with noise to the data in order to preserve the data variance while replacing high-frequency information with random data, leading to a coarse-to-fine training of GANs. We also present a synthetic dataset based on the Hadamard bases that enables us to visualize the true distribution of the data. We experiment with a DCGAN using the noisy scale-space (NSS-GAN) on major datasets; NSS-GAN outperformed the state of the art in most cases, independent of the image content.
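A minimal sketch of the noisy scale-space construction, assuming Gaussian smoothing of a 2-D array and i.i.d. Gaussian noise calibrated to restore the variance removed at each level; function and parameter names are illustrative, not taken from the paper's code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noisy_scale_space(x, levels, sigma=1.0, rng=None):
    """Illustrative noisy scale-space: recursively smooth the data and
    re-inject Gaussian noise so each level keeps roughly the original
    variance, replacing high-frequency content with random data."""
    rng = np.random.default_rng() if rng is None else rng
    pyramid = [x]
    for _ in range(levels):
        prev = pyramid[-1]
        smoothed = gaussian_filter(prev, sigma=sigma)      # remove high frequencies
        lost_var = max(prev.var() - smoothed.var(), 0.0)   # variance removed by smoothing
        noisy = smoothed + rng.normal(0.0, np.sqrt(lost_var), size=prev.shape)
        pyramid.append(noisy)
    return pyramid  # pyramid[-1] is coarsest/noisiest
```

Training would then proceed coarse-to-fine: start the GAN on the coarsest, noisiest level and move toward the original data as optimization stabilizes.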
Related papers
- Scaling-based Data Augmentation for Generative Models and its Theoretical Extension [2.449909275410288]
We study stable learning methods for generative models that enable high-quality data generation.
Data scaling is a key component for stable learning and high-quality data generation.
We propose a learning algorithm, Scale-GAN, that uses data scaling and variance-based regularization.
arXiv Detail & Related papers (2024-10-28T06:41:19Z)
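A loose illustration of the data-scaling component in Scale-GAN above; the per-sample uniform scale and its range are assumptions, not taken from the paper:

```python
import numpy as np

def scale_augment(x, rng, low=0.5, high=2.0):
    """Illustrative scaling-based augmentation: draw one random scale per
    sample and multiply, so the model sees the data at many amplitudes."""
    s = rng.uniform(low, high, size=(x.shape[0],) + (1,) * (x.ndim - 1))
    return s * x  # x: (batch, ...) array; rng = np.random.default_rng()
```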
- Reduced Effectiveness of Kolmogorov-Arnold Networks on Functions with Noise [9.492965765929963]
Noise in a dataset can significantly degrade the performance of Kolmogorov-Arnold networks.
We propose an oversampling technique combined with denoising to alleviate the impact of noise.
We conclude that applying both oversampling and filtering strategies can reduce the detrimental effects of noise.
arXiv Detail & Related papers (2024-07-20T14:17:10Z)
- SEMRes-DDPM: Residual Network Based Diffusion Modelling Applied to Imbalanced Data [9.969882349165745]
In the field of data mining and machine learning, commonly used classification models cannot learn effectively from imbalanced data.
Most classical oversampling methods are based on the SMOTE technique, which focuses only on local information in the data.
We propose a novel oversampling method SEMRes-DDPM.
arXiv Detail & Related papers (2024-03-09T14:01:04Z)
- SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt).
arXiv Detail & Related papers (2023-11-30T03:05:14Z)
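A rough sketch of how a score-matching regularity term could enter a generator loss, assuming access to a pretrained score network score_net(x) approximating the gradient of log p_data(x); SMaRt's exact loss may differ:

```python
import torch

def smart_regularizer(x_fake, score_net, weight=0.1):
    """Illustrative score-matching regularity: the gradient of this term
    with respect to x_fake is -weight * score(x_fake), so minimizing it
    pushes generated points toward higher data density (the data manifold)."""
    with torch.no_grad():
        score = score_net(x_fake)  # approximates grad_x log p_data(x_fake)
    return -weight * (score * x_fake).sum() / x_fake.shape[0]
```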
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging since the high dimensionality of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
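One hedged sketch of what the variance regularization in LD-GAN above could look like on the autoencoder's low-dimensional code; the target value and weight are assumptions:

```python
import torch

def latent_variance_penalty(z, target_var=1.0, weight=1e-2):
    """Illustrative statistical regularization: keep the per-dimension
    variance of the latent code z (shape: batch x dim) near a target so
    the GAN later samples from a well-spread representation."""
    return weight * ((z.var(dim=0) - target_var) ** 2).mean()
```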
- CFNet: Conditional Filter Learning with Dynamic Noise Estimation for Real Image Denoising [37.29552796977652]
This paper considers real noise approximated by heteroscedastic Gaussian/Poisson Gaussian distributions with in-camera signal processing pipelines.
We propose a novel conditional filter in which the optimal kernels for different feature positions can be adaptively inferred by local features from the image and the noise map.
We also integrate alternating noise estimation and non-blind denoising into the CNN structure, continuously updating the noise prior to guide iterative feature denoising.
arXiv Detail & Related papers (2022-11-26T14:28:54Z)
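A minimal sketch of the per-position (conditional) filtering in CFNet above; a small network, not shown, would predict a k-by-k kernel at every pixel from local features and the noise map, and this only illustrates applying such kernels:

```python
import torch
import torch.nn.functional as F

def apply_conditional_filters(feat, kernels, k=3):
    """Apply a predicted k*k kernel at each spatial location.
    feat:    B x C x H x W feature map
    kernels: B x k*k x H x W per-pixel kernels (assumed predicted from
             local image features and a noise map)."""
    B, C, H, W = feat.shape
    patches = F.unfold(feat, k, padding=k // 2)          # B x (C*k*k) x (H*W)
    patches = patches.view(B, C, k * k, H, W)
    return (patches * kernels.unsqueeze(1)).sum(dim=2)   # weighted neighborhood sum
```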
- Diffusion-GAN: Training GANs with Diffusion [135.24433011977874]
Generative adversarial networks (GANs) are challenging to train stably.
We propose Diffusion-GAN, a novel GAN framework that leverages a forward diffusion chain to generate instance noise.
We show that Diffusion-GAN can produce more realistic images with higher stability and data efficiency than state-of-the-art GANs.
arXiv Detail & Related papers (2022-06-05T20:45:01Z)
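A short sketch of the instance-noise idea in Diffusion-GAN above, assuming a standard variance-preserving forward diffusion q(x_t | x_0) applied identically to real and generated samples before the discriminator; the schedule values are assumptions:

```python
import torch

def diffuse(x, t, betas):
    """Forward-diffusion instance noise:
    x_t = sqrt(abar_t) * x + sqrt(1 - abar_t) * eps,
    with abar_t the cumulative product of (1 - beta)."""
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t]
    return alpha_bar.sqrt() * x + (1.0 - alpha_bar).sqrt() * torch.randn_like(x)

# Usage sketch: betas = torch.linspace(1e-4, 2e-2, 1000)
# t = torch.randint(0, 1000, ()); both D(diffuse(x_real, t, betas)) and
# D(diffuse(G(z), t, betas)) are perturbed with the same t.
```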
- GANs for learning from very high class conditional noisy labels [1.6516902135723865]
We use Generative Adversarial Networks (GANs) to design a class conditional label noise (CCN) robust scheme for binary classification.
It first generates a set of correctly labelled data points from noisy labelled data and 0.1% or 1% clean labels.
arXiv Detail & Related papers (2020-10-19T15:01:11Z)
- Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
arXiv Detail & Related papers (2020-07-28T09:17:50Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
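A hedged sketch of loss-value perturbation in the spirit of RDP-GAN above; the clipping bound and noise scale are assumptions, and the Rényi privacy accounting that calibrates them is omitted:

```python
import torch

def dp_perturbed_loss(loss, sigma, clip=1.0):
    """Gaussian mechanism on the scalar loss value: bound the loss
    (sensitivity control), then add zero-mean Gaussian noise before
    backpropagation."""
    return loss.clamp(-clip, clip) + torch.randn(()) * sigma
```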
- Distribution Approximation and Statistical Estimation Guarantees of Generative Adversarial Networks [82.61546580149427]
Generative Adversarial Networks (GANs) have achieved a great success in unsupervised learning.
This paper provides approximation and statistical guarantees of GANs for the estimation of data distributions with densities in a Hölder space.
arXiv Detail & Related papers (2020-02-10T16:47:57Z)