An Empirical Study of the Effects of Sample-Mixing Methods for Efficient
Training of Generative Adversarial Networks
- URL: http://arxiv.org/abs/2104.03535v1
- Date: Thu, 8 Apr 2021 06:40:23 GMT
- Authors: Makoto Takamoto and Yusuke Morishita
- Abstract summary: It is well known that training generative adversarial networks (GANs) requires many iterations before the generator provides good-quality samples.
We investigate the effect of sample-mixing methods, namely Mixup, CutMix, and SRMix, in alleviating this problem.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: It is well known that training generative adversarial networks (GANs)
requires many iterations before the generator provides good-quality samples.
Although several studies have tackled this problem, there is still no universal
solution. In this paper, we investigate the effect of sample-mixing methods,
namely Mixup, CutMix, and the newly proposed Smoothed Regional Mix (SRMix), in
alleviating this problem. Sample-mixing methods are known to enhance accuracy
and robustness in a wide range of classification problems, and they apply
naturally to GANs because the discriminator's role can be interpreted as
classification between real and fake samples. We also propose a new formalism
for applying sample-mixing methods to GANs with saturated losses, which do not
have a clear "label" of real and fake. We performed a large number of numerical
experiments on the LSUN and CelebA datasets. The results show that Mixup and
SRMix improved the quality of the generated images in terms of FID in most
cases; in particular, SRMix showed the largest improvement in most cases. Our
analysis indicates that mixed samples can provide properties different from
those of vanilla fake samples, and that the mixing pattern strongly affects the
discriminator's decisions. The images generated with Mixup have good high-level
features but less impressive low-level features, while CutMix showed the
opposite tendency. Our SRMix showed an intermediate tendency, with good high-
and low-level features. We believe that our findings provide a new perspective
for accelerating GAN convergence and improving the quality of generated
samples.
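As a rough sketch of the idea (not the authors' exact formalism; the function and parameter names here are illustrative), Mixup can be applied to a discriminator's real and fake batches by interpolating the samples and using the mixing coefficient as a soft real-vs-fake target:

```python
import numpy as np

def mixup_real_fake(real, fake, alpha=1.0, rng=None):
    """Mix real and fake batches for discriminator training (Mixup sketch).

    Draws a per-sample mixing coefficient lam ~ Beta(alpha, alpha),
    interpolates the two batches, and returns the mixed samples together
    with lam as a soft "realness" target, following the interpretation of
    the discriminator as a real-vs-fake classifier.
    """
    rng = rng or np.random.default_rng()
    batch = real.shape[0]
    # Broadcastable shape: (batch, 1, 1, ...) matching the sample dims.
    lam = rng.beta(alpha, alpha, size=(batch,) + (1,) * (real.ndim - 1))
    mixed = lam * real + (1.0 - lam) * fake
    # Soft target: the fraction of "real" in each mixed sample.
    target = lam.reshape(batch)
    return mixed, target
```

A discriminator would then be trained on `mixed` against the soft targets `target` instead of hard 0/1 labels; extending this to saturated GAN losses is exactly the non-trivial part the paper's new formalism addresses.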
Related papers
- SUMix: Mixup with Semantic and Uncertain Information [41.99721365685618]
Mixup data-augmentation approaches have been applied to various deep learning tasks.
We propose a novel approach named SUMix to learn the mixing ratio as well as the uncertainty for the mixed samples during the training process.
arXiv Detail & Related papers (2024-07-10T16:25:26Z)
- AMPLIFY: Attention-based Mixup for Performance Improvement and Label Smoothing in Transformer [2.3072402651280517]
AMPLIFY uses the Attention mechanism of the Transformer itself to reduce the influence of noise and aberrant values in the original samples on the prediction results.
The experimental results show that, under a smaller computational resource cost, AMPLIFY outperforms other Mixup methods in text classification tasks.
arXiv Detail & Related papers (2023-09-22T08:02:45Z)
- Reweighted Mixup for Subpopulation Shift [63.1315456651771]
Subpopulation shift exists in many real-world applications, where the training and test distributions contain the same subpopulation groups but in different proportions.
Importance reweighting is a classical and effective way to handle the subpopulation shift.
We propose a simple yet practical framework, called reweighted mixup, to mitigate the overfitting issue.
arXiv Detail & Related papers (2023-04-09T03:44:50Z)
- RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness [94.69774317059122]
We show that the effectiveness of the well celebrated Mixup can be further improved if instead of using it as the sole learning objective, it is utilized as an additional regularizer to the standard cross-entropy loss.
This simple change not only provides much improved accuracy but also significantly improves the quality of the predictive uncertainty estimation of Mixup.
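The recipe this summary describes, keeping the standard cross-entropy on the clean batch and adding the Mixup cross-entropy as a regularizer, can be illustrated as follows (a minimal sketch; the function names and the plain sum of the two terms are assumptions, not the paper's exact loss):

```python
import numpy as np

def cross_entropy(logits, targets):
    """Mean cross-entropy for soft targets, shape (batch, classes)."""
    # Log-softmax computed from the log-sum-exp of the logits.
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -(targets * logp).sum(axis=1).mean()

def regmixup_loss(logits_clean, y_clean, logits_mixed, y_mixed):
    """Cross-entropy on the clean batch plus cross-entropy on a
    Mixup batch used as an additional regularizer (RegMixup-style)."""
    return cross_entropy(logits_clean, y_clean) + cross_entropy(logits_mixed, y_mixed)
```

Here `y_mixed` would be the interpolated soft labels of the Mixup pair; the key point is that the clean-batch term is kept, rather than replaced, by the Mixup term.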
arXiv Detail & Related papers (2022-06-29T09:44:33Z)
- ReSmooth: Detecting and Utilizing OOD Samples when Training with Data Augmentation [57.38418881020046]
Recent data augmentation (DA) techniques consistently pursue diversity in augmented training samples.
An augmentation strategy that has a high diversity usually introduces out-of-distribution (OOD) augmented samples.
We propose ReSmooth, a framework that firstly detects OOD samples in augmented samples and then leverages them.
arXiv Detail & Related papers (2022-05-25T09:29:27Z)
- Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing [104.630875328668]
The Mixup scheme suggests mixing a pair of samples to create an augmented training sample.
We present a novel, yet simple Mixup-variant that captures the best of both worlds.
arXiv Detail & Related papers (2021-12-16T11:27:48Z)
- Thompson Sampling with a Mixture Prior [59.211830005673896]
We study Thompson sampling (TS) in online decision-making problems where the uncertain environment is sampled from a mixture distribution.
We develop a novel, general technique for analyzing the regret of TS with such priors.
arXiv Detail & Related papers (2021-06-10T09:21:07Z)
- ReMix: Towards Image-to-Image Translation with Limited Data [154.71724970593036]
We propose a data augmentation method (ReMix) to tackle the limited-data issue.
We interpolate training samples at the feature level and propose a novel content loss based on the perceptual relations among samples.
The proposed approach effectively reduces the ambiguity of generation and renders content-preserving results.
arXiv Detail & Related papers (2021-03-31T06:24:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantee of the quality of this information and is not responsible for any consequences of its use.