An Empirical Study on GANs with Margin Cosine Loss and Relativistic
Discriminator
- URL: http://arxiv.org/abs/2110.11293v2
- Date: Fri, 22 Oct 2021 02:28:47 GMT
- Title: An Empirical Study on GANs with Margin Cosine Loss and Relativistic
Discriminator
- Authors: Cuong V. Nguyen, Tien-Dung Cao, Tram Truong-Huu, Khanh N. Pham, Binh
T. Nguyen
- Abstract summary: We introduce a new loss function, namely Relativistic Margin Cosine Loss (RMCosGAN).
We compare the performance of RMCosGAN with that of existing loss functions using two metrics: Fréchet Inception Distance and Inception Score.
The experimental results show that RMCosGAN outperforms the existing loss functions and significantly improves the quality of the generated images.
- Score: 4.899818550820575
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative Adversarial Networks (GANs) have emerged as useful generative
models, which are capable of implicitly learning data distributions of
arbitrarily complex dimensions. However, GAN training is empirically well
known to be highly unstable and sensitive. The loss functions of both the
discriminator and the generator, viewed as functions of their parameters, tend
to oscillate wildly during training. Different loss functions have been
proposed to stabilize training and improve the quality of the generated
images. In this
paper, we perform an empirical study on the impact of several loss functions on
the performance of standard GAN models, Deep Convolutional Generative
Adversarial Networks (DCGANs). We introduce a new improvement that replaces
the classical deterministic discriminator in DCGANs with a relativistic
discriminator and implements a margin cosine loss function for both the
generator and the discriminator. This results in a novel loss function, namely Relativistic
Margin Cosine Loss (RMCosGAN). We carry out extensive experiments with four
datasets: CIFAR-10, MNIST, STL-10, and CAT. We compare the performance of
RMCosGAN with that of existing loss functions using two metrics: Fréchet
Inception Distance (FID) and Inception Score (IS). The experimental results
show that RMCosGAN outperforms the existing loss functions and significantly
improves the quality of the generated images.
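For reference, FID compares Gaussian statistics of Inception features of real and generated images, $\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\big(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\big)$, where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the feature means and covariances of the real and generated data; lower FID and higher Inception Score are better.

The abstract does not spell out the RMCosGAN objective, so the following PyTorch sketch is only a hedged illustration of how a relativistic average (RaGAN-style) discriminator can be combined with a CosFace-style margin cosine term. The scale s, the margin m, and the placement of the margin are assumptions, not necessarily the paper's exact formulation.

```python
# Hedged sketch of a relativistic margin cosine loss (illustrative only):
# the critic produces a cosine score, which is made relativistic by
# subtracting the mean score of the opposite batch, then scaled by s with
# a margin m applied on the side that should be classified as "real".
import torch
import torch.nn.functional as F

def cosine_score(features: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between L2-normalized features and a learned vector."""
    return F.linear(F.normalize(features, dim=1), F.normalize(weight, dim=1)).squeeze(1)

def d_loss_rmcos(cos_real, cos_fake, s=10.0, m=0.5):
    rel_real = cos_real - cos_fake.mean()  # real should score above the average fake
    rel_fake = cos_fake - cos_real.mean()  # fake should score below the average real
    return (F.binary_cross_entropy_with_logits(s * (rel_real - m), torch.ones_like(rel_real))
            + F.binary_cross_entropy_with_logits(s * rel_fake, torch.zeros_like(rel_fake)))

def g_loss_rmcos(cos_real, cos_fake, s=10.0, m=0.5):
    rel_fake = cos_fake - cos_real.mean()  # the generator reverses the ordering
    rel_real = cos_real - cos_fake.mean()
    return (F.binary_cross_entropy_with_logits(s * (rel_fake - m), torch.ones_like(rel_fake))
            + F.binary_cross_entropy_with_logits(s * rel_real, torch.zeros_like(rel_real)))
```

In a DCGAN, cos_real and cos_fake would come from cosine_score applied to the discriminator's penultimate features for a real batch and a generated batch, respectively.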
Related papers
- GANetic Loss for Generative Adversarial Networks with a Focus on Medical Applications [0.0]
Generative adversarial networks (GANs) are machine learning models that are used to estimate the underlying statistical structure of a given dataset.
Various loss functions have been proposed aiming to improve the performance and stability of the generative models.
In this study, loss function design for GANs is presented as an optimization problem solved using the genetic programming (GP) approach.
arXiv Detail & Related papers (2024-06-07T15:43:29Z)
- MCGAN: Enhancing GAN Training with Regression-Based Generator Loss [5.7645234295847345]
A generative adversarial network (GAN) has emerged as a powerful tool for generating high-fidelity data.
We propose an algorithm called Monte Carlo GAN (MCGAN).
This approach, utilizing an innovative generative loss function, termed the regression loss, reformulates the generator training as a regression task.
We show that our method requires a weaker condition on the discriminator for effective generator training.
arXiv Detail & Related papers (2024-05-27T14:15:52Z)
- Improving GANs with A Dynamic Discriminator [106.54552336711997]
We argue that a discriminator whose capacity is adjusted on the fly can better accommodate such a time-varying task.
A comprehensive empirical study confirms that the proposed training strategy, termed DynamicD, improves the synthesis performance without incurring any additional cost or training objectives.
arXiv Detail & Related papers (2022-09-20T17:57:33Z)
- Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data.
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
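A minimal sketch of the idea, under assumed names and shapes (the head design and augmentation parameterization here are illustrative, not the paper's):

```python
# Hedged sketch of an augmentation-aware self-supervised discriminator:
# besides the real/fake logit, an auxiliary head regresses the parameters
# of the augmentation applied to its input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugPredictionHead(nn.Module):
    def __init__(self, feat_dim: int, aug_dim: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, aug_dim)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.fc(features)

def aug_self_supervision_loss(head, features, true_aug_params):
    # Penalize the discriminator for failing to recover the augmentation.
    return F.mse_loss(head(features), true_aug_params)
```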
arXiv Detail & Related papers (2022-05-31T10:35:55Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
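A minimal sketch of the APA mechanism (the adaptive update of the deception probability p, which the paper drives with an overfitting heuristic, is simplified to a plain argument here):

```python
# Hedged sketch of Adaptive Pseudo Augmentation: with probability p, a real
# sample is replaced by a detached generated one and presented to the
# discriminator as "real", softening an overfitting discriminator.
import torch

def apa_batch(real: torch.Tensor, fake: torch.Tensor, p: float) -> torch.Tensor:
    mask = torch.rand(real.size(0), device=real.device) < p
    mask = mask.view(-1, *([1] * (real.dim() - 1)))  # broadcast over C, H, W
    return torch.where(mask, fake.detach(), real)
```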
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
- Adaptive Weighted Discriminator for Training Generative Adversarial Networks [11.68198403603969]
We introduce a new family of discriminator loss functions that adopts a weighted sum of real and fake parts.
Our method can potentially be applied to any discriminator model whose loss is a sum of real and fake parts, as in the sketch below.
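A rough sketch of this family follows; the fixed weights are placeholders, whereas the paper adapts them during training:

```python
# Hedged sketch: discriminator loss as a weighted sum of its real and fake
# parts; w_real and w_fake are fixed here only for illustration.
import torch
import torch.nn.functional as F

def weighted_d_loss(logits_real, logits_fake, w_real=0.5, w_fake=0.5):
    loss_real = F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
    loss_fake = F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake))
    return w_real * loss_real + w_fake * loss_fake
```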
arXiv Detail & Related papers (2020-12-05T23:55:42Z)
- $\sigma^2$R Loss: a Weighted Loss by Multiplicative Factors using Sigmoidal Functions [0.9569316316728905]
We introduce a new loss function called squared reduction loss ($\sigma^2$R loss), which is regulated by a sigmoid function to inflate/deflate the error per instance.
Our loss has a clear intuition and geometric interpretation, and we demonstrate its effectiveness through experiments.
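A hedged reading of this idea (the exact sigmoid gating and its parameters in the paper may differ):

```python
# Hedged sketch of a sigmoid-regulated loss: each instance's squared error
# is inflated or deflated by a multiplicative sigmoid factor; scale and
# shift are illustrative assumptions.
import torch

def sigma2r_style_loss(per_instance_error: torch.Tensor, scale=1.0, shift=0.0):
    factor = torch.sigmoid(scale * (per_instance_error.detach() - shift))
    return (factor * per_instance_error.pow(2)).mean()
```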
arXiv Detail & Related papers (2020-09-18T12:34:40Z)
- Least $k$th-Order and Rényi Generative Adversarial Networks [12.13405065406781]
Experimental results indicate that the proposed loss functions, applied to the MNIST and CelebA datasets, confer performance benefits by virtue of the extra degrees of freedom provided by the parameters $k$ and $\alpha$, respectively.
While applied to GANs in this study, the proposed approach is generic and can be used in other applications of information theory to deep learning, e.g., issues of fairness or privacy in artificial intelligence.
arXiv Detail & Related papers (2020-06-03T18:44:05Z)
- Feature Quantization Improves GAN Training [126.02828112121874]
Feature Quantization (FQ) for the discriminator embeds both true and fake data samples into a shared discrete space.
Our method can be easily plugged into existing GAN models, with little computational overhead in training.
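A minimal sketch of the mechanism (codebook updates and where quantization is applied in the network are simplified assumptions):

```python
# Hedged sketch of feature quantization in the discriminator: features are
# snapped to their nearest codebook entry, with a straight-through estimator
# so gradients still flow to the continuous features.
import torch

def quantize_features(features: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    # features: (N, D); codebook: (K, D)
    codes = codebook[torch.cdist(features, codebook).argmin(dim=1)]
    return features + (codes - features).detach()  # straight-through
```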
arXiv Detail & Related papers (2020-04-05T04:06:50Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high-dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern for generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that improves generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
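For intuition, a discriminator-side triplet loss of this kind could look like the sketch below; the embedding choice and margin are assumptions, not the paper's exact design:

```python
# Hedged sketch: triplet loss on discriminator embeddings, pulling two real
# embeddings together while pushing a fake embedding away by a margin.
import torch.nn.functional as F

def relation_triplet_loss(real_anchor, real_positive, fake_negative, margin=1.0):
    return F.triplet_margin_loss(real_anchor, real_positive, fake_negative, margin=margin)
```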
arXiv Detail & Related papers (2020-02-24T11:35:28Z)