When Relation Networks meet GANs: Relation GANs with Triplet Loss
- URL: http://arxiv.org/abs/2002.10174v3
- Date: Tue, 17 Mar 2020 03:28:57 GMT
- Title: When Relation Networks meet GANs: Relation GANs with Triplet Loss
- Authors: Runmin Wu, Kunyao Zhang, Lijun Wang, Yue Wang, Pingping Zhang, Huchuan
Lu, Yizhou Yu
- Abstract summary: Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that achieves better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
- Score: 110.7572918636599
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Though recent research has achieved remarkable progress in generating
realistic images with generative adversarial networks (GANs), the lack of
training stability is still a lingering concern of most GANs, especially on
high-resolution inputs and complex datasets. Since the randomly generated
distribution can hardly overlap with the real distribution, training GANs often
suffers from the gradient vanishing problem. A number of approaches have been
proposed to address this issue by constraining the discriminator's capabilities
using empirical techniques such as weight clipping, gradient penalty, and
spectral normalization. In this paper, we provide a more principled approach as an
alternative solution to this issue. Instead of training the discriminator to
distinguish real and fake input samples, we investigate the relationship
between paired samples by training the discriminator to separate paired samples
from the same distribution and those from different distributions. To this end,
we explore a relation network architecture for the discriminator and design a
triplet loss that achieves better generalization and stability. Extensive
experiments on benchmark datasets show that the proposed relation discriminator
and new loss provide significant improvements on various vision tasks
including unconditional and conditional image generation and image translation.
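As a rough illustration of the pairing idea in the abstract, the sketch below scores pairs of samples with a small relation-style discriminator and applies a margin-based triplet loss that pushes same-distribution (real, real) pairs above cross-distribution (real, fake) pairs. The layer sizes, the `RelationDiscriminator` module, and the margin value are illustrative assumptions rather than the paper's exact architecture or hyper-parameters.

```python
# Minimal sketch (not the authors' implementation): a pair-wise relation
# discriminator trained with a margin-based triplet loss over pair scores.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationDiscriminator(nn.Module):
    """Scores how likely two inputs come from the same distribution."""
    def __init__(self, in_dim=784, feat_dim=128):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                   nn.Linear(256, feat_dim), nn.ReLU())
        self.relate = nn.Sequential(nn.Linear(2 * feat_dim, 128), nn.ReLU(),
                                    nn.Linear(128, 1))

    def forward(self, x1, x2):
        h = torch.cat([self.embed(x1), self.embed(x2)], dim=1)
        return self.relate(h).squeeze(1)  # higher score = "same distribution"

def triplet_relation_loss(D, real_a, real_b, fake, margin=1.0):
    pos = D(real_a, real_b)   # pair drawn from the same (real) distribution
    neg = D(real_a, fake)     # pair drawn from different distributions
    # Require the same-distribution score to exceed the cross-distribution
    # score by at least `margin`.
    return F.relu(margin - pos + neg).mean()
```

In a full training loop, the generator would be updated to raise the relation score of (real, fake) pairs; that half of the objective is omitted here for brevity.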
Related papers
- Understanding normalization in contrastive representation learning and out-of-distribution detection [0.0]
We propose a simple method based on contrastive learning, which incorporates out-of-distribution data by discriminating against normal samples in the contrastive layer space.
Our approach can be applied flexibly as an outlier exposure (OE) approach, or as a fully self-supervised learning approach.
The high-quality features learned through contrastive learning consistently enhance performance in OE scenarios, even when the available out-of-distribution dataset is not diverse enough.
arXiv Detail & Related papers (2023-12-23T16:05:47Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
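A minimal sketch of the minority/majority mixing idea summarized in the imbalanced-classification entry above, assuming a generic mixup-style interpolation; the Beta parameters, the minority-skewed mixing weight, and the function name `mix_minority_majority` are illustrative choices, not the paper's exact procedure.

```python
# Hypothetical mixup-style oversampling sketch for class imbalance (not the
# paper's exact method): synthesize minority-leaning points by convex mixing.
import numpy as np

def mix_minority_majority(X_min, X_maj, n_new, alpha=0.75, rng=None):
    """Create n_new synthetic samples by mixing minority and majority points."""
    rng = rng or np.random.default_rng(0)
    i = rng.integers(0, len(X_min), size=n_new)
    j = rng.integers(0, len(X_maj), size=n_new)
    # Skew the mixing weight toward the minority sample so the synthetic point
    # stays on the minority side of the decision boundary.
    lam = np.maximum(rng.beta(alpha, alpha, size=(n_new, 1)), 0.5)
    return lam * X_min[i] + (1.0 - lam) * X_maj[j]
```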
- Heterogeneous Target Speech Separation [52.05046029743995]
We introduce a new paradigm for single-channel target source separation where the sources of interest can be distinguished using non-mutually exclusive concepts.
Our proposed heterogeneous separation framework can seamlessly leverage datasets with large distribution shifts.
arXiv Detail & Related papers (2022-04-07T17:14:20Z)
- Investigating Shifts in GAN Output-Distributions [5.076419064097734]
We introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN generated data.
Overall, the combination of these methods allows an exploratory investigation of the innate limitations of current GAN algorithms.
arXiv Detail & Related papers (2021-12-28T09:16:55Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with the problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
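For context on the extra-gradient entry above, the classical single-node extra-gradient update that the decentralized method builds on can be sketched as below; the bilinear toy problem, the operator `F`, and the step size are illustrative assumptions, not the paper's setting.

```python
# Classical extra-gradient sketch on a toy bilinear min-max problem
# min_x max_y x*y, whose VI operator is F(x, y) = (y, -x). Illustrative only.
import numpy as np

def extragradient(F, z0, step=0.1, iters=1000):
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_half = z - step * F(z)       # extrapolation (look-ahead) step
        z = z - step * F(z_half)       # update using the look-ahead operator value
    return z

F = lambda z: np.array([z[1], -z[0]])  # monotone operator of the bilinear game
print(extragradient(F, [1.0, 1.0]))    # approaches the saddle point (0, 0)
```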
- Lessons Learned from the Training of GANs on Artificial Datasets [0.0]
Generative Adversarial Networks (GANs) have made great progress in synthesizing realistic images in recent years.
GANs are prone to underfitting or overfitting, making their analysis difficult and constrained.
We train them on artificial datasets where there are infinitely many samples and the real data distributions are simple.
We find that training mixtures of GANs leads to more performance gain compared to increasing the network depth or width.
arXiv Detail & Related papers (2020-07-13T14:51:02Z)
- Robust Federated Learning: The Case of Affine Distribution Shifts [41.27887358989414]
We develop a robust federated learning algorithm that achieves satisfactory performance against distribution shifts in users' samples.
We show that an affine distribution shift indeed suffices to significantly decrease the performance of the learnt classifier in a new test user.
arXiv Detail & Related papers (2020-06-16T03:43:59Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
- Feature Quantization Improves GAN Training [126.02828112121874]
Feature Quantization (FQ) for the discriminator embeds both true and fake data samples into a shared discrete space.
Our method can be easily plugged into existing GAN models, with little computational overhead in training.
arXiv Detail & Related papers (2020-04-05T04:06:50Z)
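As a loose illustration of the feature-quantization entry above, the snippet below snaps discriminator features to their nearest entries in a learned codebook and passes gradients through with a straight-through estimator; the codebook size, feature dimension, and class name `FeatureQuantizer` are generic assumptions rather than the paper's exact formulation.

```python
# Generic feature-quantization sketch (not the paper's implementation): map
# continuous features to nearest codebook entries with straight-through gradients.
import torch
import torch.nn as nn

class FeatureQuantizer(nn.Module):
    def __init__(self, num_codes=256, dim=128):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim) * 0.05)

    def forward(self, feats):                    # feats: (batch, dim)
        d = torch.cdist(feats, self.codebook)    # distance to every code
        idx = d.argmin(dim=1)                    # nearest-code index per sample
        quantized = self.codebook[idx]
        # Straight-through estimator: forward pass uses the code, while
        # gradients flow back to the continuous features unchanged.
        return feats + (quantized - feats).detach(), idx
```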
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.