The Benefits of Pairwise Discriminators for Adversarial Training
- URL: http://arxiv.org/abs/2002.08621v1
- Date: Thu, 20 Feb 2020 08:43:59 GMT
- Title: The Benefits of Pairwise Discriminators for Adversarial Training
- Authors: Shangyuan Tong, Timur Garipov, Tommi Jaakkola
- Abstract summary: We introduce a family of objectives by leveraging pairwise discriminators, and show that only the generator needs to converge.
We provide sufficient conditions for local convergence; characterize the capacity balance that should guide the discriminator and generator choices.
We show that practical methods derived from our approach generate higher-resolution images more effectively.
- Score: 1.7188280334580193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training methods typically align distributions by solving
two-player games. However, in most current formulations, even if the generator
aligns perfectly with data, a sub-optimal discriminator can still drive the two
apart. Absent additional regularization, the instability can manifest itself as
a never-ending game. In this paper, we introduce a family of objectives by
leveraging pairwise discriminators, and show that only the generator needs to
converge. The alignment, if achieved, would be preserved with any
discriminator. We provide sufficient conditions for local convergence;
characterize the capacity balance that should guide the discriminator and
generator choices; and construct examples of minimally sufficient
discriminators. Empirically, we illustrate the theory and the effectiveness of
our approach on synthetic examples. Moreover, we show that practical methods
derived from our approach can better generate higher-resolution images.
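The key symmetry behind "alignment is preserved with any discriminator" can be sketched in a toy example. The following is a hypothetical construction, not necessarily the paper's exact objective: a pairwise discriminator scores ordered pairs, is trained to tell (data, generated) pairs from (generated, data) pairs, and the generator minimizes the antisymmetric objective E[D(x, y)] - E[D(y, x)]. When the generator matches the data, swapping the pair's arguments leaves the pair distribution unchanged, so this objective is zero in expectation for any discriminator, trained or not.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_logit(w, b, x, y):
    """Linear pairwise discriminator: scores the ordered pair (x, y).

    A positive score votes "x is data, y is generated". The linear form is a
    toy stand-in for a neural pairwise discriminator.
    """
    return np.concatenate([x, y], axis=-1) @ w + b

def disc_loss(w, b, data, gen):
    """BCE: label (data, gen) pairs 1 and (gen, data) pairs 0.

    np.logaddexp(0, z) is a numerically stable softplus(z) = -log sigmoid(-z).
    """
    pos = pairwise_logit(w, b, data, gen)
    neg = pairwise_logit(w, b, gen, data)
    return np.mean(np.logaddexp(0.0, -pos)) + np.mean(np.logaddexp(0.0, neg))

def gen_objective(w, b, data, gen):
    """Antisymmetric generator objective: E[D(data, gen)] - E[D(gen, data)].

    If `gen` matches `data` in distribution, swapping the arguments leaves
    the pair distribution unchanged, so this is 0 in expectation for ANY
    discriminator (w, b): alignment, once reached, is preserved.
    """
    return (np.mean(pairwise_logit(w, b, data, gen))
            - np.mean(pairwise_logit(w, b, gen, data)))

dim = 2
data = rng.normal(size=(8192, dim))
gen = rng.normal(size=(8192, dim))    # "generator" already matches the data
w, b = rng.normal(size=2 * dim), 0.3  # an arbitrary, untrained discriminator

print(abs(gen_objective(w, b, data, gen)) < 0.3)  # near zero despite random D
```

Note the contrast with a standard two-player objective, where a sub-optimal discriminator still produces a non-zero generator gradient even at perfect alignment; here the antisymmetry cancels it by construction.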
Related papers
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z)
- ELECRec: Training Sequential Recommenders as Discriminators [94.93227906678285]
Sequential recommendation is often framed as a generative task, i.e., training a sequential encoder to generate the next item of a user's interest.
We propose to train the sequential recommenders as discriminators rather than generators.
Our method trains a discriminator to distinguish whether a sampled item is a 'real' target item or not.
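The discriminator-style framing in this summary can be illustrated with a generic sketch (hypothetical, not ELECRec's actual architecture): a scorer over item embeddings is trained with binary cross-entropy to separate the true next item from a sampled one.

```python
import numpy as np

rng = np.random.default_rng(1)

n_items, dim = 100, 8
item_emb = rng.normal(scale=0.1, size=(n_items, dim))  # toy item embeddings
user_state = rng.normal(size=dim)                       # toy encoded history

def discriminate(state, item_id):
    """Logit for "item_id is the user's real next item"."""
    return item_emb[item_id] @ state

def disc_step(state, true_item, lr=0.5):
    """One BCE step: push the true next item's score up and a sampled
    ("fake") item's score down, mirroring discriminator training."""
    # sample a fake item different from the true one
    fake_item = (true_item + 1 + rng.integers(n_items - 1)) % n_items
    for item, label in ((true_item, 1.0), (fake_item, 0.0)):
        p = 1.0 / (1.0 + np.exp(-discriminate(state, item)))  # sigmoid
        item_emb[item] -= lr * (p - label) * state             # BCE gradient

true_item = 7
before = discriminate(user_state, true_item)
for _ in range(50):
    disc_step(user_state, true_item)
print(discriminate(user_state, true_item) > before)  # true item's score rose
```

In contrast with the generative framing, every update here is a binary discrimination between one observed item and one sampled item, rather than a softmax over the whole catalog.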
arXiv Detail & Related papers (2022-04-05T06:19:45Z)
- Re-using Adversarial Mask Discriminators for Test-time Training under Distribution Shifts [10.647970046084916]
We argue that training stable discriminators produces expressive loss functions that we can re-use at inference to detect and correct segmentation mistakes.
We show that we can combine discriminators with image reconstruction costs (via decoders) to further improve the model.
Our method is simple and improves the test-time performance of pre-trained GANs.
arXiv Detail & Related papers (2021-08-26T17:31:46Z)
- Exploring Dropout Discriminator for Domain Adaptation [27.19677042654432]
Adaptation of a classifier to new domains is one of the challenging problems in machine learning.
We propose a curriculum-based dropout discriminator that gradually increases the variance of the sample-based distribution.
An ensemble of discriminators helps the model to learn the data distribution efficiently.
arXiv Detail & Related papers (2021-07-09T06:11:34Z)
- Hybrid Generative-Contrastive Representation Learning [32.84066504783469]
We show that a transformer-based encoder-decoder architecture trained with both contrastive and generative losses can learn highly discriminative and robust representations without hurting the generative performance.
arXiv Detail & Related papers (2021-06-11T04:23:48Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
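The pairwise-ranking baseline this summary refers to is typically a BPR-style objective, shown here as a generic sketch rather than the paper's proposed method: maximize the score gap between an observed ("positive") item and a sampled unobserved ("negative") one.

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores):
    """BPR-style pairwise ranking loss: mean of -log sigmoid(s_pos - s_neg).

    Implemented as softplus(neg - pos) via logaddexp for numerical
    stability; minimizing it pushes observed items above sampled negatives.
    """
    return np.mean(np.logaddexp(0.0, neg_scores - pos_scores))

# A well-ranked batch (positives clearly above negatives) has a lower loss
# than the inverted batch.
pos = np.array([2.0, 1.5, 3.0])
neg = np.array([-1.0, 0.0, 0.5])
print(bpr_loss(pos, neg) < bpr_loss(neg, pos))  # True
```

This is the one-class workaround the summary mentions: since only positive interactions are observed, negatives must be sampled, which is exactly the dependence the pointwise density-estimation alternative avoids.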
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Training GANs with Stronger Augmentations via Contrastive Discriminator [80.8216679195]
We introduce a contrastive representation learning scheme into the GAN discriminator, coined ContraD.
This "fusion" enables the discriminators to work with much stronger augmentations without increasing their training instability.
Our experimental results show that GANs with ContraD consistently improve FID and IS compared to other recent techniques incorporating data augmentations.
arXiv Detail & Related papers (2021-03-17T16:04:54Z)
- One-vs.-One Mitigation of Intersectional Bias: A General Method to Extend Fairness-Aware Binary Classification [0.48733623015338234]
One-vs.-One Mitigation applies fairness-aware machine learning for binary classification to each pair of subgroups defined by the sensitive attributes.
Our method mitigates the intersectional bias much better than conventional methods in all the settings.
arXiv Detail & Related papers (2020-10-26T11:35:39Z)
- Adversarial Soft Advantage Fitting: Imitation Learning without Policy Optimization [48.674944885529165]
Adversarial Imitation Learning alternates between learning a discriminator, which tells expert demonstrations apart from generated ones, and learning a generator policy that produces trajectories able to fool this discriminator.
We propose to remove the burden of the policy optimization steps by leveraging a novel discriminator formulation.
arXiv Detail & Related papers (2020-06-23T18:29:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.