Re-using Adversarial Mask Discriminators for Test-time Training under
Distribution Shifts
- URL: http://arxiv.org/abs/2108.11926v1
- Date: Thu, 26 Aug 2021 17:31:46 GMT
- Title: Re-using Adversarial Mask Discriminators for Test-time Training under
Distribution Shifts
- Authors: Gabriele Valvano, Andrea Leo, Sotirios A. Tsaftaris
- Abstract summary: We argue that training stable discriminators produces expressive loss functions that we can re-use at inference to detect and correct segmentation mistakes.
We show that we can combine discriminators with image reconstruction costs (via decoders) to further improve the model.
Our method is simple and improves the test-time performance of pre-trained GANs.
- Score: 10.647970046084916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Thanks to their ability to learn flexible data-driven losses, Generative
Adversarial Networks (GANs) are an integral part of many semi- and
weakly-supervised methods for medical image segmentation. GANs jointly optimise
a generator and an adversarial discriminator on a set of training data. After
training has completed, the discriminator is usually discarded and only the
generator is used for inference. But should we discard discriminators? In this
work, we argue that training stable discriminators produces expressive loss
functions that we can re-use at inference to detect and correct segmentation
mistakes. First, we identify key challenges and suggest possible solutions to
make discriminators re-usable at inference. Then, we show that we can combine
discriminators with image reconstruction costs (via decoders) to further
improve the model. Our method is simple and improves the test-time performance
of pre-trained GANs. Moreover, we show that it is compatible with standard
post-processing techniques and it has potential to be used for Online
Continual Learning. With our work, we open new research avenues for re-using
adversarial discriminators at inference.
Related papers
- Dynamically Masked Discriminator for Generative Adversarial Networks [71.33631511762782]
Training Generative Adversarial Networks (GANs) remains a challenging problem.
The discriminator trains the generator by learning the distribution of real and generated data.
We propose a novel method for GANs from the viewpoint of online continual learning.
arXiv Detail & Related papers (2023-06-13T12:07:01Z)
- Private GANs, Revisited [16.570354461039603]
We show that the canonical approach for training differentially private GANs can yield significantly improved results after modifications to training.
We show that a simple fix -- taking more discriminator steps between generator steps -- restores parity between the generator and discriminator and improves results.
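The "simple fix" above is purely a matter of the update schedule: several discriminator updates for every generator update. A schedule-only sketch in plain Python (function name and step counts are illustrative; no differential-privacy machinery is shown):

```python
def gan_training_schedule(total_generator_steps, n_disc=5):
    """Return the order of updates: n_disc discriminator steps per generator step."""
    schedule = []
    for _ in range(total_generator_steps):
        for _ in range(n_disc):
            schedule.append("disc")   # discriminator catches up first
        schedule.append("gen")        # then one generator step
    return schedule

steps = gan_training_schedule(total_generator_steps=3, n_disc=5)
```

Increasing `n_disc` gives the discriminator more updates to recover from the noise added for privacy before the generator moves again.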
arXiv Detail & Related papers (2023-02-06T17:11:09Z)
- Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data.
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
arXiv Detail & Related papers (2022-05-31T10:35:55Z)
- Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z)
- Stop Throwing Away Discriminators! Re-using Adversaries for Test-Time Training [10.647970046084916]
We argue that the life cycle of adversarial discriminators should not end after training.
We develop stable mask discriminators that do not overfit or catastrophically forget.
Our method is simple to implement and increases model performance.
arXiv Detail & Related papers (2021-08-26T16:51:28Z)
- MCL-GAN: Generative Adversarial Networks with Multiple Specialized Discriminators [47.19216713803009]
We propose a framework of generative adversarial networks with multiple discriminators.
We guide each discriminator to have expertise in a subset of the entire data.
Despite the use of multiple discriminators, the backbone networks are shared across the discriminators.
arXiv Detail & Related papers (2021-07-15T11:35:08Z)
- Data-Efficient Instance Generation from Instance Discrimination [40.71055888512495]
We propose a data-efficient Instance Generation (InsGen) method based on instance discrimination.
arXiv Detail & Related papers (2021-06-08T17:52:59Z)
- Training GANs with Stronger Augmentations via Contrastive Discriminator [80.8216679195]
We introduce a contrastive representation learning scheme into the GAN discriminator, coined ContraD.
This "fusion" enables the discriminators to work with much stronger augmentations without increasing their training instability.
Our experimental results show that GANs with ContraD consistently improve FID and IS compared to other recent techniques incorporating data augmentations.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss can provide significant improvement on various vision tasks.
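The triplet loss referred to above has a standard hinge form: pull the positive toward the anchor, push the negative away by at least a margin. A minimal NumPy sketch (the relation-network discriminator itself is not shown; this is just the loss, with squared Euclidean distance assumed):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on squared distances: L = max(0, d(a,p) - d(a,n) + margin)."""
    d_pos = np.sum((anchor - positive) ** 2)   # pull the positive closer
    d_neg = np.sum((anchor - negative) ** 2)   # push the negative away
    return max(0.0, d_pos - d_neg + margin)

# The loss is zero once the negative is at least `margin` further
# (in squared distance) from the anchor than the positive.
loss = triplet_loss(np.array([0.0, 0.0]),
                    np.array([0.0, 1.0]),
                    np.array([3.0, 0.0]))
```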
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.