Stop Throwing Away Discriminators! Re-using Adversaries for Test-Time
Training
- URL: http://arxiv.org/abs/2108.12280v1
- Date: Thu, 26 Aug 2021 16:51:28 GMT
- Title: Stop Throwing Away Discriminators! Re-using Adversaries for Test-Time
Training
- Authors: Gabriele Valvano, Andrea Leo, Sotirios A. Tsaftaris
- Abstract summary: We argue that the life cycle of adversarial discriminators should not end after training.
We develop stable mask discriminators that do not overfit or catastrophically forget.
Our method is simple to implement and increases model performance.
- Score: 10.647970046084916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Thanks to their ability to learn data distributions without requiring paired
data, Generative Adversarial Networks (GANs) have become an integral part of
many computer vision methods, including those developed for medical image
segmentation. These methods jointly train a segmentor and an adversarial mask
discriminator, which provides a data-driven shape prior. At inference, the
discriminator is discarded, and only the segmentor is used to predict label
maps on test images. But should we discard the discriminator? Here, we argue
that the life cycle of adversarial discriminators should not end after
training. On the contrary, training stable GANs produces powerful shape priors
that we can use to correct segmentor mistakes at inference. To achieve this, we
develop stable mask discriminators that do not overfit or catastrophically
forget. At test time, we fine-tune the segmentor on each individual test
instance until it satisfies the learned shape prior. Our method is simple to
implement and increases model performance. Moreover, it opens new directions
for re-using mask discriminators at inference. We release the code used for the
experiments at https://vios-s.github.io/adversarial-test-time-training.
Related papers
- Dynamically Masked Discriminator for Generative Adversarial Networks [71.33631511762782]
Training Generative Adversarial Networks (GANs) remains a challenging problem.
The discriminator trains the generator by learning the distribution of real versus generated data.
We propose a novel method for GANs from the viewpoint of online continual learning.
arXiv Detail & Related papers (2023-06-13T12:07:01Z)
- Refining Generative Process with Discriminator Guidance in Score-based Diffusion Models [15.571673352656264]
Discriminator Guidance aims to improve sample generation of pre-trained diffusion models.
Unlike GANs, our approach does not require joint training of score and discriminator networks.
We achieve state-of-the-art results on ImageNet 256x256 with FID 1.83 and recall 0.64, similar to the validation data's FID (1.68) and recall (0.66).
arXiv Detail & Related papers (2022-11-28T20:04:12Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Re-using Adversarial Mask Discriminators for Test-time Training under Distribution Shifts [10.647970046084916]
We argue that training stable discriminators produces expressive loss functions that we can re-use at inference to detect and correct segmentation mistakes.
We show that we can combine discriminators with image reconstruction costs (via decoders) to further improve the model.
Our method is simple and improves the test-time performance of pre-trained GANs.
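The entry above mentions combining discriminator scores with image reconstruction costs (via decoders). A minimal sketch of such a combined test-time objective follows; the surrogate shape score, MSE reconstruction term, and weighting `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def shape_score(mask, target_frac=0.3):
    # stand-in adversarial score: low when the mask's foreground
    # fraction matches a plausible shape prior
    return float((mask.mean() - target_frac) ** 2)

def recon_cost(image, recon):
    # decoder fidelity: mean squared reconstruction error
    return float(np.mean((image - recon) ** 2))

def combined_test_time_loss(mask, image, recon, lam=0.1):
    # weighted sum: adversarial shape prior + reconstruction term
    return shape_score(mask) + lam * recon_cost(image, recon)

# a plausible mask with a faithful reconstruction scores lower
# than an implausible mask with a poor reconstruction
image = np.linspace(0.0, 1.0, 100)
good = combined_test_time_loss(np.full(100, 0.3), image, image)
bad = combined_test_time_loss(np.ones(100), image, np.zeros(100))
```

Minimizing such a combined objective per test instance would pull the prediction toward both the learned shape prior and consistency with the input image.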
arXiv Detail & Related papers (2021-08-26T17:31:46Z)
- Novelty Detection via Contrastive Learning with Negative Data Augmentation [34.39521195691397]
We introduce a novel generative network framework for novelty detection.
Our model has significant superiority over cutting-edge novelty detectors.
Our model trains more stably, in a non-adversarial manner, than other adversarial novelty detection methods.
arXiv Detail & Related papers (2021-06-18T07:26:15Z)
- Data-Efficient Instance Generation from Instance Discrimination [40.71055888512495]
We propose a data-efficient Instance Generation (InsGen) method based on instance discrimination.
arXiv Detail & Related papers (2021-06-08T17:52:59Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Training GANs with Stronger Augmentations via Contrastive Discriminator [80.8216679195]
We introduce a contrastive representation learning scheme into the GAN discriminator, coined ContraD.
This "fusion" enables the discriminators to work with much stronger augmentations without increasing their training instability.
Our experimental results show that GANs with ContraD consistently improve FID and IS compared to other recent techniques incorporating data augmentations.
arXiv Detail & Related papers (2021-03-17T16:04:54Z)
- TopoAL: An Adversarial Learning Approach for Topology-Aware Road Segmentation [56.353558147044]
We introduce an Adversarial Learning (AL) strategy tailored for our purposes.
We use a more sophisticated discriminator that returns a label pyramid describing what portions of the road network are correct.
We show that it outperforms state-of-the-art methods on the challenging RoadTracer dataset.
arXiv Detail & Related papers (2020-07-17T16:06:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.