Black-Box Diagnosis and Calibration on GAN Intra-Mode Collapse: A Pilot
Study
- URL: http://arxiv.org/abs/2107.12202v1
- Date: Fri, 23 Jul 2021 06:03:55 GMT
- Title: Black-Box Diagnosis and Calibration on GAN Intra-Mode Collapse: A Pilot
Study
- Authors: Zhenyu Wu, Zhaowen Wang, Ye Yuan, Jianming Zhang, Zhangyang Wang,
Hailin Jin
- Abstract summary: Generative adversarial networks (GANs) nowadays are capable of producing images of incredible realism.
One concern raised is whether the state-of-the-art GAN's learned distribution still suffers from mode collapse.
This paper explores how to diagnose GAN intra-mode collapse and calibrate it, in a novel black-box setting.
- Score: 116.05514467222544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) nowadays are capable of producing
images of incredible realism. One concern raised is whether the
state-of-the-art GAN's learned distribution still suffers from mode collapse,
and what to do if so. Existing diversity tests of samples from GANs are
usually conducted qualitatively on a small scale, and/or depend on access to
the original training data as well as the trained model parameters. This
paper explores how to diagnose GAN intra-mode collapse and calibrate it, in a
novel black-box setting: neither access to the training data nor to the
trained model parameters is assumed. The new setting is demanded in practice,
yet rarely explored and significantly more challenging. As a first stab, we
devise a set of statistical tools based on sampling that can visualize,
quantify, and
rectify intra-mode collapse. We demonstrate the effectiveness of our proposed
diagnosis and calibration techniques, via extensive simulations and
experiments, on unconditional GAN image generation (e.g., face and vehicle).
Our study reveals that intra-mode collapse is still a prevailing problem in
state-of-the-art GANs, and that it is diagnosable and calibratable in
black-box settings. Our code is available at:
https://github.com/VITA-Group/BlackBoxGANCollapse.
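For intuition, here is a minimal sketch of one sampling-based diagnostic in the spirit of the abstract: draw many samples from the black-box generator and measure how often near-duplicate outputs collide. The `generate` and `embed` names in the usage comment are hypothetical stand-ins for the black-box GAN API and a feature extractor; the paper's actual statistical tools may differ.

```python
import numpy as np

def duplicate_rate(embeddings: np.ndarray, tau: float = 0.7) -> float:
    """Fraction of sample pairs whose cosine similarity exceeds tau.

    A high rate among i.i.d. samples hints at intra-mode collapse:
    the generator keeps landing on near-identical outputs. The
    threshold tau is an illustrative choice, not the paper's.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T                      # pairwise cosine similarities
    iu = np.triu_indices(len(z), k=1)  # upper triangle: unique pairs only
    return float((sim[iu] > tau).mean())

# Hypothetical black-box usage:
#   images = [generate(np.random.randn(512)) for _ in range(10_000)]
#   rate = duplicate_rate(np.stack([embed(img) for img in images]))
#   print(f"near-duplicate pair rate: {rate:.2e}")
```

A collision rate far above what independent draws from a diverse distribution would produce is evidence of intra-mode collapse.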
Related papers
- Double Gradient Reversal Network for Single-Source Domain Generalization in Multi-mode Fault Diagnosis [1.9389881806157316]
Extracting domain-invariant fault features from single-mode data for unseen-mode fault diagnosis poses challenges.
Existing methods utilize a generator module to simulate samples of unseen modes.
A double gradient reversal network (DGRN) is proposed to achieve high classification accuracy on unseen modes.
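The summary does not spell out DGRN's architecture; the sketch below shows only the standard gradient reversal layer that such networks build on, in PyTorch.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in backward.

    The classic building block of gradient-reversal approaches: the
    feature extractor is trained *against* an auxiliary classifier,
    encouraging domain/mode-invariant features.
    """
    @staticmethod
    def forward(ctx, x, lam: float = 1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = torch.randn(8, 64, requires_grad=True)
reversed_feats = GradReverse.apply(features)  # would feed a domain classifier
```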
arXiv Detail & Related papers (2024-07-19T02:06:41Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in-domain and out-of-domain scenarios.
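The exact loss is not given in the summary; as a hedged illustration, a train-time calibration auxiliary typically penalizes the gap between predicted confidence and empirical correctness. The sketch below covers only a classification-confidence term; the paper's detection-specific formulation likely differs.

```python
import torch
import torch.nn.functional as F

def calibration_aux_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Penalize the gap between predicted confidence and correctness,
    so that confidence tracks accuracy during training."""
    probs = F.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)
    correct = (pred == targets).float()
    return ((conf - correct) ** 2).mean()

logits = torch.randn(16, 10, requires_grad=True)
targets = torch.randint(0, 10, (16,))
# Auxiliary term added to the task loss; the weight 0.5 is illustrative.
loss = F.cross_entropy(logits, targets) + 0.5 * calibration_aux_loss(logits, targets)
loss.backward()
```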
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Extremely Simple Activation Shaping for Out-of-Distribution Detection [10.539058676970267]
Out-of-distribution (OOD) detection is an important area that stress-tests a model's ability to handle unseen situations.
Existing OOD detection methods either incur extra training steps, require additional data, or make non-trivial modifications to the trained network.
We propose an extremely simple, post-hoc, on-the-fly activation shaping method, ASH, in which a large portion of a sample's activation at a late layer is removed.
Experiments show that such a simple treatment enhances the distinction between in-distribution and out-of-distribution data, enabling state-of-the-art OOD detection.
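A rough sketch of the pruning flavor of activation shaping (the ASH paper also studies binarizing and scaling variants; exact hyperparameters are illustrative here):

```python
import torch

def ash_p(activation: torch.Tensor, percentile: float = 90.0) -> torch.Tensor:
    """ASH-style pruning: per sample, zero out all but the top
    (100 - percentile)% of late-layer activation values.

    Post-hoc and on-the-fly: no retraining, applied at inference only.
    """
    flat = activation.flatten(start_dim=1)
    k = max(1, int(flat.shape[1] * (100.0 - percentile) / 100.0))
    topk_vals, topk_idx = flat.topk(k, dim=1)
    pruned = torch.zeros_like(flat).scatter_(1, topk_idx, topk_vals)
    return pruned.view_as(activation)

feats = torch.relu(torch.randn(4, 512, 7, 7))  # e.g., a penultimate feature map
shaped = ash_p(feats, percentile=90.0)          # fed onward to the classifier head
```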
arXiv Detail & Related papers (2022-09-20T17:09:49Z)
- On-the-Fly Test-time Adaptation for Medical Image Segmentation [63.476899335138164]
Adapting the source model to the target data distribution at test time is an efficient solution to the data-shift problem.
We propose a new framework called Adaptive UNet where each convolutional block is equipped with an adaptive batch normalization layer.
During test-time, the model takes in just the new test image and generates a domain code to adapt the features of source model according to the test data.
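One plausible reading of "adaptive batch normalization layer", sketched below: the affine parameters of the normalization are predicted from a per-image domain code. The actual Adaptive UNet wiring may differ.

```python
import torch
import torch.nn as nn

class AdaptiveBatchNorm2d(nn.Module):
    """BatchNorm whose affine parameters are modulated by a domain code,
    letting a per-image code adapt source features to unseen test data."""
    def __init__(self, num_features: int, code_dim: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.to_gamma = nn.Linear(code_dim, num_features)
        self.to_beta = nn.Linear(code_dim, num_features)

    def forward(self, x: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        gamma = self.to_gamma(code).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(code).unsqueeze(-1).unsqueeze(-1)
        return (1 + gamma) * self.bn(x) + beta

layer = AdaptiveBatchNorm2d(num_features=64, code_dim=16)
out = layer(torch.randn(2, 64, 32, 32), code=torch.randn(2, 16))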
arXiv Detail & Related papers (2022-03-10T18:51:29Z)
- Collapse by Conditioning: Training Class-conditional GANs with Limited Data [109.30895503994687]
We propose a training strategy for conditional GANs (cGANs) that effectively prevents the observed mode-collapse by leveraging unconditional learning.
Our training strategy starts with an unconditional GAN and gradually injects conditional information into the generator and the objective function.
The proposed method for training cGANs with limited data yields not only stable training but also high-quality generated images.
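A minimal sketch of the gradual-injection idea, assuming a simple linear ramp for the conditioning weight (the paper's actual schedule and injection points may differ):

```python
import torch
import torch.nn as nn

class TransitionalEmbedding(nn.Module):
    """Blend from unconditional to conditional generator input over
    training: early on, the class embedding is (almost) zeroed out."""
    def __init__(self, num_classes: int, dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_classes, dim)

    def forward(self, z: torch.Tensor, y: torch.Tensor, lam: float) -> torch.Tensor:
        return torch.cat([z, lam * self.embed(y)], dim=1)

emb = TransitionalEmbedding(num_classes=10, dim=32)
z, y = torch.randn(4, 128), torch.randint(0, 10, (4,))
step, total = 2_000, 10_000
lam = min(1.0, step / total)   # illustrative linear ramp from 0 to 1
gen_input = emb(z, y, lam)     # shape (4, 160), fed to the generator
```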
arXiv Detail & Related papers (2022-01-17T18:59:23Z)
- Dense Out-of-Distribution Detection by Robust Learning on Synthetic Negative Data [1.7474352892977458]
We show how to detect out-of-distribution anomalies in road-driving scenes and remote sensing imagery.
We leverage a jointly trained normalizing flow, owing to its coverage-oriented learning objective and its capability to generate samples at different resolutions.
The resulting models set the new state of the art on benchmarks for out-of-distribution detection in road-driving scenes and remote sensing imagery.
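As a hedged illustration of training on synthetic negatives, the sketch below combines a standard segmentation loss on labeled pixels with a per-pixel outlier loss on negative pixels; `flow.sample` in the comment is a hypothetical handle for the jointly trained normalizing flow, whose details are not given here.

```python
import torch
import torch.nn.functional as F

def outlier_aware_loss(seg_logits, labels, ood_logits, is_negative):
    """Dense loss: segment real in-distribution pixels, and push a
    per-pixel OOD head to fire on flow-generated negative pixels."""
    seg_loss = F.cross_entropy(seg_logits, labels, ignore_index=255)
    ood_loss = F.binary_cross_entropy_with_logits(ood_logits, is_negative.float())
    return seg_loss + ood_loss

# Toy shapes; in practice the negative pixels come from `flow.sample(...)`
# pasted into the scene at multiple resolutions.
seg_logits = torch.randn(2, 19, 64, 64, requires_grad=True)
labels = torch.randint(0, 19, (2, 64, 64))
ood_logits = torch.randn(2, 64, 64, requires_grad=True)
is_negative = torch.zeros(2, 64, 64, dtype=torch.bool)
loss = outlier_aware_loss(seg_logits, labels, ood_logits, is_negative)
loss.backward()
```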
arXiv Detail & Related papers (2021-12-23T20:35:10Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
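A small sketch of the monitoring idea: fit a density model to in-distribution activations and flag inputs whose activations score as unlikely. DAAIN itself uses a normalizing flow; a Gaussian mixture is substituted here to keep the sketch self-contained.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train_acts = rng.normal(size=(5000, 32))   # stand-in for ID activations
density = GaussianMixture(n_components=8, random_state=0).fit(train_acts)

# Flag anything less likely than the bottom 1% of in-distribution data.
threshold = np.quantile(density.score_samples(train_acts), 0.01)

def is_suspicious(activation: np.ndarray) -> bool:
    """True if the activation's log-likelihood falls below the ID quantile."""
    return density.score_samples(activation[None, :])[0] < threshold

print(is_suspicious(rng.normal(size=32)))            # likely False
print(is_suspicious(rng.normal(loc=8.0, size=32)))   # likely True
```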
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Combating Mode Collapse in GAN training: An Empirical Analysis using Hessian Eigenvalues [4.779196219827507]
Generative adversarial networks (GAN) provide state-of-the-art results in image generation.
Despite being so powerful, GANs still remain very challenging to train, and mode collapse is a common failure.
We show that mode collapse is related to the convergence towards sharp minima.
In particular, we observe how the eigenvalues of the generator $G$'s Hessian are directly correlated with the occurrence of mode collapse.
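A sketch of how such eigenvalues can be estimated without materializing the Hessian, via power iteration on Hessian-vector products; the toy quartic loss stands in for the generator objective.

```python
import torch

def top_hessian_eigenvalue(loss, params, iters: int = 20) -> float:
    """Estimate the largest Hessian eigenvalue of `loss` w.r.t. `params`
    by power iteration on Hessian-vector products (double backprop)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = torch.tensor(0.0)
    for _ in range(iters):
        norm = torch.sqrt(sum((x * x).sum() for x in v))
        v = [x / norm for x in v]
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        eig = sum((h * x).sum() for h, x in zip(hv, v))  # Rayleigh quotient
        v = [h.detach() for h in hv]
    return eig.item()

w = torch.randn(5, requires_grad=True)
loss = (w ** 4).sum()                 # toy stand-in; Hessian is diag(12 w^2)
print(top_hessian_eigenvalue(loss, [w]))
```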
arXiv Detail & Related papers (2020-12-17T15:40:27Z)
- Detecting Rewards Deterioration in Episodic Reinforcement Learning [63.49923393311052]
In many RL applications, once training ends, it is vital to detect any deterioration in the agent performance as soon as possible.
We consider an episodic framework, where the rewards within each episode are neither independent, nor identically distributed, nor Markovian.
We define the mean-shift in a way corresponding to deterioration of a temporal signal (such as the rewards), and derive a test for this problem with optimal statistical power.
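A hedged sketch of a mean-shift test on episode returns; unlike the paper's optimal test, it ignores the within-episode reward covariance and simply treats each episode's return as one observation.

```python
import numpy as np
from scipy import stats

def deterioration_test(reference_eps, recent_eps, alpha: float = 0.05):
    """One-sided test for a downward mean-shift in episode returns:
    reject if recent returns are significantly below the reference."""
    ref = np.array([ep.sum() for ep in reference_eps])
    rec = np.array([ep.sum() for ep in recent_eps])
    t, p = stats.ttest_ind(rec, ref, equal_var=False, alternative='less')
    return p < alpha, float(p)

rng = np.random.default_rng(1)
reference = [rng.normal(1.0, 1.0, size=200) for _ in range(50)]  # healthy agent
degraded = [rng.normal(0.7, 1.0, size=200) for _ in range(10)]   # deteriorated
print(deterioration_test(reference, degraded))
```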
arXiv Detail & Related papers (2020-10-22T12:45:55Z)