UQGAN: A Unified Model for Uncertainty Quantification of Deep
Classifiers trained via Conditional GANs
- URL: http://arxiv.org/abs/2201.13279v1
- Date: Mon, 31 Jan 2022 14:42:35 GMT
- Authors: Philipp Oberdiek, Gernot A. Fink, Matthias Rottmann
- Abstract summary: We present an approach to quantifying uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs).
Instead of shielding the entire in-distribution data with GAN-generated OoD examples, we shield each class separately with out-of-class examples generated by a conditional GAN.
In particular, we improve over the OoD detection and FP detection performance of state-of-the-art GAN-training based classifiers.
- Score: 9.496524884855559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an approach to quantifying both aleatoric and epistemic
uncertainty for deep neural networks in image classification, based on
generative adversarial networks (GANs). While most works in the literature that
use GANs to generate out-of-distribution (OoD) examples only focus on the
evaluation of OoD detection, we present a GAN based approach to learn a
classifier that exhibits proper uncertainties for OoD examples as well as for
false positives (FPs). Instead of shielding the entire in-distribution data
with GAN-generated OoD examples, as is state-of-the-art, we shield each class
separately with out-of-class examples generated by a conditional GAN and
complement this with a one-vs-all image classifier. In our experiments, in
particular on CIFAR10, we improve over the OoD detection and FP detection
performance of state-of-the-art GAN-training based classifiers. Furthermore, we
find that the generated GAN examples do not significantly affect the
calibration error of our classifier and result in a significant gain in model
accuracy.
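The class-wise shielding with a one-vs-all classifier can be made concrete. Below is a minimal PyTorch-style sketch of the training signal described in the abstract; the generator interface `generator(z, y)`, the `latent_dim` parameter, and the equal weighting of the two terms are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def one_vs_all_shielding_loss(classifier, generator, x, y, num_classes, latent_dim):
    """Sketch: shield each class with cGAN out-of-class examples.

    `classifier` returns one logit per class (one-vs-all heads).
    Real examples of class k push head k toward 1; conditional-GAN
    samples generated out-of-class for k push head k toward 0.
    """
    logits = classifier(x)                        # (B, K) one-vs-all logits
    targets = F.one_hot(y, num_classes).float()   # 1 only on the true class
    in_dist = F.binary_cross_entropy_with_logits(logits, targets)

    with torch.no_grad():                         # generator is fixed here
        z = torch.randn(x.size(0), latent_dim, device=x.device)
        x_ooc = generator(z, y)                   # assumed cGAN interface
    ooc_logit = classifier(x_ooc).gather(1, y.unsqueeze(1))
    shielding = F.binary_cross_entropy_with_logits(
        ooc_logit, torch.zeros_like(ooc_logit))   # head y should reject x_ooc

    return in_dist + shielding                    # equal weighting assumed
```

With independent sigmoid heads, low scores on all classes can then be read as epistemic uncertainty (OoD), while comparable scores on several heads signal aleatoric uncertainty.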
Related papers
- DuDGAN: Improving Class-Conditional GANs via Dual-Diffusion [2.458437232470188]
Class-conditional image generation using generative adversarial networks (GANs) has been investigated through various techniques.
We propose a novel approach for class-conditional image generation using GANs called DuDGAN, which incorporates a dual diffusion-based noise injection process.
Our method outperforms state-of-the-art conditional GAN models for image generation.
arXiv Detail & Related papers (2023-05-24T07:59:44Z)
- Variational Classification [51.2541371924591]
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution instead of the implicit assumption found in a standard softmax layer.
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
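As a rough illustration of the idea (an assumed form, not the paper's exact ELBO objective), one can add a term that pulls the pre-softmax features toward a chosen class-conditional Gaussian:

```python
import torch
import torch.nn.functional as F

def variational_classification_loss(features, logits, y, class_means, log_var):
    """Sketch: induce a chosen Gaussian on softmax-layer inputs.

    `features` are the inputs to the softmax layer, treated as latent
    samples; `class_means` (K, D) and `log_var` parameterize the chosen
    class-conditional Gaussian (illustrative, not the paper's objective).
    """
    ce = F.cross_entropy(logits, y)               # discriminative term
    mu = class_means[y]                           # (B, D) chosen means
    nll = 0.5 * (((features - mu) ** 2) / log_var.exp() + log_var)
    return ce + nll.sum(dim=1).mean()             # pull features to the prior
```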
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
- Generative Model Based Noise Robust Training for Unsupervised Domain Adaptation [108.11783463263328]
This paper proposes a Generative model-based Noise-Robust Training method (GeNRT).
It eliminates domain shift while mitigating label noise.
Experiments on Office-Home, PACS, and Digit-Five show that our GeNRT achieves comparable performance to state-of-the-art methods.
arXiv Detail & Related papers (2023-03-10T06:43:55Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- On the Effectiveness of Generative Adversarial Network on Anomaly Detection [1.6244541005112747]
GANs rely on rich contextual information to identify the actual training distribution.
We suggest a new unsupervised model based on GANs --a combination of an autoencoder and a GAN.
A new scoring function targets anomalies: a linear combination of the discriminator's internal representation, the generator's visual representation, and the autoencoder's encoded representation defines the proposed anomaly score (sketched below).
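A hedged sketch of that linear combination follows; the distance measure and the weights `w_d`, `w_g`, `w_e` are hypothetical stand-ins, not values from the paper.

```python
import torch

def anomaly_score(x, encoder, generator, disc_features,
                  w_d=1.0, w_g=1.0, w_e=1.0):
    """Sketch of the combined anomaly score; weights are hypothetical.

    Linearly combines a discriminator-feature distance, the generator's
    pixel-space reconstruction error, and an autoencoder latent distance.
    """
    z = encoder(x)                                # autoencoder code
    x_rec = generator(z)                          # reconstruction
    s_d = (disc_features(x) - disc_features(x_rec)).flatten(1).abs().mean(1)
    s_g = (x - x_rec).flatten(1).abs().mean(1)
    s_e = (z - encoder(x_rec)).flatten(1).abs().mean(1)
    return w_d * s_d + w_g * s_g + w_e * s_e      # higher = more anomalous
```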
arXiv Detail & Related papers (2021-12-31T16:35:47Z)
- A Unified View of cGANs with and without Classifiers [24.28407308818025]
Conditional Generative Adversarial Networks (cGANs) are implicit generative models which allow sampling from class-conditional distributions.
Some representative cGANs avoid the shortcomings of classifier-based training and reach state-of-the-art performance without classifiers.
In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs.
arXiv Detail & Related papers (2021-11-01T15:36:33Z)
- OMASGAN: Out-of-Distribution Minimum Anomaly Score GAN for Sample Generation on the Boundary [0.0]
Generative models can assign high likelihood and low reconstruction loss to Out-of-Distribution (OoD) samples.
OMASGAN generates, in a negative data augmentation manner, anomalous samples on the estimated distribution boundary.
OMASGAN performs retraining by including the abnormal minimum-anomaly-score OoD samples generated on the distribution boundary.
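A minimal sketch of this retraining step, read as negative data augmentation; the binary labeling convention and optimizer handling are assumptions for illustration:

```python
import torch

def retrain_step(model, optimizer, x_in, x_boundary):
    """Sketch: negative data augmentation with boundary samples.

    In-distribution samples are labeled 1; generated minimum-anomaly-score
    boundary samples are labeled 0, tightening the estimate of the support.
    """
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    optimizer.zero_grad()
    logit_in, logit_bd = model(x_in), model(x_boundary)
    loss = (bce(logit_in, torch.ones_like(logit_in)) +
            bce(logit_bd, torch.zeros_like(logit_bd)))
    loss.backward()
    optimizer.step()
    return loss.item()
```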
arXiv Detail & Related papers (2021-10-28T16:35:30Z)
- cGANs with Auxiliary Discriminative Classifier [43.78253518292111]
Conditional generative models aim to learn the underlying joint distribution of data and labels.
Auxiliary classifier generative adversarial networks (AC-GAN) have been widely used, but suffer from low intra-class diversity in generated samples.
We propose novel cGANs with an auxiliary discriminative classifier (ADC-GAN) to address this issue.
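The key change over AC-GAN's label-only auxiliary classifier can be sketched as a 2K-way discriminative objective; this exact formulation is an assumption based on the summary, not necessarily the paper's loss.

```python
import torch.nn.functional as F

def adc_loss(aux_logits_real, aux_logits_fake, y, num_classes):
    """Sketch: discriminative auxiliary classifier over 2K targets.

    The classifier must jointly tell apart "real, class k" and
    "fake, class k", keeping it aware of the generator distribution
    (unlike AC-GAN's label-only auxiliary classifier).
    """
    real_loss = F.cross_entropy(aux_logits_real, y)                # 0..K-1
    fake_loss = F.cross_entropy(aux_logits_fake, y + num_classes)  # K..2K-1
    return real_loss + fake_loss
```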
arXiv Detail & Related papers (2021-07-21T13:06:32Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set, which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learning a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
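One common way to realize such a bound (shown here as an assumed sketch, not necessarily the paper's construction) uses a latent-recovery network q(z|x): since I(X;Z) <= H(X), maximizing E[log q(z|G(z))] pushes up a lower bound on the entropy of generated samples.

```python
import torch

def entropy_lower_bound(z, x_fake, latent_recovery_net):
    """Sketch: variational entropy term for the generator objective.

    `latent_recovery_net` (hypothetical) predicts the mean of a unit-
    variance Gaussian q(z | x); maximizing E[log q(z | G(z))] raises a
    lower bound on generated-sample entropy, countering mode collapse.
    """
    z_pred = latent_recovery_net(x_fake)
    log_q = -0.5 * ((z - z_pred) ** 2).sum(dim=1)  # log-density up to consts
    return log_q.mean()                            # add with + sign to maximize
```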
arXiv Detail & Related papers (2020-09-24T19:34:37Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OoD detection benchmarks.
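A minimal sketch of the mirrored critic input, assuming pairs are concatenated along the channel dimension (an assumption; the paper's construction may differ in detail):

```python
import torch

def mirrored_wasserstein_critic_loss(critic, x, x_rec):
    """Sketch: critic compares (x, x) against (x, reconstruction).

    Pairing each sample with itself versus with its reconstruction makes
    the critic enforce semantic-level rather than pixel-level agreement.
    """
    real_pair = torch.cat([x, x], dim=1)       # mirrored real pair
    fake_pair = torch.cat([x, x_rec], dim=1)   # sample with reconstruction
    return critic(fake_pair).mean() - critic(real_pair).mean()
```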
arXiv Detail & Related papers (2020-03-24T08:26:58Z)