TransductGAN: a Transductive Adversarial Model for Novelty Detection
- URL: http://arxiv.org/abs/2203.15406v2
- Date: Wed, 30 Mar 2022 10:10:46 GMT
- Title: TransductGAN: a Transductive Adversarial Model for Novelty Detection
- Authors: Najiba Toron, Janaina Mourao-Miranda, John Shawe-Taylor
- Abstract summary: A common setting for novelty detection is inductive whereby only examples of the negative class are available during training time.
Transductive novelty detection, on the other hand, has only recently seen a surge in interest; it not only makes use of the negative class during training but also incorporates the (unlabeled) test set to detect novel examples.
- Score: 4.574919718545737
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Novelty detection, a widely studied problem in machine learning, is the
problem of detecting a novel class of data that has not been previously
observed. A common setting for novelty detection is inductive whereby only
examples of the negative class are available during training time. Transductive
novelty detection, on the other hand, has only recently seen a surge in
interest; it not only makes use of the negative class during training but also
incorporates the (unlabeled) test set to detect novel examples. Several studies
have emerged under the transductive setting umbrella that have demonstrated its
advantage over its inductive counterpart. Depending on the assumptions about
the data, these methods go by different names (e.g. transductive novelty
detection, semi-supervised novelty detection, positive-unlabeled learning,
out-of-distribution detection). With the use of generative adversarial networks
(GAN), a segment of those studies have adopted a transductive setup in order to
learn how to generate examples of the novel class. In this study, we propose
TransductGAN, a transductive generative adversarial network that attempts to
learn how to generate image examples from both the novel and negative classes
by using a mixture of two Gaussians in the latent space. It achieves this by
combining an adversarial autoencoder with a GAN network. The ability to
generate examples of novel data points offers not only a visual representation
of novelties, but also overcomes a hurdle faced by many inductive methods:
how to tune the model hyperparameters at the decision-rule level. Our model has
shown superior performance over state-of-the-art inductive and transductive
methods. Our study is fully reproducible with the code available publicly.
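The core idea in the abstract — a latent space modeled as a mixture of two Gaussians, one component per class — can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the component means, dimensionality, and mixing fraction below are all illustrative assumptions. It shows why such a latent structure sidesteps decision-rule threshold tuning: class membership can be read off from the component posterior directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent-space setup: two Gaussian components, one intended
# for the negative (normal) class and one for the novel class.
MU_NEG, MU_NOV = -2.0, 2.0   # component means (illustrative values)
SIGMA = 1.0                  # shared standard deviation
LATENT_DIM = 8

def sample_latent(n, novel_fraction=0.2):
    """Draw n latent codes from the two-component mixture.

    Returns the codes and a 0/1 label marking which component each
    code was drawn from (1 = novel component).
    """
    labels = rng.random(n) < novel_fraction
    means = np.where(labels[:, None], MU_NOV, MU_NEG)
    codes = means + SIGMA * rng.standard_normal((n, LATENT_DIM))
    return codes, labels.astype(int)

def component_posterior(codes):
    """Posterior probability (equal priors assumed) that each code came
    from the novel component. With a known mixture, this posterior is
    itself the decision rule -- no separate threshold to tune."""
    d_neg = np.sum((codes - MU_NEG) ** 2, axis=1)
    d_nov = np.sum((codes - MU_NOV) ** 2, axis=1)
    log_ratio = (d_neg - d_nov) / (2 * SIGMA ** 2)
    return 1.0 / (1.0 + np.exp(-log_ratio))

codes, labels = sample_latent(1000)
p_novel = component_posterior(codes)
accuracy = np.mean((p_novel > 0.5) == labels)
```

In the actual model the encoder and generator are trained adversarially so that real data maps onto this latent mixture; the sketch only demonstrates the resulting decision rule on synthetic codes.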
Related papers
- Unsupervised Novelty Detection Methods Benchmarking with Wavelet Decomposition [0.22369578015657962]
Unsupervised machine learning algorithms for novelty detection are compared.
A new dataset is gathered from an actuator vibrating at specific frequencies to benchmark the algorithms and evaluate the framework.
Our findings offer valuable insights into the adaptability and robustness of unsupervised learning techniques for real-world novelty detection applications.
arXiv Detail & Related papers (2024-09-11T09:31:28Z) - Universal Novelty Detection Through Adaptive Contrastive Learning [9.302885112237618]
Novelty detection is a critical task for deploying machine learning models in the open world.
We experimentally show that existing methods falter in maintaining universality, which stems from their rigid inductive biases.
We propose a novel probabilistic auto-negative pair generation method, AutoAugOOD, along with contrastive learning, to yield a universal novelty detection method.
arXiv Detail & Related papers (2024-08-20T12:46:23Z) - Tilt your Head: Activating the Hidden Spatial-Invariance of Classifiers [0.7704032792820767]
Deep neural networks are applied in more and more areas of everyday life.
They still lack essential abilities, such as robustly dealing with spatially transformed input signals.
We propose a novel technique to emulate such an inference process for neural nets.
arXiv Detail & Related papers (2024-05-06T09:47:29Z) - Don't Miss Out on Novelty: Importance of Novel Features for Deep Anomaly
Detection [64.21963650519312]
Anomaly Detection (AD) is a critical task that involves identifying observations that do not conform to a learned model of normality.
We propose a novel approach to AD using explainability to capture such novel features as unexplained observations in the input space.
Our approach establishes a new state-of-the-art across multiple benchmarks, handling diverse anomaly types.
arXiv Detail & Related papers (2023-10-01T21:24:05Z) - Few-shot Forgery Detection via Guided Adversarial Interpolation [56.59499187594308]
Existing forgery detection methods suffer from significant performance drops when applied to unseen novel forgery approaches.
We propose Guided Adversarial Interpolation (GAI) to overcome the few-shot forgery detection problem.
Our method is validated to be robust to choices of majority and minority forgery approaches.
arXiv Detail & Related papers (2022-04-12T16:05:10Z) - Adversarially Robust One-class Novelty Detection [83.1570537254877]
We show that existing novelty detectors are susceptible to adversarial examples.
We propose a defense strategy that manipulates the latent space of novelty detectors to improve the robustness against adversarial examples.
arXiv Detail & Related papers (2021-08-25T10:41:29Z) - Novelty Detection via Contrastive Learning with Negative Data
Augmentation [34.39521195691397]
We introduce a novel generative network framework for novelty detection.
Our model has significant superiority over cutting-edge novelty detectors.
Our model is more stable to train in a non-adversarial manner, compared to other adversarial-based novelty detection methods.
arXiv Detail & Related papers (2021-06-18T07:26:15Z) - Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z) - CSI: Novelty Detection via Contrastive Learning on Distributionally
Shifted Instances [77.28192419848901]
We propose a simple, yet effective method named contrasting shifted instances (CSI)
In addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself.
Our experiments demonstrate the superiority of our method under various novelty detection scenarios.
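The CSI training scheme described above — contrasting a sample against distributionally-shifted augmentations of itself, not just against other samples — can be sketched in a few lines of numpy. This is an illustrative toy, not the CSI implementation: the "image" is a synthetic gradient, the feature map is identity, and rotation stands in for the distribution-shifting transform.

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    """Cosine similarity between two arrays, flattened."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "image": a horizontal intensity gradient.
img = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))

# Conventional augmentation (small noise): treated as a POSITIVE pair,
# as in standard contrastive learning.
positive = img + 0.01 * rng.standard_normal(img.shape)

# Distributionally-shifted augmentation (rotation): CSI's key idea is
# to treat this as a NEGATIVE of the original sample instead.
shifted = np.rot90(img)

sim_pos = cosine(img, positive)  # high: mild augmentation preserves content
sim_neg = cosine(img, shifted)   # lower: the shift changes the distribution
```

A trained CSI encoder would push `sim_neg` down and keep `sim_pos` high in feature space; at test time, low similarity to all shifted views of the training distribution signals novelty.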
arXiv Detail & Related papers (2020-07-16T08:32:56Z) - Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.