Discriminative-Generative Representation Learning for One-Class Anomaly Detection
- URL: http://arxiv.org/abs/2107.12753v1
- Date: Tue, 27 Jul 2021 11:46:15 GMT
- Title: Discriminative-Generative Representation Learning for One-Class Anomaly Detection
- Authors: Xuan Xia, Xizhou Pan, Xing He, Jingfei Zhang, Ning Ding and Lin Ma
- Abstract summary: We propose a self-supervised learning framework combining generative methods and discriminative methods.
Our method significantly outperforms several state-of-the-art methods on multiple benchmark data sets.
- Score: 22.500931323372303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a form of generative self-supervised learning, generative
adversarial nets have been widely studied in the field of anomaly detection.
However, the representation learning ability of the generator is limited
because it pays too much attention to pixel-level details, and the generator
struggles to learn abstract semantic representations from label-prediction
pretext tasks as effectively as the discriminator does. To improve the
representation learning ability of the generator, we propose a self-supervised
learning framework that combines generative and discriminative methods. The
generator no longer learns representations through reconstruction error but
through the guidance of the discriminator, and it can therefore benefit from
pretext tasks designed for discriminative methods. Our discriminative-generative
representation learning method approaches the performance of discriminative
methods while holding a large advantage in speed. Applied to one-class anomaly
detection, it significantly outperforms several state-of-the-art methods on
multiple benchmark data sets, improving on the top-performing GAN-based
baseline by 6% on CIFAR-10 and 2% on MVTAD.
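To make the training scheme concrete, below is a minimal, hypothetical sketch of discriminator-guided representation learning in PyTorch. The rotation-prediction pretext task, the feature-matching loss, and all module names are illustrative assumptions; the paper's exact architecture and objectives are not reproduced here.

```python
# Hypothetical sketch: an encoder is trained to match the discriminator's
# features rather than minimize a pixel-level reconstruction loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Learns features from a label-prediction pretext task (here: rotation)."""
    def __init__(self, feat_dim=128, n_rotations=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.head = nn.Linear(feat_dim, n_rotations)

    def forward(self, x):
        feat = self.backbone(x)
        return feat, self.head(feat)

class Encoder(nn.Module):
    """Generator-side representation network guided by the discriminator."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def train_step(disc, enc, opt_d, opt_e, x_rot, rot_labels):
    # x_rot: batch of rotated one-class images; rot_labels: applied rotation ids.
    # 1) The discriminator learns from the discriminative pretext task.
    _, logits = disc(x_rot)
    loss_d = F.cross_entropy(logits, rot_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) The encoder is trained to match the (frozen) discriminator features,
    #    replacing the usual pixel-level reconstruction objective.
    with torch.no_grad():
        target_feat, _ = disc(x_rot)
    loss_e = F.mse_loss(enc(x_rot), target_feat)
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()
    return loss_d.item(), loss_e.item()

def anomaly_score(enc, x, normal_center):
    # Distance to the center of normal-class features serves as the score.
    return torch.norm(enc(x) - normal_center, dim=1)
```

The key point of the sketch is only the division of labor: the discriminator owns the label-prediction pretext task, while the representation network learns from the discriminator's features instead of from pixel reconstruction.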
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z) - Prompt Optimization via Adversarial In-Context Learning [51.18075178593142]
adv-ICL is implemented as a two-player game between a generator and a discriminator.
The generator tries to generate realistic enough output to fool the discriminator.
We show that adv-ICL results in significant improvements over state-of-the-art prompt optimization techniques.
arXiv Detail & Related papers (2023-12-05T09:44:45Z) - Dynamically Masked Discriminator for Generative Adversarial Networks [71.33631511762782]
Training Generative Adversarial Networks (GANs) remains a challenging problem.
The discriminator trains the generator by learning the distribution of real/generated data.
We propose a novel method for GANs from the viewpoint of online continual learning.
arXiv Detail & Related papers (2023-06-13T12:07:01Z) - Learning Common Rationale to Improve Self-Supervised Representation for
Fine-Grained Visual Recognition Problems [61.11799513362704]
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective.
arXiv Detail & Related papers (2023-03-03T02:07:40Z) - Multi-Fake Evolutionary Generative Adversarial Networks for Imbalance
Hyperspectral Image Classification [7.9067022260826265]
This paper presents a novel multi-fake evolutionary generative adversarial network for handling imbalanced hyperspectral image classification.
Different generative objective losses are considered in the generator network to improve the classification performance of the discriminator network.
The effectiveness of the proposed method has been validated through two hyperspectral spatial-spectral data sets.
arXiv Detail & Related papers (2021-11-07T07:29:24Z) - Hybrid Generative-Contrastive Representation Learning [32.84066504783469]
We show that a transformer-based encoder-decoder architecture trained with both contrastive and generative losses can learn highly discriminative and robust representations without hurting the generative performance.
arXiv Detail & Related papers (2021-06-11T04:23:48Z) - Data-Efficient Instance Generation from Instance Discrimination [40.71055888512495]
In this work, we propose a data-efficient Instance Generation (InsGen) method based on instance discrimination.
arXiv Detail & Related papers (2021-06-08T17:52:59Z) - Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z) - Learning and Evaluating Representations for Deep One-class
Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks. (A hedged sketch of this two-stage pipeline appears after this list.)
arXiv Detail & Related papers (2020-11-04T23:33:41Z) - Discriminative feature generation for classification of imbalanced data [6.458496335718508]
We propose a novel supervised discriminative feature generation (DFG) method for a minority class dataset.
DFG is based on the modified structure of a generative adversarial network consisting of four independent networks.
The experimental results show that the DFG generator enhances augmentation with label-preserving and diverse features.
arXiv Detail & Related papers (2020-10-24T12:19:05Z)
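As a follow-up to the "Learning and Evaluating Representations for Deep One-class Classification" entry above, here is a minimal sketch of that two-stage pipeline under illustrative assumptions: `encode` stands in for any pretrained self-supervised encoder (stage 1), and a kernel one-class SVM from scikit-learn plays the role of the classifier built on the learned representations (stage 2).

```python
# Hypothetical two-stage one-class classification sketch:
# (1) extract features with a frozen self-supervised encoder,
# (2) fit a classical one-class classifier on those features.
import numpy as np
from sklearn.svm import OneClassSVM

def encode(images: np.ndarray) -> np.ndarray:
    # Placeholder for a pretrained self-supervised encoder (stage 1).
    # Here it simply flattens inputs so the example runs end to end.
    return images.reshape(len(images), -1)

# Stage 2: fit a one-class classifier on features of normal training data only.
train_images = np.random.rand(200, 32, 32)   # normal-class data
test_images = np.random.rand(10, 32, 32)

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(encode(train_images))

# Higher score_samples => more "normal"; negate to obtain an anomaly score.
anomaly_scores = -clf.score_samples(encode(test_images))
print(anomaly_scores)
```

Any scoring head that models the support of the normal features (kNN distance, kernel density estimation, Mahalanobis distance) could replace the OC-SVM here without changing the two-stage structure.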
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.