Simple and Effective Prevention of Mode Collapse in Deep One-Class
Classification
- URL: http://arxiv.org/abs/2001.08873v4
- Date: Tue, 19 Jan 2021 06:45:48 GMT
- Title: Simple and Effective Prevention of Mode Collapse in Deep One-Class
Classification
- Authors: Penny Chong, Lukas Ruff, Marius Kloft, Alexander Binder
- Abstract summary: We propose two regularizers to prevent hypersphere collapse in deep SVDD.
The first regularizer is based on injecting random noise via the standard cross-entropy loss.
The second regularizer penalizes the minibatch variance when it becomes too small.
- Score: 93.2334223970488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anomaly detection algorithms find extensive use in various fields. This area
of research has recently made great advances thanks to deep learning. A recent
method, the deep Support Vector Data Description (deep SVDD), which is inspired
by the classic kernel-based Support Vector Data Description (SVDD), is capable
of simultaneously learning a feature representation of the data and a
data-enclosing hypersphere. The method has shown promising results in both
unsupervised and semi-supervised settings. However, deep SVDD suffers from
hypersphere collapse (also known as mode collapse) if the model architecture
does not comply with certain constraints, e.g. the removal of bias terms. These
constraints limit the adaptability of the model and, in some cases, may hurt
its performance because sub-optimal features are learned. In this work, we
consider two regularizers to prevent hypersphere
collapse in deep SVDD. The first regularizer is based on injecting random noise
via the standard cross-entropy loss. The second regularizer penalizes the
minibatch variance when it becomes too small. Moreover, we introduce an
adaptive weighting scheme to control the amount of penalization between the
SVDD loss and the respective regularizer. Our proposed regularized variants of
deep SVDD show encouraging results and outperform a prominent state-of-the-art
method on a setup where the anomalies have no apparent geometrical structure.
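Below is a minimal, illustrative PyTorch sketch of the regularization ideas the abstract describes: a penalty on the minibatch variance of the embeddings when it becomes too small, a noise-injection term trained with a standard cross-entropy loss, and a simple adaptive weight between the SVDD loss and each regularizer. All names, thresholds, and the specific weighting rule are assumptions made for illustration; this is a sketch of the idea, not the authors' reference implementation.

```python
# Illustrative sketch only -- hypothetical names and thresholds, not the
# authors' implementation of the regularized deep SVDD variants.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Toy feature extractor standing in for the deep SVDD network."""

    def __init__(self, in_dim: int = 32, rep_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, rep_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def svdd_loss(z: torch.Tensor, center: torch.Tensor) -> torch.Tensor:
    # Deep SVDD objective: mean squared distance of embeddings to the center.
    return ((z - center) ** 2).sum(dim=1).mean()


def variance_regularizer(z: torch.Tensor, v_min: float = 0.1) -> torch.Tensor:
    # Second regularizer from the abstract: penalize the minibatch variance
    # only when it drops below a threshold, discouraging all embeddings from
    # collapsing onto a single point (the threshold v_min is an assumption).
    var = z.var(dim=0, unbiased=False).mean()
    return F.relu(v_min - var)


def noise_regularizer(model: nn.Module, head: nn.Module, x: torch.Tensor,
                      sigma: float = 1.0) -> torch.Tensor:
    # One plausible reading of the first regularizer ("injecting random noise
    # via the standard cross-entropy loss"): label Gaussian noise inputs as a
    # second class and ask a small head on the embeddings to separate data
    # from noise, which a collapsed (constant) representation cannot do.
    noise = sigma * torch.randn_like(x)
    z = model(torch.cat([x, noise], dim=0))
    labels = torch.cat([torch.zeros(len(x)), torch.ones(len(x))]).long()
    return F.cross_entropy(head(z), labels)


def adaptive_weight(main: torch.Tensor, reg: torch.Tensor) -> torch.Tensor:
    # A simple stand-in for the adaptive weighting scheme: scale each
    # regularizer so its magnitude tracks the SVDD loss (weights detached,
    # so they act as constants during backpropagation).
    return (main.detach() / (reg.detach() + 1e-8)).clamp(max=10.0)


if __name__ == "__main__":
    torch.manual_seed(0)
    model, head = Encoder(), nn.Linear(8, 2)
    x = torch.randn(128, 32)          # minibatch of "normal" training data
    center = torch.zeros(8)           # fixed hypersphere center

    z = model(x)
    l_svdd = svdd_loss(z, center)
    l_var = variance_regularizer(z)
    l_noise = noise_regularizer(model, head, x)
    loss = (l_svdd
            + adaptive_weight(l_svdd, l_var) * l_var
            + adaptive_weight(l_svdd, l_noise) * l_noise)
    loss.backward()
    print(f"svdd={l_svdd.item():.4f} var={l_var.item():.4f} noise={l_noise.item():.4f}")
```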
Related papers
- Small Object Detection via Coarse-to-fine Proposal Generation and
Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z) - Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z) - Semi-Supervised Temporal Action Detection with Proposal-Free Masking [134.26292288193298]
We propose a novel Semi-supervised Temporal action detection model based on PropOsal-free Temporal mask (SPOT).
SPOT outperforms state-of-the-art alternatives, often by a large margin.
arXiv Detail & Related papers (2022-07-14T16:58:47Z) - The KFIoU Loss for Rotated Object Detection [115.334070064346]
In this paper, we argue that one effective alternative is to devise an approximate loss that can achieve trend-level alignment with the SkewIoU loss.
Specifically, we model the objects as Gaussian distributions and adopt a Kalman filter to inherently mimic the mechanism of SkewIoU.
The resulting new loss, called KFIoU, is easier to implement and works better than the exact SkewIoU.
arXiv Detail & Related papers (2022-01-29T10:54:57Z) - Flow-based SVDD for anomaly detection [12.319113026372966]
FlowSVDD is a flow-based one-class classifier for anomaly/outlier detection.
The proposed model is instantiated using flow-based models, which naturally prevents the bounding hypersphere from collapsing into a single point.
Experiments show that FlowSVDD achieves comparable results to the current state-of-the-art methods and significantly outperforms related deep SVDD methods on benchmark datasets.
arXiv Detail & Related papers (2021-08-10T20:33:15Z) - P-WAE: Generalized Patch-Wasserstein Autoencoder for Anomaly Screening [17.24628770042803]
We propose a novel Patch-wise Wasserstein AutoEncoder (P-WAE) architecture to alleviate those challenges.
In particular, a patch-wise variational inference model coupled with a jigsaw-puzzle solving task is designed.
Comprehensive experiments conducted on the MVTec AD dataset demonstrate the superior performance of our proposed method.
arXiv Detail & Related papers (2021-08-09T05:31:45Z) - DASVDD: Deep Autoencoding Support Vector Data Descriptor for Anomaly
Detection [9.19194451963411]
Semi-supervised anomaly detection aims to detect anomalies from normal samples using a model that is trained on normal data.
We propose a method, DASVDD, that jointly learns the parameters of an autoencoder while minimizing the volume of an enclosing hyper-sphere on its latent representation.
arXiv Detail & Related papers (2021-06-09T21:57:41Z) - Preventing Posterior Collapse with Levenshtein Variational Autoencoder [61.30283661804425]
We propose to replace the evidence lower bound (ELBO) with a new objective which is simple to optimize and prevents posterior collapse.
We show that Levenshtein VAE produces more informative latent representations than alternative approaches to preventing posterior collapse.
arXiv Detail & Related papers (2020-04-30T13:27:26Z)