Rethinking Assumptions in Deep Anomaly Detection
- URL: http://arxiv.org/abs/2006.00339v2
- Date: Sat, 10 Jul 2021 10:11:26 GMT
- Title: Rethinking Assumptions in Deep Anomaly Detection
- Authors: Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Klaus-Robert Müller, and Marius Kloft
- Abstract summary: We present results demonstrating that this intuition surprisingly seems not to extend to deep AD on images.
For a recent AD benchmark on ImageNet, classifiers trained to discern between normal samples and just a few (64) random natural images are able to outperform the current state of the art in deep AD.
- Score: 26.942031693233183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though anomaly detection (AD) can be viewed as a classification problem
(nominal vs. anomalous) it is usually treated in an unsupervised manner since
one typically does not have access to, or it is infeasible to utilize, a
dataset that sufficiently characterizes what it means to be "anomalous." In
this paper we present results demonstrating that this intuition surprisingly
seems not to extend to deep AD on images. For a recent AD benchmark on
ImageNet, classifiers trained to discern between normal samples and just a few
(64) random natural images are able to outperform the current state of the art
in deep AD. Experimentally we discover that the multiscale structure of image
data makes example anomalies exceptionally informative.
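The central finding above — that a standard binary classifier trained on normal samples against only 64 random outlier images can act as a strong anomaly detector — can be sketched on toy data. The following is a minimal illustration with synthetic 2-D features and a single hand-crafted radial feature standing in for a deep network; none of it reflects the authors' actual ImageNet setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for image features: the normal class clusters near the
# origin, while the few exposed outliers are spread broadly, much as
# random natural images would be relative to a single normal class.
normal_train = rng.normal(0.0, 1.0, size=(1000, 2))
outliers = rng.uniform(-8.0, 8.0, size=(64, 2))  # only 64 exposed outliers

def phi(x):
    # A single hand-crafted radial feature; the paper uses deep networks.
    return np.sum(x * x, axis=1) / 50.0

z = np.concatenate([phi(normal_train), phi(outliers)])
y = np.concatenate([np.zeros(1000), np.ones(64)])  # 0 = normal, 1 = outlier

# Plain logistic regression fit by gradient descent.
w, b = 0.0, 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(w * z + b)))
    w -= 0.5 * np.mean((p - y) * z)
    b -= 0.5 * np.mean(p - y)

def anomaly_score(x):
    # The classifier's outlier probability: higher means more anomalous.
    return 1.0 / (1.0 + np.exp(-(w * phi(x) + b)))

# Unseen anomalies far from the normal cluster should score higher
# than held-out normal samples.
test_normal = rng.normal(0.0, 1.0, size=(200, 2))
test_anom = rng.normal(6.0, 1.0, size=(200, 2))
print(anomaly_score(test_anom).mean() > anomaly_score(test_normal).mean())
```

Even though the 64 "outliers" say nothing about the anomalies seen at test time, they suffice to carve out the normal region — the analogue of the paper's observation that a few random natural images make surprisingly informative example anomalies.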
Related papers
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con$_2$, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z) - Don't Miss Out on Novelty: Importance of Novel Features for Deep Anomaly
Detection [64.21963650519312]
Anomaly Detection (AD) is a critical task that involves identifying observations that do not conform to a learned model of normality.
We propose a novel approach to AD using explainability to capture such novel features as unexplained observations in the input space.
Our approach establishes a new state-of-the-art across multiple benchmarks, handling diverse anomaly types.
arXiv Detail & Related papers (2023-10-01T21:24:05Z) - That's BAD: Blind Anomaly Detection by Implicit Local Feature Clustering [28.296651124677556]
The blind anomaly detection (BAD) setting can be converted into a local outlier detection problem.
We propose a novel method named PatchCluster that can accurately detect image- and pixel-level anomalies.
Experimental results show that PatchCluster achieves promising performance without knowledge of the normal data.
arXiv Detail & Related papers (2023-07-06T18:17:43Z) - Are we certain it's anomalous? [57.729669157989235]
Anomaly detection in time series is a complex task since anomalies are rare due to highly non-linear temporal correlations.
Here we propose the novel use of Hyperbolic uncertainty for Anomaly Detection (HypAD).
HypAD learns in a self-supervised manner to reconstruct the input signal.
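The reconstruction-based scoring that HypAD builds on can be sketched on a toy time series. The moving-average "reconstruction" below is a stand-in assumption; HypAD instead learns the reconstruction self-supervisedly and weighs errors by a hyperbolic uncertainty estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# A smooth nominal signal with one injected anomaly (a sudden spike).
t = np.arange(500)
signal = np.sin(2 * np.pi * t / 50) + 0.05 * rng.normal(size=t.size)
signal[300] += 3.0  # the anomaly

# Minimal "reconstruction": a moving average of the signal. A point is
# anomalous when it cannot be reconstructed from its context, so the
# absolute reconstruction error serves as the anomaly score.
recon = np.convolve(signal, np.ones(5) / 5.0, mode="same")
errors = np.abs(signal - recon)

print(int(np.argmax(errors)))  # the spike at index 300 scores highest
```

In HypAD the uncertainty estimate additionally down-weights errors the model itself is unsure about, addressing the "are we certain it's anomalous?" question in the title.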
arXiv Detail & Related papers (2022-11-16T21:31:39Z) - Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero
Outlier Images [26.283734474660484]
We show that specialized AD learning methods appear superfluous and that huge corpora of training data may be expendable.
We investigate this phenomenon and reveal that one-class methods are more robust towards the particular choice of training outliers.
arXiv Detail & Related papers (2022-05-23T17:23:15Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - CutPaste: Self-Supervised Learning for Anomaly Detection and
Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
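The cut-and-paste augmentation at the core of CutPaste can be sketched as follows; the patch size and random-placement policy here are illustrative assumptions, not the paper's exact settings. In the full method, a classifier is trained to distinguish normal images from such augmented ones, and a one-class classifier is then built on the learned representations:

```python
import numpy as np

rng = np.random.default_rng(2)

def cutpaste(image, patch_h=8, patch_w=8):
    """Cut a random patch from the image and paste it at a different
    random location, creating a local irregularity that mimics a defect.
    Patch sizes are illustrative defaults, not the paper's settings."""
    h, w = image.shape
    out = image.copy()
    sy = int(rng.integers(0, h - patch_h + 1))  # source patch corner
    sx = int(rng.integers(0, w - patch_w + 1))
    patch = image[sy:sy + patch_h, sx:sx + patch_w].copy()
    dy, dx = sy, sx
    while (dy, dx) == (sy, sx):  # force a different paste location
        dy = int(rng.integers(0, h - patch_h + 1))
        dx = int(rng.integers(0, w - patch_w + 1))
    out[dy:dy + patch_h, dx:dx + patch_w] = patch
    return out

img = rng.random((32, 32))  # stand-in for a normal grayscale image
aug = cutpaste(img)         # synthetic "anomalous" training example
print(aug.shape, bool(np.any(aug != img)))
```

The appeal of the design is that synthetic anomalies are generated from normal data alone, so no real defect images are needed at training time.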
arXiv Detail & Related papers (2021-04-08T19:04:55Z) - Constrained Contrastive Distribution Learning for Unsupervised Anomaly
Detection and Localisation in Medical Images [23.79184121052212]
Unsupervised anomaly detection (UAD) learns one-class classifiers exclusively with normal (i.e., healthy) images.
We propose a novel self-supervised representation learning method, called Constrained Contrastive Distribution learning for anomaly detection (CCD).
Our method outperforms current state-of-the-art UAD approaches on three different colonoscopy and fundus screening datasets.
arXiv Detail & Related papers (2021-03-05T01:56:58Z) - Modeling the Distribution of Normal Data in Pre-Trained Deep Features
for Anomaly Detection [2.9864637081333085]
Anomaly Detection (AD) in images refers to identifying images and image substructures that deviate significantly from the norm.
We show that deep feature representations learned by discriminative models on large natural image datasets are well suited to describe normality.
arXiv Detail & Related papers (2020-05-28T16:43:41Z) - OIAD: One-for-all Image Anomaly Detection with Disentanglement Learning [23.48763375455514]
We propose a One-for-all Image Anomaly Detection system based on disentangled learning using only clean samples.
Our experiments with three datasets show that OIAD can detect over $90\%$ of anomalies while maintaining a low false alarm rate.
arXiv Detail & Related papers (2020-01-18T09:57:37Z) - Deep Weakly-supervised Anomaly Detection [118.55172352231381]
Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)
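PReNet's pairwise formulation can be sketched at the data-preparation step: training instances are pairs of samples labeled by which relation they instantiate (anomaly-anomaly, anomaly-unlabeled, or unlabeled-unlabeled). The numeric target values and feature dimensions below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

def make_pairs(anomalies, unlabeled, n_pairs):
    """Sample training pairs with ordinal relation targets: anomaly-anomaly
    pairs get the highest target score, anomaly-unlabeled an intermediate
    one, and unlabeled-unlabeled the lowest (target values assumed)."""
    pairs, targets = [], []
    for _ in range(n_pairs):
        kind = rng.integers(0, 3)
        if kind == 0:    # anomaly-anomaly
            a = anomalies[rng.integers(len(anomalies))]
            b = anomalies[rng.integers(len(anomalies))]
            y = 8.0
        elif kind == 1:  # anomaly-unlabeled
            a = anomalies[rng.integers(len(anomalies))]
            b = unlabeled[rng.integers(len(unlabeled))]
            y = 4.0
        else:            # unlabeled-unlabeled (mostly normal)
            a = unlabeled[rng.integers(len(unlabeled))]
            b = unlabeled[rng.integers(len(unlabeled))]
            y = 0.0
        pairs.append(np.concatenate([a, b]))
        targets.append(y)
    return np.array(pairs), np.array(targets)

anomalies = rng.normal(5.0, 1.0, size=(10, 4))   # the few labeled anomalies
unlabeled = rng.normal(0.0, 1.0, size=(500, 4))  # unlabeled, mostly normal
X, y = make_pairs(anomalies, unlabeled, 1000)
print(X.shape, sorted(set(y.tolist())))
```

A regressor trained on these pairs maps each concatenated pair to its relation target; at test time a sample is paired with labeled anomalies and unlabeled data, and its anomaly score is aggregated from the predicted relation scores.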
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.