Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero
Outlier Images
- URL: http://arxiv.org/abs/2205.11474v1
- Date: Mon, 23 May 2022 17:23:15 GMT
- Title: Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero
Outlier Images
- Authors: Philipp Liznerski, Lukas Ruff, Robert A. Vandermeulen, Billy Joe
Franks, Klaus-Robert Müller, and Marius Kloft
- Abstract summary: We show that specialized AD learning methods seem superfluous and that huge corpora of data are expendable.
We investigate this phenomenon and reveal that one-class methods are more robust to the particular choice of training outliers.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditionally, anomaly detection (AD) is treated as an unsupervised
problem utilizing only normal samples, owing to the intractability of
characterizing everything that looks unlike the normal data. However, it has
recently been found that unsupervised image anomaly detection can be
drastically improved by utilizing huge corpora of random images to represent
anomalousness, a technique known as Outlier Exposure. In this paper we show
that specialized AD learning methods seem superfluous and huge corpora of data
expendable. For a common AD benchmark on ImageNet, standard classifiers and
semi-supervised one-class methods trained to discern between normal samples
and just a few random natural images are able to outperform the current state
of the art in deep AD, and only one useful outlier sample is sufficient to
perform competitively. We investigate this phenomenon and reveal that
one-class methods are more robust to the particular choice of training
outliers. Furthermore, we find that a simple classifier based on
representations from CLIP, a recent foundation model, achieves
state-of-the-art results on CIFAR-10 and also outperforms all previous AD
methods on ImageNet without any training samples (i.e., in a zero-shot
setting).
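The zero-shot result is concrete enough to sketch. Below is a minimal illustration of a CLIP-based zero-shot anomaly scorer, assuming the openai/CLIP package; the prompt template, backbone choice, and contrast classes are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of zero-shot anomaly detection with CLIP representations.
# Assumes the openai/CLIP package (github.com/openai/CLIP); prompts are
# illustrative, not necessarily the ones used in the paper.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def zero_shot_anomaly_score(image_path: str, normal_class: str,
                            other_classes: list[str]) -> float:
    """Score how anomalous an image is w.r.t. a single 'normal' class.

    The score is one minus the softmax probability that the image matches
    the normal-class prompt, so higher means more anomalous.
    """
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    prompts = [f"a photo of a {normal_class}"] + \
              [f"a photo of a {c}" for c in other_classes]
    tokens = clip.tokenize(prompts).to(device)
    with torch.no_grad():
        image_feat = model.encode_image(image)
        text_feats = model.encode_text(tokens)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
    logits = 100.0 * image_feat @ text_feats.T      # scaled cosine similarities
    p_normal = logits.softmax(dim=-1)[0, 0].item()  # prob. of the normal prompt
    return 1.0 - p_normal

# Example: treat "airplane" as normal, the remaining CIFAR-10 classes as contrast.
```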
Related papers
- Data-Independent Operator: A Training-Free Artifact Representation Extractor for Generalizable Deepfake Detection (arXiv, 2024-03-11)
In this work, we show that, on the contrary, a small, training-free filter is sufficient to capture more general artifact representations.
Because it is unbiased towards both the training and test sources, we term it the Data-Independent Operator (DIO), and it achieves appealing improvements on unseen sources.
Our detector achieves a remarkable improvement of 13.3%, establishing a new state of the art.
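As a rough illustration of what a training-free artifact extractor can look like, the sketch below applies a fixed high-pass (Laplacian) kernel to expose high-frequency residuals; the specific kernel is an assumption, since the paper's operator is not spelled out in this summary.

```python
# Illustrative sketch of a training-free artifact extractor in the spirit of
# DIO: a fixed high-pass filter exposes generator artifacts, and a standard
# classifier can be trained on the filtered residuals. The Laplacian kernel
# here is an assumption; the paper's operator may differ.
import torch
import torch.nn.functional as F

# Fixed 3x3 Laplacian high-pass kernel, applied per channel.
_KERNEL = torch.tensor([[0., -1., 0.],
                        [-1., 4., -1.],
                        [0., -1., 0.]]).view(1, 1, 3, 3)

def artifact_residual(images: torch.Tensor) -> torch.Tensor:
    """images: (N, C, H, W) float tensor -> high-frequency residual, same shape."""
    n, c, h, w = images.shape
    kernel = _KERNEL.to(images.device).repeat(c, 1, 1, 1)
    # groups=c applies the same high-pass filter to each channel independently.
    return F.conv2d(images, kernel, padding=1, groups=c)
```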
- Zero-Shot Anomaly Detection via Batch Normalization (arXiv, 2023-02-15)
Anomaly detection plays a crucial role in many safety-critical application domains.
The challenge of adapting an anomaly detector to drift in the normal data distribution has led to the development of zero-shot AD techniques.
We propose a simple yet effective method called Adaptive Centered Representations (ACR) for zero-shot batch-level AD.
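The batch-level intuition can be caricatured in a few lines: normalizing a test batch with its own statistics re-centers the dominant (normal) mode, so distance from the adapted center flags minority anomalies. The toy below conveys only that intuition; ACR itself involves training that this sketch omits.

```python
# Toy illustration of the batch-level zero-shot AD intuition behind ACR:
# normalizing a test batch with its *own* statistics re-centers the dominant
# (normal) mode, so per-sample distance from the batch center flags the
# minority anomalies. The actual ACR method is trained; this is not it.
import torch

def batch_adapted_scores(features: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """features: (N, D) batch of representations -> (N,) anomaly scores."""
    mu = features.mean(dim=0, keepdim=True)
    sigma = features.std(dim=0, keepdim=True)
    z = (features - mu) / (sigma + eps)   # batch-norm-style adaptation
    return z.norm(dim=1)                  # distance from the adapted center
```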
- Self-supervised Pseudo Multi-class Pre-training for Unsupervised Anomaly Detection and Segmentation in Medical Images (arXiv, 2021-09-03)
Unsupervised anomaly detection (UAD) methods are trained with normal (or healthy) images only, yet at test time they are able to distinguish normal from abnormal images.
We propose a new self-supervised pre-training method for UAD in medical image analysis (MIA), named Pseudo Multi-class Strong Augmentation via Contrastive Learning (PMSACL).
- Self-Trained One-class Classification for Unsupervised Anomaly Detection (arXiv, 2021-06-11)
Anomaly detection (AD) has various applications across domains, from manufacturing to healthcare.
In this work, we focus on unsupervised AD problems whose entire training data are unlabeled and may contain both normal and anomalous samples.
To tackle this problem, we build a robust one-class classification framework via data refinement.
We show that our method outperforms the state-of-the-art one-class classification method by 6.3 points of AUC and 12.5 points of average precision.
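A minimal sketch of the data-refinement idea, with scikit-learn's OneClassSVM standing in for the paper's deep one-class model; the rejection fraction and iteration count are illustrative assumptions.

```python
# Sketch of one-class training with iterative data refinement: fit a one-class
# model, discard the lowest-scoring fraction of training points as suspected
# anomalies, and refit. OneClassSVM is a stand-in for the paper's deep model.
import numpy as np
from sklearn.svm import OneClassSVM

def refine_and_fit(X: np.ndarray, n_iters: int = 3,
                   reject_frac: float = 0.1) -> OneClassSVM:
    """Iteratively drop the most anomalous-looking training points and refit."""
    kept = X
    model = OneClassSVM(gamma="scale").fit(kept)
    for _ in range(n_iters):
        scores = model.decision_function(kept)   # higher = more normal
        cutoff = np.quantile(scores, reject_frac)
        kept = kept[scores > cutoff]             # refine: drop suspected anomalies
        model = OneClassSVM(gamma="scale").fit(kept)
    return model
```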
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images (arXiv, 2021-05-25)
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
- CutPaste: Self-Supervised Learning for Anomaly Detection and Localization (arXiv, 2021-04-08)
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
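The augmentation at the core of the method is easy to sketch: cut a random rectangular patch and paste it back at a random location, producing a synthetic irregularity for the self-supervised task. The patch-size bounds below are illustrative assumptions.

```python
# Sketch of the CutPaste augmentation: cut a random rectangular patch from an
# image and paste it back at a random location, yielding a synthetic "defect".
# A classifier trained to detect this augmentation learns representations on
# which a one-class model can then be fit.
import random
import torch

def cutpaste(image: torch.Tensor, min_frac: float = 0.05,
             max_frac: float = 0.15) -> torch.Tensor:
    """image: (C, H, W) tensor -> augmented copy with one patch relocated."""
    c, h, w = image.shape
    ph = random.randint(int(min_frac * h), int(max_frac * h))
    pw = random.randint(int(min_frac * w), int(max_frac * w))
    # Source and destination top-left corners.
    sy, sx = random.randint(0, h - ph), random.randint(0, w - pw)
    dy, dx = random.randint(0, h - ph), random.randint(0, w - pw)
    out = image.clone()
    out[:, dy:dy + ph, dx:dx + pw] = image[:, sy:sy + ph, sx:sx + pw]
    return out
```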
- Constrained Contrastive Distribution Learning for Unsupervised Anomaly Detection and Localisation in Medical Images (arXiv, 2021-03-05)
Unsupervised anomaly detection (UAD) learns one-class classifiers exclusively with normal (i.e., healthy) images.
We propose a novel self-supervised representation learning method, called Constrained Contrastive Distribution learning for anomaly detection (CCD).
Our method outperforms current state-of-the-art UAD approaches on three different colonoscopy and fundus screening datasets.
- Unsupervised Noisy Tracklet Person Re-identification (arXiv, 2021-01-16)
We present a novel selective tracklet learning (STL) approach that can train discriminative person re-id models from unlabelled tracklet data.
This avoids the tedious and costly process of exhaustively labelling person image/tracklet true matching pairs across camera views.
Our method is particularly robust against arbitrary noise in raw tracklets and is therefore scalable to learning discriminative models from unconstrained tracking data.
- Understanding Anomaly Detection with Deep Invertible Networks through Hierarchies of Distributions and Features (arXiv, 2020-06-18)
Convolutional networks learn similar low-level feature distributions when trained on any natural image dataset.
When the discriminative features between inliers and outliers are on a high-level, anomaly detection becomes particularly challenging.
We propose two methods to remove the negative impact of model bias and domain prior on detecting high-level differences.
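One proposed remedy is a log-likelihood-ratio score: contrast a density model trained on the normal data against a reference model trained on a broad natural-image distribution, so the shared low-level likelihood cancels. A minimal sketch follows, assuming both models expose a `log_prob` method (an assumed interface):

```python
# Sketch of the log-likelihood-ratio anomaly score: a density model trained on
# the normal data is contrasted against a reference model trained on generic
# natural images, cancelling the shared low-level likelihood that otherwise
# dominates. The `log_prob` interface is an assumed convention.
import torch

def likelihood_ratio_score(x: torch.Tensor,
                           in_dist_model,   # e.g. a flow trained on the normal class
                           general_model    # e.g. a flow trained on generic images
                           ) -> torch.Tensor:
    """Higher score = more anomalous w.r.t. the in-distribution model."""
    with torch.no_grad():
        return general_model.log_prob(x) - in_dist_model.log_prob(x)
```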
- Rethinking Assumptions in Deep Anomaly Detection (arXiv, 2020-05-30)
We present results demonstrating that this intuition, namely that a few examples cannot usefully characterize anomalousness, surprisingly does not extend to deep AD on images.
For a recent AD benchmark on ImageNet, classifiers trained to discern between normal samples and just a few (64) random natural images are able to outperform the current state of the art in deep AD.
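The setup behind this finding is a perfectly standard binary training loop, sketched below: normal samples against a handful (e.g., 64) of random natural images under a binary cross-entropy objective. The backbone and hyperparameters are illustrative assumptions.

```python
# Sketch of the 'few random outliers' setup: a standard binary classifier is
# trained to separate normal samples (label 0) from a handful of random
# natural images (label 1); its sigmoid output is the anomaly score.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def train_few_outlier_detector(normal: torch.Tensor, outliers: torch.Tensor,
                               epochs: int = 10, lr: float = 1e-4) -> nn.Module:
    """normal: (N, 3, H, W); outliers: e.g. just 64 random natural images."""
    model = resnet18(num_classes=1)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    x = torch.cat([normal, outliers])
    y = torch.cat([torch.zeros(len(normal)), torch.ones(len(outliers))])
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(1), y)
        loss.backward()
        opt.step()
    return model  # anomaly score at test time: torch.sigmoid(model(x_test))
```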
- Modeling the Distribution of Normal Data in Pre-Trained Deep Features for Anomaly Detection (arXiv, 2020-05-28)
Anomaly Detection (AD) in images refers to identifying images and image substructures that deviate significantly from the norm.
We show that deep feature representations learned by discriminative models on large natural image datasets are well suited to describe normality.
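The recipe this suggests is compact: extract features of normal images with a frozen ImageNet backbone, fit a multivariate Gaussian, and score test images by Mahalanobis distance. A minimal sketch with an assumed ResNet-18 backbone:

```python
# Sketch of modeling normality in pre-trained deep features: fit a multivariate
# Gaussian to features of normal training images extracted by a frozen
# ImageNet backbone, then score test images by Mahalanobis distance.
import torch
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose 512-d penultimate features
backbone.eval()

@torch.no_grad()
def fit_gaussian(normal_images: torch.Tensor):
    feats = backbone(normal_images)                  # (N, 512)
    mu = feats.mean(dim=0)
    cov = torch.cov(feats.T) + 1e-4 * torch.eye(feats.shape[1])  # regularized
    return mu, torch.linalg.inv(cov)

@torch.no_grad()
def mahalanobis_score(images: torch.Tensor, mu, cov_inv) -> torch.Tensor:
    d = backbone(images) - mu
    return torch.einsum("nd,dk,nk->n", d, cov_inv, d)   # squared Mahalanobis
```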
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.