Unsupervised Anomaly Detection From Semantic Similarity Scores
- URL: http://arxiv.org/abs/2012.00461v3
- Date: Fri, 26 Mar 2021 08:40:34 GMT
- Title: Unsupervised Anomaly Detection From Semantic Similarity Scores
- Authors: Nima Rafiee, Rahil Gholamipoor, Markus Kollmann
- Abstract summary: We present a simple and generic framework, SemSAD, that makes use of a semantic similarity score to carry out anomaly detection.
We are able to outperform previous approaches for anomaly, novelty, or out-of-distribution detection in the visual domain by a large margin.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classifying samples as in-distribution or out-of-distribution (OOD) is a
challenging problem of anomaly detection and a strong test of the
generalisation power for models of the in-distribution. In this paper, we
present a simple and generic framework, SemSAD, that makes use of a
semantic similarity score to carry out anomaly detection. The idea is to first
find for any test example the semantically closest examples in the training
set, where the semantic relation between examples is quantified by the cosine
similarity between feature vectors that leave semantics unchanged under
transformations, such as geometric transformations (images), time shifts (audio
signals), and synonymous word substitutions (text). A trained discriminator is
then used to classify a test example as OOD if the semantic similarity to its
nearest neighbours is significantly lower than the corresponding similarity for
test examples from the in-distribution. We are able to outperform previous
approaches for anomaly, novelty, or out-of-distribution detection in the visual
domain by a large margin. In particular, we obtain AUROC values close to one
for the challenging task of detecting examples from CIFAR-10 as
out-of-distribution given CIFAR-100 as in-distribution, without making use of
label information.
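The scoring step described in the abstract can be sketched as follows. This is a minimal illustration of the idea only, not the authors' implementation: SemSAD trains a transformation-invariant feature extractor and a discriminator, both of which are replaced here by random stand-in feature vectors and a plain similarity threshold.

```python
import numpy as np

def cosine_similarity(a, b):
    # Pairwise cosine similarity between rows of a (n, d) and b (m, d).
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T

def anomaly_score(test_feats, train_feats, k=5):
    # Score each test example by the mean cosine similarity to its k
    # closest training examples; lower similarity => more anomalous,
    # so the negated mean serves as the anomaly score.
    sims = cosine_similarity(test_feats, train_feats)
    topk = np.sort(sims, axis=1)[:, -k:]
    return -topk.mean(axis=1)

rng = np.random.default_rng(0)
train = rng.normal(loc=1.0, size=(200, 16))     # stand-in in-distribution features
in_test = rng.normal(loc=1.0, size=(10, 16))    # test examples from the in-distribution
ood_test = rng.normal(loc=-1.0, size=(10, 16))  # a shifted cluster plays the role of OOD

scores_in = anomaly_score(in_test, train)
scores_ood = anomaly_score(ood_test, train)
print(scores_ood.mean() > scores_in.mean())  # OOD examples receive higher scores
```

In the paper, a test example would be flagged as OOD when its score is significantly worse than the scores typical for held-out in-distribution examples, rather than by comparing group means as in this toy check.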
Related papers
- Enhancing Anomaly Detection Generalization through Knowledge Exposure: The Dual Effects of Augmentation [9.740752855568202]
Anomaly detection involves identifying instances within a dataset that deviate from the norm and occur infrequently.
Current benchmarks tend to favor methods biased towards low diversity in normal data, which does not align with real-world scenarios.
We propose new testing protocols and a novel method called Knowledge Exposure (KE), which integrates external knowledge to comprehend concept dynamics.
arXiv Detail & Related papers (2024-06-15T12:37:36Z)
- Invariant Anomaly Detection under Distribution Shifts: A Causal Perspective [6.845698872290768]
Anomaly detection (AD) is the machine learning task of identifying highly discrepant abnormal samples.
Under the constraints of a distribution shift, the assumption that training samples and test samples are drawn from the same distribution breaks down.
We attempt to increase the resilience of anomaly detection models to different kinds of distribution shifts.
arXiv Detail & Related papers (2023-12-21T23:20:47Z)
- Likelihood-Aware Semantic Alignment for Full-Spectrum Out-of-Distribution Detection [24.145060992747077]
We propose a Likelihood-Aware Semantic Alignment (LSA) framework to promote the image-text correspondence into semantically high-likelihood regions.
Extensive experiments demonstrate the remarkable OOD detection performance of our proposed LSA, surpassing existing methods by margins of 15.26% and 18.88% on two F-OOD benchmarks.
arXiv Detail & Related papers (2023-12-04T08:53:59Z)
- Predicting Out-of-Domain Generalization with Neighborhood Invariance [59.05399533508682]
We propose a measure of a classifier's output invariance in a local transformation neighborhood.
Our measure is simple to calculate, does not depend on the test point's true label, and can be applied even in out-of-domain (OOD) settings.
In experiments on benchmarks in image classification, sentiment analysis, and natural language inference, we demonstrate a strong and robust correlation between our measure and actual OOD generalization.
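The invariance measure summarized above can be sketched as the agreement of a classifier's predictions across a small neighborhood of transformed inputs; note this is a hypothetical toy illustration of the general idea, with a made-up classifier and transformations, not the paper's exact measure.

```python
import numpy as np

def neighborhood_invariance(predict, x, transforms):
    # Fraction of transformed copies of x whose predicted label agrees
    # with the prediction on x itself; needs no ground-truth label.
    base = predict(x)
    return float(np.mean([predict(t(x)) == base for t in transforms]))

# Toy classifier: label an image by the sign of its mean pixel value.
predict = lambda x: int(np.mean(x) > 0)

x = np.full((4, 4), 0.5)            # a confidently "positive" input
transforms = [
    lambda v: v + 0.01,             # small brightness shift
    lambda v: np.flip(v, axis=1),   # horizontal flip
    lambda v: np.roll(v, 1, axis=0) # one-pixel translation
]
print(neighborhood_invariance(predict, x, transforms))  # -> 1.0, fully invariant
```

Because the measure only compares the model's own predictions, it can be evaluated on unlabeled out-of-domain data, which is what makes it usable as a generalization predictor.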
arXiv Detail & Related papers (2022-07-05T14:55:16Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- UQGAN: A Unified Model for Uncertainty Quantification of Deep Classifiers trained via Conditional GANs [9.496524884855559]
We present an approach to quantifying uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs).
Instead of shielding the entire in-distribution data with GAN generated OoD examples, we shield each class separately with out-of-class examples generated by a conditional GAN.
In particular, we improve over the OoD detection and FP detection performance of state-of-the-art GAN-training based classifiers.
arXiv Detail & Related papers (2022-01-31T14:42:35Z)
- Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples motivated by the observations that random components can improve the smoothness of predictors.
We propose a novel Bayesian adversarial example detector, short for BATer, to improve the performance of adversarial example detection.
arXiv Detail & Related papers (2021-05-18T15:51:24Z)
- Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks beyond similarity kernels deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z)
- Learning explanations that are hard to vary [75.30552491694066]
We show that averaging across examples can favor memorization and 'patchwork' solutions that sew together different strategies.
We then propose and experimentally validate a simple alternative algorithm based on a logical AND.
arXiv Detail & Related papers (2020-09-01T10:17:48Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.