Comparison of Anomaly Detectors: Context Matters
- URL: http://arxiv.org/abs/2012.06260v2
- Date: Fri, 18 Dec 2020 15:31:47 GMT
- Title: Comparison of Anomaly Detectors: Context Matters
- Authors: Vít Škvára, Jan Franců, Matěj Zorek, Tomáš Pevný, Václav Šmídl
- Abstract summary: The objective of this comparison is twofold: comparison of anomaly detection methods of various paradigms, and identification of sources of variability that can yield different results.
The best results on the image data were obtained either by a feature-matching GAN or a combination of variational autoencoder (VAE) and OC-SVM, depending on the experimental conditions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep generative models are currently challenging the classical methods
in the field of anomaly detection. Every new method provides evidence of outperforming
its predecessors, often with contradictory results. The objective of this
comparison is twofold: comparison of anomaly detection methods of various
paradigms, and identification of sources of variability that can yield
different results. The methods were compared on popular tabular and image
datasets. While the one class support-vector machine (OC-SVM) had no rival on
the tabular datasets, the best results on the image data were obtained either
by a feature-matching GAN or a combination of variational autoencoder (VAE) and
OC-SVM, depending on the experimental conditions. The main sources of
variability that can influence the performance of the methods were identified
to be: the range of searched hyper-parameters, the methodology of model
selection, and the choice of the anomalous samples. All our code and results
are available for download.
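Since the abstract singles out the one class support-vector machine as unrivaled on tabular data, a minimal illustrative sketch of OC-SVM anomaly detection may help. This uses scikit-learn and synthetic Gaussian data, not the paper's datasets; the `nu` and `gamma` values are arbitrary stand-ins for the searched hyper-parameters the abstract identifies as a source of variability.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# "Normal" training data: a 2-D Gaussian cluster (a toy stand-in for a tabular dataset).
X_train = rng.normal(0.0, 1.0, size=(500, 2))
# Test sets: held-out normal points plus well-separated anomalies.
X_normal = rng.normal(0.0, 1.0, size=(50, 2))
X_anom = rng.normal(6.0, 1.0, size=(50, 2))

scaler = StandardScaler().fit(X_train)
# nu bounds the fraction of training points treated as outliers; gamma sets the
# RBF kernel width. Both would normally be chosen by the model-selection
# methodology, another variability source named in the abstract.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ocsvm.fit(scaler.transform(X_train))

pred_normal = ocsvm.predict(scaler.transform(X_normal))  # +1 = inlier, -1 = outlier
pred_anom = ocsvm.predict(scaler.transform(X_anom))
print("inlier acceptance:", (pred_normal == 1).mean())
print("anomaly detection:", (pred_anom == -1).mean())
```

In the paper's image experiments this one-class boundary is fitted on VAE-derived features rather than raw pixels; the scikit-learn call is the same, only the input representation changes.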
Related papers
- Can I trust my anomaly detection system? A case study based on explainable AI [0.4416503115535552]
This case study explores the robustness of an anomaly detection system based on variational autoencoder generative models.
The goal is to get a different perspective on the real performances of anomaly detectors that use reconstruction differences.
arXiv Detail & Related papers (2024-07-29T12:39:07Z)
- Online-Adaptive Anomaly Detection for Defect Identification in Aircraft Assembly [4.387337528923525]
Anomaly detection deals with detecting deviations from established patterns within data.
We propose a novel framework for online-adaptive anomaly detection using transfer learning.
Experimental results showcase a detection accuracy exceeding 0.975, outperforming the state-of-the-art ET-NET approach.
arXiv Detail & Related papers (2024-06-18T15:11:44Z)
- Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches of learning using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Fake It Till You Make It: Near-Distribution Novelty Detection by Score-Based Generative Models [54.182955830194445]
Existing models either fail or face a dramatic drop under the so-called "near-distribution" setting.
We propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data.
Our method improves the near-distribution novelty detection by 6% and passes the state-of-the-art by 1% to 5% across nine novelty detection benchmarks.
arXiv Detail & Related papers (2022-05-28T02:02:53Z)
- IMACS: Image Model Attribution Comparison Summaries [16.80986701058596]
We introduce IMACS, a method that combines gradient-based model attributions with aggregation and visualization techniques.
IMACS extracts salient input features from an evaluation dataset, clusters them based on similarity, then visualizes differences in model attributions for similar input features.
We show how our technique can uncover behavioral differences caused by domain shift between two models trained on satellite images.
arXiv Detail & Related papers (2022-01-26T21:35:14Z)
- Selecting Treatment Effects Models for Domain Adaptation Using Causal Knowledge [82.5462771088607]
We propose a novel model selection metric specifically designed for ITE methods under the unsupervised domain adaptation setting.
In particular, we propose selecting models whose predictions of interventions' effects satisfy known causal structures in the target domain.
arXiv Detail & Related papers (2021-02-11T21:03:14Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- Categorical anomaly detection in heterogeneous data using minimum description length clustering [3.871148938060281]
We propose a meta-algorithm for enhancing any MDL-based anomaly detection model to deal with heterogeneous data.
Our experimental results show that using a discrete mixture model provides competitive performance relative to two previous anomaly detection algorithms.
arXiv Detail & Related papers (2020-06-14T14:48:37Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
- Towards Out-of-Distribution Detection with Divergence Guarantee in Deep Generative Models [22.697643259435115]
Deep generative models may assign higher likelihood to out-of-distribution (OOD) data than in-distribution (ID) data.
We prove theorems to investigate the divergences in flow-based model.
We propose two group anomaly detection methods.
arXiv Detail & Related papers (2020-02-09T09:54:12Z)
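Several of the related papers above (the explainable-AI case study, the mirrored autoencoder) score anomalies by reconstruction differences: data compressed through a learned bottleneck and reconstructed poorly is flagged as anomalous. A minimal numpy sketch of that idea, using a rank-1 PCA projection as an illustrative stand-in for a trained autoencoder on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
# "Normal" data lies near a 1-D subspace of R^3; anomalies do not.
basis = np.array([1.0, 2.0, -1.0])
X_train = rng.normal(size=(200, 1)) * basis + rng.normal(0, 0.05, size=(200, 3))

# Fit a rank-1 linear "autoencoder": encode = project onto the top principal
# component, decode = back-project into the original space.
mean = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
component = Vt[0]  # top principal direction

def anomaly_score(x):
    """Reconstruction error after encoding/decoding through the subspace."""
    centered = x - mean
    recon = np.outer(centered @ component, component)
    return np.linalg.norm(centered - recon, axis=1)

x_normal = rng.normal(size=(20, 1)) * basis + rng.normal(0, 0.05, size=(20, 3))
x_anom = rng.normal(0, 1.0, size=(20, 3))  # isotropic noise, off the subspace
print("normal score:", anomaly_score(x_normal).mean())
print("anomaly score:", anomaly_score(x_anom).mean())
```

Points near the learned subspace reconstruct almost exactly, so thresholding this score separates the two groups; a real VAE or adversarial autoencoder replaces the linear projection with a nonlinear encoder/decoder, but the scoring principle is the same.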
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.