An Iterative Method for Unsupervised Robust Anomaly Detection Under Data Contamination
- URL: http://arxiv.org/abs/2309.09436v1
- Date: Mon, 18 Sep 2023 02:36:19 GMT
- Title: An Iterative Method for Unsupervised Robust Anomaly Detection Under Data Contamination
- Authors: Minkyung Kim, Jongmin Yu, Junsik Kim, Tae-Hyun Oh, Jun Kyun Choi
- Abstract summary: Most deep anomaly detection models are based on learning normality from datasets.
In practice, the normality assumption is often violated due to the nature of real data distributions.
We propose a learning framework to reduce the gap between this assumption and the actual training data and achieve a better normality representation.
- Score: 24.74938110451834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most deep anomaly detection models learn normality from datasets, because abnormality is difficult to define given its diverse and inconsistent nature. It has therefore been common practice to learn normality under the assumption that anomalous data are absent from the training dataset, which we call the normality assumption. In practice, however, this assumption is often violated because real data distributions include anomalous tails, i.e., the dataset is contaminated. The gap between the assumption and the actual training data is then detrimental to learning an anomaly detection model. In this work, we propose a learning framework that reduces this gap and achieves a better normality representation. Our key idea is to identify sample-wise normality and use it as an importance weight that is updated iteratively during training. The framework is designed to be model-agnostic and hyperparameter-insensitive, so it applies to a wide range of existing methods without careful parameter tuning. We apply it to three representative approaches to deep anomaly detection: one-class classification-, probabilistic model-, and reconstruction-based methods. In addition, we address the importance of a termination condition for iterative methods and propose a termination criterion inspired by the anomaly detection objective. We validate that our framework improves the robustness of anomaly detection models under different contamination ratios on five anomaly detection benchmark datasets and two image datasets. On various contaminated datasets, our framework improves the performance of the three representative anomaly detection methods, measured by the area under the ROC curve (AUROC).
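To make the iterative importance-weighting idea concrete, below is a minimal sketch rather than the authors' implementation. It assumes a PyTorch autoencoder whose reconstruction error serves as the anomaly score; the `normality_weights` rule, the `train_iteratively` loop, and the stabilization check standing in for the paper's termination criterion are all illustrative assumptions.

```python
# Minimal sketch of iterative, importance-weighted training on a contaminated
# dataset. NOT the paper's exact algorithm: the weighting rule, the termination
# check, and the autoencoder below are illustrative assumptions.
import torch
import torch.nn as nn

def normality_weights(scores: torch.Tensor) -> torch.Tensor:
    """Map per-sample anomaly scores to [0, 1] importance weights
    (lower score -> more 'normal' -> larger weight). Hypothetical rule."""
    s = (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)
    return 1.0 - s

def train_iteratively(model, X, n_rounds=10, epochs_per_round=5, lr=1e-3, tol=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    w = torch.ones(len(X))                 # start by trusting every sample equally
    prev_mean_score = None
    for _ in range(n_rounds):
        for _ in range(epochs_per_round):
            opt.zero_grad()
            per_sample = ((model(X) - X) ** 2).mean(dim=1)  # reconstruction error
            loss = (w * per_sample).mean()                  # importance-weighted loss
            loss.backward()
            opt.step()
        with torch.no_grad():
            scores = ((model(X) - X) ** 2).mean(dim=1)      # anomaly score = error
            w = normality_weights(scores)                   # refresh sample weights
            mean_score = scores.mean().item()
        # Hypothetical termination check: stop once the score level stabilizes.
        if prev_mean_score is not None and abs(prev_mean_score - mean_score) < tol:
            break
        prev_mean_score = mean_score
    return model, w

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy contaminated data: 90% inliers, 10% shifted outliers.
    X = torch.cat([torch.randn(450, 8), torch.randn(50, 8) * 4 + 6])
    ae = nn.Sequential(nn.Linear(8, 3), nn.ReLU(), nn.Linear(3, 8))
    train_iteratively(ae, X)
```

The intent of the design is that contaminated samples receive ever smaller weights as the model's normality estimate improves, so the loss is increasingly dominated by (presumed) normal data; the same weighting loop could wrap a one-class or probabilistic objective in place of the reconstruction loss.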
Related papers
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con$_2$, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo-anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with existing state-of-the-art PA-generation and reconstruction-based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z)
- Unraveling the "Anomaly" in Time Series Anomaly Detection: A Self-supervised Tri-domain Solution [89.16750999704969]
The scarcity of anomaly labels hinders traditional supervised models in time series anomaly detection.
Various SOTA deep learning techniques, such as self-supervised learning, have been introduced to tackle this issue.
We propose a novel self-supervised learning-based Tri-domain Anomaly Detector (TriAD).
arXiv Detail & Related papers (2023-11-19T05:37:18Z)
- Active anomaly detection based on deep one-class classification [9.904380236739398]
We tackle two essential problems of active learning for Deep SVDD: query strategy and semi-supervised learning method.
First, rather than solely identifying anomalies, our query strategy selects uncertain samples according to an adaptive boundary.
Second, we apply noise contrastive estimation in training a one-class classification model to incorporate both labeled normal and abnormal data effectively.
arXiv Detail & Related papers (2023-09-18T03:56:45Z)
- Hierarchical Semi-Supervised Contrastive Learning for Contamination-Resistant Anomaly Detection [81.07346419422605]
Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework for contamination-resistant anomaly detection.
arXiv Detail & Related papers (2022-07-24T18:49:26Z)
- Augment to Detect Anomalies with Continuous Labelling [10.646747658653785]
Anomaly detection aims to recognize samples that differ in some respect from the training observations.
Recent state-of-the-art deep learning-based anomaly detection methods suffer from high computational cost, complexity, unstable training procedures, and non-trivial implementation.
We leverage a simple learning procedure that trains a lightweight convolutional neural network, reaching state-of-the-art performance in anomaly detection.
arXiv Detail & Related papers (2022-07-03T20:11:51Z)
- Latent Outlier Exposure for Anomaly Detection with Contaminated Data [31.446666264334528]
Anomaly detection aims at identifying data points that show systematic deviations from the majority of data in an unlabeled dataset.
We propose a strategy for training an anomaly detector in the presence of unlabeled anomalies that is compatible with a broad class of models.
arXiv Detail & Related papers (2022-02-16T14:21:28Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Understanding the Effect of Bias in Deep Anomaly Detection [15.83398707988473]
Anomaly detection presents a unique challenge in machine learning, due to the scarcity of labeled anomaly data.
Recent work attempts to mitigate such problems by augmenting training of deep anomaly detection models with additional labeled anomaly samples.
In this paper, we aim to understand the effect of a biased anomaly set on anomaly detection.
arXiv Detail & Related papers (2021-05-16T03:55:02Z)