MOCCA: Multi-Layer One-Class ClassificAtion for Anomaly Detection
- URL: http://arxiv.org/abs/2012.12111v2
- Date: Mon, 5 Apr 2021 09:40:17 GMT
- Title: MOCCA: Multi-Layer One-Class ClassificAtion for Anomaly Detection
- Authors: Fabio Valerio Massoli, Fabrizio Falchi, Alperen Kantarci, Şeymanur Akti, Hazim Kemal Ekenel, Giuseppe Amato
- Abstract summary: We propose our deep learning approach to the anomaly detection problem named Multi-Layer One-Class Classification (MOCCA).
We explicitly leverage the piece-wise nature of deep neural networks by exploiting information extracted at different depths to detect abnormal data instances.
We show that our method reaches superior performances compared to the state-of-the-art approaches available in the literature.
- Score: 16.914663209964697
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Anomalies are ubiquitous in all scientific fields and can express an
unexpected event due to incomplete knowledge about the data distribution or an
unknown process that suddenly comes into play and distorts the observations.
Due to such events' rarity, it is common to train deep learning models on
"normal", i.e. non-anomalous, datasets only, thus letting the neural network
model the distribution beneath the input data. In this context, we propose our
deep learning approach to the anomaly detection problem named
Multi-Layer One-Class Classification (MOCCA). We explicitly leverage the
piece-wise nature of deep neural networks by exploiting information extracted
at different depths to detect abnormal data instances. We show how combining
the representations extracted from multiple layers of a model leads to higher
discrimination performance than typical approaches proposed in the literature
that are based on the neural network's final output only. We propose to train the
model by minimizing the $L_2$ distance between the input representation and a
reference point, the anomaly-free training data centroid, at each considered
layer. We conduct extensive experiments on publicly available datasets for
anomaly detection, namely CIFAR10, MVTec AD, and ShanghaiTech, considering both
the single-image and video-based scenarios. We show that our method reaches
superior performances compared to the state-of-the-art approaches available in
the literature. Moreover, we provide a model analysis to give insight on how
our approach works.
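The per-layer objective described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the feature arrays stand in for activations extracted at several depths of a real encoder, and the function names (`layer_centroids`, `mocca_loss`, `anomaly_score`) are hypothetical.

```python
import numpy as np

def layer_centroids(layer_feats):
    # One centroid per layer, computed from anomaly-free training features.
    # layer_feats: list of (n_samples, dim_l) arrays, one array per layer.
    return [f.mean(axis=0) for f in layer_feats]

def mocca_loss(layer_feats, centroids):
    # Training objective: sum over layers of the mean squared L2 distance
    # between each representation and that layer's reference centroid.
    return sum(((f - c) ** 2).sum(axis=1).mean()
               for f, c in zip(layer_feats, centroids))

def anomaly_score(sample_feats, centroids):
    # Test-time score for one sample: accumulate its L2 distance to the
    # centroid at every considered layer; larger means more anomalous.
    return sum(np.linalg.norm(f - c) for f, c in zip(sample_feats, centroids))
```

Combining the distances from multiple depths, rather than scoring only the final output, is the core idea: a sample that looks normal at the last layer can still sit far from the training centroid at an intermediate one.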
Related papers
- Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts [25.629973843455495]
Generalist Anomaly Detection (GAD) aims to train one single detection model that can generalize to detect anomalies in diverse datasets from different application domains without further training on the target data.
We introduce a novel approach that learns an in-context residual learning model for GAD, termed InCTRL.
InCTRL is the best performer and significantly outperforms state-of-the-art competing methods.
arXiv Detail & Related papers (2024-03-11T08:07:46Z) - COFT-AD: COntrastive Fine-Tuning for Few-Shot Anomaly Detection [19.946344683965425]
We propose a novel methodology to address the challenge of FSAD.
We employ a model pre-trained on a large source dataset to initialize the model weights.
We evaluate few-shot anomaly detection on 3 controlled AD tasks and 4 real-world AD tasks to demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-02-29T09:48:19Z) - Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - Layer-wise Linear Mode Connectivity [52.6945036534469]
Averaging neural network parameters is an intuitive method for combining the knowledge of two independent models.
It is most prominently used in federated learning.
We analyse the performance of the models that result from averaging single layers, or groups of layers.
arXiv Detail & Related papers (2023-07-13T09:39:10Z) - Y-GAN: Learning Dual Data Representations for Efficient Anomaly
Detection [0.0]
We propose a novel reconstruction-based model for anomaly detection, called Y-GAN.
The model consists of a Y-shaped auto-encoder and represents images in two separate latent spaces.
arXiv Detail & Related papers (2021-09-28T20:17:04Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - Deep Visual Anomaly detection with Negative Learning [18.79849041106952]
In this paper, we propose anomaly detection with negative learning (ADNL), which employs the negative learning concept for the enhancement of anomaly detection.
The idea is to limit the reconstruction capability of a generative model using a given small amount of anomaly examples.
This way, the network not only learns to reconstruct normal data but also keeps the learned normal distribution far from the possible distribution of anomalies.
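One way to express such a negative-learning objective is a combined loss that rewards reconstructing normal data while penalizing good reconstruction of the known anomalies, bounded by a margin so the negative term cannot dominate. This is a hedged sketch of the general idea, not the ADNL paper's exact formulation; `margin` and `weight` are hypothetical hyperparameters.

```python
def adnl_loss(recon_err_normal, recon_err_anom, margin=1.0, weight=0.5):
    """Negative-learning objective sketch: minimize reconstruction error on
    normal data, and add a hinge penalty when anomalies are reconstructed
    too well (i.e. their reconstruction error falls below `margin`)."""
    neg_term = max(0.0, margin - recon_err_anom)
    return recon_err_normal + weight * neg_term
```

Minimizing this pushes the model to reconstruct normal samples accurately while driving the reconstruction error on anomalies above the margin, which is how the generative model's capacity gets "limited" on anomalous inputs.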
arXiv Detail & Related papers (2021-05-24T01:48:44Z) - Discriminative-Generative Dual Memory Video Anomaly Detection [81.09977516403411]
Recently, researchers have tried to use a few anomalies for video anomaly detection (VAD) instead of only normal data during the training process.
We propose a DiscRiminative-gEnerative duAl Memory (DREAM) anomaly detection model to take advantage of a few anomalies and solve data imbalance.
arXiv Detail & Related papers (2021-04-29T15:49:01Z) - Understanding Anomaly Detection with Deep Invertible Networks through
Hierarchies of Distributions and Features [4.25227087152716]
Convolutional networks learn similar low-level feature distributions when trained on any natural image dataset.
When the discriminative features between inliers and outliers are on a high-level, anomaly detection becomes particularly challenging.
We propose two methods to remove the negative impact of model bias and domain prior on detecting high-level differences.
arXiv Detail & Related papers (2020-06-18T20:56:14Z) - Contextual-Bandit Anomaly Detection for IoT Data in Distributed
Hierarchical Edge Computing [65.78881372074983]
IoT devices can hardly afford complex deep neural networks (DNN) models, and offloading anomaly detection tasks to the cloud incurs long delay.
We propose and build a demo for an adaptive anomaly detection approach for distributed hierarchical edge computing (HEC) systems.
We show that our proposed approach significantly reduces detection delay without sacrificing accuracy, as compared to offloading detection tasks to the cloud.
arXiv Detail & Related papers (2020-04-15T06:13:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.