A Uniform Framework for Anomaly Detection in Deep Neural Networks
- URL: http://arxiv.org/abs/2110.03092v1
- Date: Wed, 6 Oct 2021 22:42:30 GMT
- Title: A Uniform Framework for Anomaly Detection in Deep Neural Networks
- Authors: Fangzhen Zhao, Chenyi Zhang, Naipeng Dong, Zefeng You, Zhenxin Wu
- Abstract summary: We consider three classes of anomaly inputs:
(1) natural inputs from a different distribution than the one the DNN was trained on, known as Out-of-Distribution (OOD) samples,
(2) crafted inputs generated from ID data by attackers, often known as adversarial (AD) samples, and (3) noise (NS) samples generated from meaningless data.
We propose a framework that aims to detect all these anomalies for a pre-trained DNN.
- Score: 0.5099811144731619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) can achieve high performance when applied to
In-Distribution (ID) data which come from the same distribution as the training
set. When presented with anomaly inputs not from the ID, the outputs of a DNN
should be regarded as meaningless. However, modern DNNs often predict anomaly
inputs as an ID class with high confidence, which is dangerous and misleading.
In this work, we consider three classes of anomaly inputs, (1) natural inputs
from a different distribution than the DNN is trained for, known as
Out-of-Distribution (OOD) samples, (2) crafted inputs generated from ID by
attackers, often known as adversarial (AD) samples, and (3) noise (NS) samples
generated from meaningless data. We propose a framework that aims to detect all
these anomalies for a pre-trained DNN. Unlike some of the existing works, our
method does not require preprocessing of input data, nor is it dependent on any
known OOD set or adversarial attack algorithm. Through extensive experiments
over a variety of DNN models for the detection of the aforementioned anomalies, we
show that in most cases our method outperforms state-of-the-art anomaly
detection methods in identifying all three classes of anomalies.
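To make the three anomaly classes concrete, here is a minimal PyTorch sketch of how such inputs are commonly obtained for an arbitrary pre-trained classifier; the toy model, the one-step FGSM attack, and all values are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch of the three anomaly classes (not the paper's code).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for a pre-trained DNN; any classifier producing (N, C) logits works here.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
model.eval()

id_batch = torch.rand(4, 3, 32, 32)   # In-Distribution (ID) images, i.e. the training distribution
labels = torch.randint(0, 10, (4,))

# (1) OOD: natural images drawn from a different distribution (placeholder tensor here).
ood_batch = torch.rand(4, 3, 32, 32)

# (2) AD: adversarial samples crafted from ID data; a one-step FGSM attack as an example.
eps = 8.0 / 255.0
x = id_batch.clone().requires_grad_(True)
loss = F.cross_entropy(model(x), labels)
loss.backward()
ad_batch = (id_batch + eps * x.grad.sign()).clamp(0.0, 1.0)

# (3) NS: noise samples carrying no semantic content at all.
ns_batch = torch.rand(4, 3, 32, 32)

# A DNN may still be confidently (and wrongly) peaked on all three anomaly types,
# which is exactly what a detector should flag.
for name, batch in [("ID", id_batch), ("OOD", ood_batch), ("AD", ad_batch), ("NS", ns_batch)]:
    conf = F.softmax(model(batch), dim=1).max(dim=1).values.mean().item()
    print(f"{name}: mean max-softmax confidence = {conf:.3f}")
```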
Related papers
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- Unraveling the "Anomaly" in Time Series Anomaly Detection: A Self-supervised Tri-domain Solution [89.16750999704969]
Anomaly labels hinder traditional supervised models in time series anomaly detection.
Various SOTA deep learning techniques, such as self-supervised learning, have been introduced to tackle this issue.
We propose a novel self-supervised learning based Tri-domain Anomaly Detector (TriAD).
arXiv Detail & Related papers (2023-11-19T05:37:18Z)
- pseudo-Bayesian Neural Networks for detecting Out of Distribution Inputs [12.429095025814345]
We propose pseudo-BNNs where instead of learning distributions over weights, we use point estimates and perturb weights at the time of inference.
Overall, this combination results in a principled technique to detect OOD samples at the time of inference.
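A minimal sketch of the inference-time weight-perturbation idea described above, assuming a generic Monte-Carlo procedure with Gaussian noise and predictive entropy as the score; the actual pseudo-BNN construction may differ.

```python
# Hedged sketch: inference-time weight perturbation as an OOD score.
# Noise scale and the entropy score are assumptions, not the paper's exact recipe.
import copy
import torch
import torch.nn.functional as F

def perturbed_predict(model, x, n_samples=10, sigma=0.01):
    """Average softmax over several copies of the model with Gaussian-perturbed weights."""
    probs = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(sigma * torch.randn_like(p))
            probs.append(F.softmax(noisy(x), dim=1))
    return torch.stack(probs).mean(dim=0)

def entropy_score(model, x):
    """Higher predictive entropy under weight perturbation -> more likely OOD."""
    p = perturbed_predict(model, x)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=1)
```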
arXiv Detail & Related papers (2021-02-02T06:23:04Z)
- Double-Adversarial Activation Anomaly Detection: Adversarial Autoencoders are Anomaly Generators [0.0]
Anomaly detection is a challenging task for machine learning algorithms due to the inherent class imbalance.
Inspired by generative models and the analysis of the hidden activations of neural networks, we introduce a novel unsupervised anomaly detection method called DA3D.
arXiv Detail & Related papers (2021-01-12T18:07:34Z)
- A Survey on Assessing the Generalization Envelope of Deep Neural Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art performance on numerous applications.
It is difficult to tell beforehand if a DNN receiving an input will deliver the correct output since its decision criteria are usually nontransparent.
This survey connects the three fields within the larger framework of investigating the generalization performance of machine learning methods and in particular DNNs.
arXiv Detail & Related papers (2020-08-21T09:12:52Z)
- A General Framework For Detecting Anomalous Inputs to DNN Classifiers [37.79389209020564]
We propose an unsupervised anomaly detection framework based on the internal deep neural network layer representations.
We evaluate the proposed methods on well-known image classification datasets with strong adversarial attacks and OOD inputs.
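One common way to act on internal layer representations is to score inputs by their distance to training features; the k-nearest-neighbour sketch below is a hedged illustration of that general idea, not necessarily the method proposed in this paper, and `model.features` is a hypothetical attribute.

```python
# Hedged sketch: anomaly score from a DNN's internal (penultimate) representations.
import torch

@torch.no_grad()
def penultimate_features(model, x):
    """Assumes the model exposes a feature extractor before its final classifier layer."""
    feats = model.features(x)            # hypothetical attribute; adapt to the actual model
    return torch.flatten(feats, start_dim=1)

@torch.no_grad()
def knn_anomaly_score(model, x, train_feats, k=5):
    """Mean distance to the k nearest training features; larger -> more anomalous."""
    f = penultimate_features(model, x)                    # (N, D)
    d = torch.cdist(f, train_feats)                       # (N, M) pairwise distances
    return d.topk(k, dim=1, largest=False).values.mean(dim=1)
```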
arXiv Detail & Related papers (2020-07-29T22:57:57Z)
- NADS: Neural Architecture Distribution Search for Uncertainty Awareness [79.18710225716791]
Machine learning (ML) systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a distribution different from training data.
Existing OoD detection approaches are prone to errors and even sometimes assign higher likelihoods to OoD samples.
We propose Neural Architecture Distribution Search (NADS) to identify common building blocks among all uncertainty-aware architectures.
arXiv Detail & Related papers (2020-06-11T17:39:07Z)
- One Versus all for deep Neural Network Incertitude (OVNNI) quantification [12.734278426543332]
We propose a new technique to easily quantify the epistemic uncertainty of data.
This method consists of mixing the predictions of an ensemble of DNNs trained to classify One class vs All the other classes (OVA) with predictions from a standard DNN trained to perform All vs All (AVA) classification.
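A rough sketch of that OVA/AVA mixing, assuming sigmoid outputs for the one-vs-all models and an element-wise product as the combination rule (the exact rule is not given in this summary):

```python
# Hedged sketch of OVA/AVA score mixing; the combination rule is an assumption.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ova_ava_scores(ava_model, ova_models, x):
    """ava_model: standard C-class classifier.
    ova_models: list of C binary one-vs-all classifiers, each emitting a single logit."""
    ava_probs = F.softmax(ava_model(x), dim=1)                         # (N, C)
    ova_probs = torch.cat(
        [torch.sigmoid(m(x)).reshape(-1, 1) for m in ova_models], dim=1
    )                                                                  # (N, C)
    # Low values across every class suggest high epistemic uncertainty.
    return ava_probs * ova_probs
```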
arXiv Detail & Related papers (2020-06-01T14:06:12Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
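As a hedged illustration of a gradient-norm score of this kind (here taken with respect to the input at the model's own predicted label; GraN's exact formulation may differ):

```python
# Hedged sketch of a gradient-norm anomaly score; not necessarily GraN's exact computation.
import torch
import torch.nn.functional as F

def gradient_norm_score(model, x):
    """L1 norm of the loss gradient taken at the model's own predicted label.
    Larger norms suggest adversarial or misclassified samples."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    pseudo_labels = logits.argmax(dim=1).detach()
    loss = F.cross_entropy(logits, pseudo_labels, reduction="sum")
    grads, = torch.autograd.grad(loss, x)
    return grads.flatten(start_dim=1).norm(p=1, dim=1)   # one score per sample
```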
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
- Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution Data [87.61504710345528]
We propose two strategies for freeing a neural network from tuning with OoD data, while improving its OoD detection performance.
We specifically propose a decomposed confidence scoring function as well as a modified input pre-processing method.
Our further analysis on a larger scale image dataset shows that the two types of distribution shifts, specifically semantic shift and non-semantic shift, present a significant difference.
arXiv Detail & Related papers (2020-02-26T04:18:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.