Understanding Anomaly Detection with Deep Invertible Networks through
Hierarchies of Distributions and Features
- URL: http://arxiv.org/abs/2006.10848v3
- Date: Mon, 2 Nov 2020 17:27:25 GMT
- Title: Understanding Anomaly Detection with Deep Invertible Networks through
Hierarchies of Distributions and Features
- Authors: Robin Tibor Schirrmeister, Yuxuan Zhou, Tonio Ball and Dan Zhang
- Abstract summary: Convolutional networks learn similar low-level feature distributions when trained on any natural image dataset.
When the discriminative features between inliers and outliers are on a high-level, anomaly detection becomes particularly challenging.
We propose two methods to remove the negative impact of model bias and domain prior on detecting high-level differences.
- Score: 4.25227087152716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep generative networks trained via maximum likelihood on a natural image
dataset like CIFAR10 often assign high likelihoods to images from datasets with
different objects (e.g., SVHN). We refine previous investigations of this
failure at anomaly detection for invertible generative networks and provide a
clear explanation of it as a combination of model bias and domain prior:
Convolutional networks learn similar low-level feature distributions when
trained on any natural image dataset and these low-level features dominate the
likelihood. Hence, when the discriminative features between inliers and
outliers are on a high-level, e.g., object shapes, anomaly detection becomes
particularly challenging. To remove the negative impact of model bias and
domain prior on detecting high-level differences, we propose two methods,
first, using the log likelihood ratios of two identical models, one trained on
the in-distribution data (e.g., CIFAR10) and the other one on a more general
distribution of images (e.g., 80 Million Tiny Images). We also derive a novel
outlier loss for the in-distribution network on samples from the more general
distribution to further improve the performance. Secondly, using a multi-scale
model like Glow, we show that low-level features are mainly captured at early
scales. Therefore, using only the likelihood contribution of the final scale
performs remarkably well for detecting high-level feature differences of the
out-of-distribution and the in-distribution. This method is especially useful
if one does not have access to a suitable general distribution. Overall, our
methods achieve strong anomaly detection performance in the unsupervised
setting, and only slightly underperform state-of-the-art classifier-based
methods in the supervised setting. Code can be found at
https://github.com/boschresearch/hierarchical_anomaly_detection.
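The two proposed anomaly scores are simple to compute once per-image (and, for a multi-scale flow like Glow, per-scale) log-likelihoods are available. Below is a minimal sketch, assuming hypothetical PyTorch-style model objects that expose `log_prob` and `log_prob_per_scale` methods; this is not the API of the linked repository (which also implements the outlier loss) and only illustrates the scoring idea.

```python
# Sketch of the two anomaly scores described in the abstract.
# The model objects and their methods are hypothetical stand-ins,
# not the actual interface of the authors' code.

import torch


def likelihood_ratio_score(x, in_dist_model, general_model):
    """Score x by log p_general(x) - log p_in(x).

    Higher values indicate more anomalous inputs: shared low-level
    image statistics cancel in the ratio, so the score reflects the
    high-level features on which the in-distribution model was trained.
    """
    with torch.no_grad():
        log_p_in = in_dist_model.log_prob(x)        # hypothetical API
        log_p_general = general_model.log_prob(x)   # hypothetical API
    return log_p_general - log_p_in


def final_scale_score(x, glow_model):
    """Score x using only the final scale of a multi-scale flow.

    Assumes a hypothetical `log_prob_per_scale` returning a list of
    per-scale likelihood contributions; early scales mostly capture
    low-level features, the final scale high-level structure.
    """
    with torch.no_grad():
        per_scale = glow_model.log_prob_per_scale(x)  # hypothetical API
    return -per_scale[-1]  # low final-scale likelihood => anomalous
```

In both sketches a higher score marks a more anomalous input; the ratio removes the domain prior by subtraction, while the final-scale score discards the low-level contributions by construction, which is why it needs no access to a general image distribution.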
Related papers
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - A Prototype-Based Neural Network for Image Anomaly Detection and Localization [10.830337829732915]
This paper proposes ProtoAD, a prototype-based neural network for image anomaly detection and localization.
First, the patch features of normal images are extracted by a deep network pre-trained on natural images.
ProtoAD achieves competitive performance compared to the state-of-the-art methods with a higher inference speed.
arXiv Detail & Related papers (2023-10-04T04:27:16Z) - CRADL: Contrastive Representations for Unsupervised Anomaly Detection
and Localization [2.8659934481869715]
Unsupervised anomaly detection in medical imaging aims to detect and localize arbitrary anomalies without requiring anomalous data during training.
Most current state-of-the-art methods use latent variable generative models operating directly on the images.
We propose CRADL, whose core idea is to model the distribution of normal samples directly in the low-dimensional representation space of an encoder trained with a contrastive pretext task.
arXiv Detail & Related papers (2023-01-05T16:07:49Z) - Fake It Till You Make It: Near-Distribution Novelty Detection by
Score-Based Generative Models [54.182955830194445]
Existing models either fail or suffer a dramatic performance drop under the so-called "near-distribution" setting.
We propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data.
Our method improves near-distribution novelty detection by 6% and surpasses the state-of-the-art by 1% to 5% across nine novelty detection benchmarks.
arXiv Detail & Related papers (2022-05-28T02:02:53Z) - Low-Light Image Enhancement with Normalizing Flow [92.52290821418778]
In this paper, we investigate to model this one-to-many relationship via a proposed normalizing flow model.
An invertible network takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution.
The experimental results on the existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise and artifact, and richer colors.
arXiv Detail & Related papers (2021-09-13T12:45:08Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - MOCCA: Multi-Layer One-Class ClassificAtion for Anomaly Detection [16.914663209964697]
We propose a deep learning approach to the anomaly detection problem named Multi-Layer One-Class Classification (MOCCA).
We explicitly leverage the piece-wise nature of deep neural networks by exploiting information extracted at different depths to detect abnormal data instances.
We show that our method achieves superior performance compared to state-of-the-art approaches in the literature.
arXiv Detail & Related papers (2020-12-09T08:32:56Z) - Multiresolution Knowledge Distillation for Anomaly Detection [10.799350080453982]
Unsupervised representation learning has proved to be a critical component of anomaly detection/localization in images.
The sample size is often not large enough to learn a rich, generalizable representation through conventional techniques.
Here, we propose to use the "distillation" of features at various layers of an expert network, pre-trained on ImageNet, into a simpler cloner network to tackle both issues.
arXiv Detail & Related papers (2020-11-22T21:16:35Z) - Generalized ODIN: Detecting Out-of-distribution Image without Learning
from Out-of-distribution Data [87.61504710345528]
We propose two strategies for freeing a neural network from tuning with OoD data, while improving its OoD detection performance.
Specifically, we propose a decomposed confidence scoring as well as a modified input pre-processing method.
Further analysis on a larger-scale image dataset shows that the two types of distribution shift, semantic and non-semantic, behave very differently.
arXiv Detail & Related papers (2020-02-26T04:18:25Z) - Granular Learning with Deep Generative Models using Highly Contaminated
Data [0.0]
We detail an approach that uses recent advances in deep generative models for granular anomaly detection on a real-world image dataset with quality issues.
The approach is completely unsupervised (no annotations available) but is qualitatively shown to provide accurate semantic labeling for images.
arXiv Detail & Related papers (2020-01-06T23:22:17Z)