Manifolds for Unsupervised Visual Anomaly Detection
- URL: http://arxiv.org/abs/2006.11364v1
- Date: Fri, 19 Jun 2020 20:41:58 GMT
- Title: Manifolds for Unsupervised Visual Anomaly Detection
- Authors: Louise Naud and Alexander Lavin
- Abstract summary: Unsupervised learning methods that don't necessarily encounter anomalies in training would be immensely useful.
We develop a novel hyperspherical Variational Auto-Encoder (VAE) via stereographic projections with a gyroplane layer.
We present state-of-the-art results on visual anomaly benchmarks in precision manufacturing and inspection, demonstrating real-world utility in industrial AI scenarios.
- Score: 79.22051549519989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anomalies are by definition rare, thus labeled examples are very limited or
nonexistent, and likely do not cover unforeseen scenarios. Unsupervised
learning methods that don't necessarily encounter anomalies in training would
be immensely useful. Generative vision models can be useful in this regard but
do not sufficiently represent normal and abnormal data distributions. To this
end, we propose constant curvature manifolds for embedding data distributions
in unsupervised visual anomaly detection. Through theoretical and empirical
explorations of manifold shapes, we develop a novel hyperspherical Variational
Auto-Encoder (VAE) via stereographic projections with a gyroplane layer - a
complete equivalent to the Poincaré VAE. This approach with manifold
projections is beneficial in terms of model generalization and can yield more
interpretable representations. We present state-of-the-art results on visual
anomaly benchmarks in precision manufacturing and inspection, demonstrating
real-world utility in industrial AI scenarios. We further demonstrate the
approach on the challenging problem of histopathology: our unsupervised
approach effectively detects cancerous brain tissue from noisy whole-slide
images, learning a smooth, latent organization of tissue types that provides an
interpretable decision tool for medical professionals.
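To make the geometric ingredient above concrete, here is a minimal NumPy sketch of the stereographic projection between the unit hypersphere and Euclidean space that the abstract builds on. The function names and the round-trip check are illustrative assumptions rather than the authors' implementation, and the gyroplane layer itself is not shown.

```python
import numpy as np

def stereographic_project(x, eps=1e-9):
    """Project a point x on the unit sphere S^n in R^{n+1}
    (north pole excluded) onto R^n from the north pole."""
    return x[..., :-1] / (1.0 - x[..., -1:] + eps)

def stereographic_lift(y):
    """Inverse projection: map y in R^n back onto S^n in R^{n+1}."""
    sq = np.sum(y * y, axis=-1, keepdims=True)
    return np.concatenate([2.0 * y, sq - 1.0], axis=-1) / (sq + 1.0)

# Round-trip check on random points of S^2 embedded in R^3.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
x /= np.linalg.norm(x, axis=-1, keepdims=True)    # place points on the sphere
y = stereographic_project(x)                      # flat "chart" coordinates
x_back = stereographic_lift(y)
print(np.allclose(x, x_back, atol=1e-6))          # expect True away from the north pole
```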
Related papers
- GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection [60.78684630040313]
Diffusion models tend to reconstruct normal counterparts of test images once a certain amount of noise has been added.
From the global perspective, the difficulty of reconstructing images with different anomalies is uneven.
We propose a global and local adaptive diffusion model (abbreviated to GLAD) for unsupervised anomaly detection.
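The summary above describes the reconstruct-and-compare recipe common to diffusion-based detectors: perturb a test image with noise, reconstruct it toward normal appearance, and score the pixel-wise discrepancy. Below is a minimal, model-agnostic sketch of that scoring step; `denoise_fn` is a hypothetical stand-in for a pretrained reconstruction model, not GLAD's actual network.

```python
import numpy as np

def anomaly_map(image, denoise_fn, noise_level=0.2, seed=0):
    """Reconstruct-and-compare anomaly scoring.

    image       : float array in [0, 1], shape (H, W, C)
    denoise_fn  : callable returning a 'normal-looking' reconstruction
                  (placeholder for a pretrained diffusion or autoencoder model)
    noise_level : std of the Gaussian noise injected before reconstruction
    """
    rng = np.random.default_rng(seed)
    noisy = np.clip(image + noise_level * rng.normal(size=image.shape), 0.0, 1.0)
    recon = denoise_fn(noisy)
    # Per-pixel residual: large wherever the model cannot reproduce the input,
    # i.e. where the input departs from the learned normal appearance.
    return np.abs(image - recon).mean(axis=-1)

# Toy usage with a deliberately crude "denoiser" so the snippet runs standalone.
toy_denoiser = lambda x: np.full_like(x, x.mean())
img = np.random.default_rng(1).random((32, 32, 3))
print(anomaly_map(img, toy_denoiser).shape)   # (32, 32) anomaly heat map
```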
arXiv Detail & Related papers (2024-06-11T17:27:23Z)
- Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts [25.629973843455495]
Generalist Anomaly Detection (GAD) aims to train a single detection model that can generalize to detect anomalies in diverse datasets from different application domains without further training on the target data.
We introduce a novel approach that learns an in-context residual learning model for GAD, termed InCTRL.
InCTRL is the best performer and significantly outperforms state-of-the-art competing methods.
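One way to picture the in-context residual idea in this entry: embed a few normal "prompt" images from the target dataset, embed the query, and score the query by its residual to the closest normal features. The nearest-neighbour reduction below is an illustrative simplification under that reading, not InCTRL's learned model.

```python
import numpy as np

def in_context_residual_score(query_feats, prompt_feats):
    """Score a query image by its residual to few-shot normal prompts.

    query_feats  : (n_patches, d) patch features of the test image
    prompt_feats : (n_prompts * n_patches, d) patch features of normal prompts
    Returns the patch-wise residual and its mean as an image-level score.
    """
    # Distance from every query patch to every normal prompt patch.
    d = np.linalg.norm(query_feats[:, None, :] - prompt_feats[None, :, :], axis=-1)
    residual = d.min(axis=1)          # residual to the closest normal patch
    return residual, float(residual.mean())

rng = np.random.default_rng(0)
prompts = rng.normal(size=(4 * 49, 64))     # 4 normal prompt images, 7x7 patches each
normal_query = rng.normal(size=(49, 64))
odd_query = normal_query + 3.0              # feature shift standing in for an anomaly
print(in_context_residual_score(odd_query, prompts)[1]
      > in_context_residual_score(normal_query, prompts)[1])   # expect True
```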
arXiv Detail & Related papers (2024-03-11T08:07:46Z)
- AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model [59.08735812631131]
Anomaly inspection plays an important role in industrial manufacture.
Existing anomaly inspection methods are limited in their performance due to insufficient anomaly data.
We propose AnomalyDiffusion, a novel diffusion-based few-shot anomaly generation model.
arXiv Detail & Related papers (2023-12-10T05:13:40Z)
- Unsupervised Anomaly Detection via Nonlinear Manifold Learning [0.0]
Anomalies are samples that significantly deviate from the rest of the data and their detection plays a major role in building machine learning models.
We introduce a robust, efficient, and interpretable methodology based on nonlinear manifold learning to detect anomalies in unsupervised settings.
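A generic illustration of the manifold-learning recipe this entry points to: fit a nonlinear low-dimensional embedding on (mostly) normal data, reconstruct each sample from its manifold coordinates, and flag samples with large reconstruction error. The kernel-PCA model and the quantile threshold below are assumptions for the sketch, not the paper's exact method.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 2)) @ np.array([[1.0, 0.5], [0.0, 0.1]])  # thin, near-linear manifold
outliers = rng.uniform(-4.0, 4.0, size=(10, 2))                          # off-manifold samples
X = np.vstack([normal, outliers])

# Learn a nonlinear 1-D embedding of the normal data, with an inverse map back.
kpca = KernelPCA(n_components=1, kernel="rbf", gamma=0.5,
                 fit_inverse_transform=True).fit(normal)

# Anomaly score: distance between each sample and its reconstruction from
# manifold coordinates -- large when the sample lies off the learned manifold.
recon = kpca.inverse_transform(kpca.transform(X))
scores = np.linalg.norm(X - recon, axis=1)

threshold = np.quantile(scores[:500], 0.99)   # calibrate on the normal split
print("flagged outliers:", int((scores[500:] > threshold).sum()), "of 10")
```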
arXiv Detail & Related papers (2023-06-15T18:48:10Z)
- Confidence-Aware and Self-Supervised Image Anomaly Localisation [7.099105239108548]
We discuss an improved self-supervised single-class training strategy that supports the approximation of probabilistic inference with loosened feature locality constraints.
Our method is integrated into several out-of-distribution (OOD) detection models, and we show evidence that it outperforms the state-of-the-art on various benchmark datasets.
arXiv Detail & Related papers (2023-03-23T12:48:47Z)
- Prototypical Residual Networks for Anomaly Detection and Localization [80.5730594002466]
We propose a framework called Prototypical Residual Network (PRN).
PRN learns feature residuals of varying scales and sizes between anomalous and normal patterns to accurately reconstruct the segmentation maps of anomalous regions.
We present a variety of anomaly generation strategies that consider both seen and unseen appearance variance to enlarge and diversify anomalies.
arXiv Detail & Related papers (2022-12-05T05:03:46Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
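The "labeled anomalies plus a prior probability" mechanism in this entry is the deviation-loss idea: anomaly scores of normal samples are pulled toward reference scores drawn from a prior (typically a standard Gaussian), while scores of the few labeled anomalies are pushed at least a margin above it. A minimal sketch of that loss follows; the Gaussian prior and margin value are common defaults assumed here, not taken from the entry.

```python
import numpy as np

def deviation_loss(scores, labels, margin=5.0, n_ref=5000, seed=0):
    """Deviation loss for weakly-supervised anomaly detection.

    scores : model anomaly scores, shape (batch,)
    labels : 1 for labeled anomalies, 0 for (assumed) normal samples
    Reference scores come from a standard Gaussian prior; deviations are
    measured in units of that prior's spread.
    """
    rng = np.random.default_rng(seed)
    ref = rng.normal(size=n_ref)                       # prior reference scores
    dev = (scores - ref.mean()) / (ref.std() + 1e-12)  # z-score w.r.t. the prior
    # Normal samples: keep the deviation small. Anomalies: push it above the margin.
    per_sample = (1 - labels) * np.abs(dev) + labels * np.maximum(0.0, margin - dev)
    return float(per_sample.mean())

scores = np.array([0.1, -0.2, 4.8, 0.3])
labels = np.array([0, 0, 1, 0])
print(round(deviation_loss(scores, labels), 3))
```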
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Constrained Contrastive Distribution Learning for Unsupervised Anomaly Detection and Localisation in Medical Images [23.79184121052212]
Unsupervised anomaly detection (UAD) learns one-class classifiers exclusively with normal (i.e., healthy) images.
We propose a novel self-supervised representation learning method, called Constrained Contrastive Distribution learning for anomaly detection (CCD).
Our method outperforms current state-of-the-art UAD approaches on three different colonoscopy and fundus screening datasets.
arXiv Detail & Related papers (2021-03-05T01:56:58Z)
- Self-Taught Semi-Supervised Anomaly Detection on Upper Limb X-rays [11.859913430860335]
Supervised deep networks take for granted a large number of annotations by radiologists.
Our approach's rationale is to use pretext tasks to leverage unlabeled data.
We show that our method outperforms baselines across unsupervised and self-supervised anomaly detection settings.
arXiv Detail & Related papers (2021-02-19T12:32:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.