Multi-Perspective Anomaly Detection
- URL: http://arxiv.org/abs/2105.09903v1
- Date: Thu, 20 May 2021 17:07:36 GMT
- Title: Multi-Perspective Anomaly Detection
- Authors: Manav Madan, Peter Jakob, Tobias Schmid-Schirling, Abhinav Valada
- Abstract summary: We build upon the deep support vector data description algorithm and address multi-perspective anomaly detection.
We employ different augmentation techniques with a denoising process to deal with scarce one-class data.
We evaluate our approach on the new dices dataset using images from two different perspectives and also benchmark on the standard MNIST dataset.
- Score: 3.3511723893430476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view classification is inspired by the behavior of humans, especially
when fine-grained features or in our case rarely occurring anomalies are to be
detected. Current contributions point to the problem of how high-dimensional
data can be fused. In this work, we build upon the deep support vector data
description algorithm and address multi-perspective anomaly detection using
three different fusion techniques, i.e., early fusion, late fusion, and late
fusion with multiple decoders. We employ different augmentation techniques with
a denoising process to deal with scarce one-class data, which further improves
the performance (ROC AUC = 80%). Furthermore, we introduce the dices dataset
that consists of over 2000 grayscale images of falling dices from multiple
perspectives, with 5% of the images containing rare anomalies (e.g., drill
holes, sawing, or scratches). We evaluate our approach on the new dices dataset
using images from two different perspectives and also benchmark on the standard
MNIST dataset. Extensive experiments demonstrate that our proposed approach
exceeds the state-of-the-art on both the MNIST and dices datasets. To the best
of our knowledge, this is the first work that focuses on addressing
multi-perspective anomaly detection in images by jointly using different
perspectives together with a single objective function for anomaly detection.
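Since the abstract only names the three fusion variants, a minimal sketch may help make the setup concrete. The PyTorch-style code below is an illustrative assumption, not the authors' implementation: the class names, layer sizes, and the fusion flag are invented, but it shows how a Deep SVDD objective (minimizing the distance of embeddings to a fixed hypersphere centre) can be combined with early fusion (stacking the two views as input channels) or late fusion (separate encoders whose latent codes are concatenated) for two perspectives.

```python
# Minimal sketch (not the authors' code): a Deep SVDD objective combined with
# early or late fusion of two camera perspectives. Class names, layer sizes,
# and the fusion flag are illustrative assumptions.
import torch
import torch.nn as nn


class PerspectiveEncoder(nn.Module):
    """Small CNN mapping one grayscale view to a latent vector."""

    def __init__(self, in_channels=1, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class MultiPerspectiveSVDD(nn.Module):
    """Deep SVDD over two perspectives with early or late fusion."""

    def __init__(self, latent_dim=32, fusion="late"):
        super().__init__()
        self.fusion = fusion
        if fusion == "early":
            # Early fusion: stack both views as channels of a single input.
            self.encoder = PerspectiveEncoder(in_channels=2, latent_dim=latent_dim)
        else:
            # Late fusion: one encoder per view, latent codes concatenated.
            self.enc1 = PerspectiveEncoder(1, latent_dim)
            self.enc2 = PerspectiveEncoder(1, latent_dim)
            self.head = nn.Linear(2 * latent_dim, latent_dim)
        # Fixed hypersphere centre c (e.g. set from an initial forward pass).
        self.register_buffer("c", torch.zeros(latent_dim))

    def forward(self, view1, view2):
        if self.fusion == "early":
            return self.encoder(torch.cat([view1, view2], dim=1))
        return self.head(torch.cat([self.enc1(view1), self.enc2(view2)], dim=1))

    def svdd_loss(self, z):
        # One-class Deep SVDD objective: mean squared distance to the centre.
        return ((z - self.c) ** 2).sum(dim=1).mean()

    def anomaly_score(self, z):
        # Larger distance from the centre means more anomalous.
        return ((z - self.c) ** 2).sum(dim=1)
```

The paper's third variant, late fusion with multiple decoders, would additionally attach a reconstruction decoder per perspective for the denoising/augmentation stage; that part is omitted here for brevity.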
Related papers
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to fine-tune the adaptors and learn a task-oriented representation for anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to Stable Diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - Improving Vision Anomaly Detection with the Guidance of Language
Modality [64.53005837237754]
This paper tackles the challenges of the vision modality from a multimodal point of view.
We propose Cross-modal Guidance (CMG) to tackle the redundant information issue and sparse space issue.
To learn a more compact latent space for the vision anomaly detector, CMLE learns a correlation structure matrix from the language modality.
arXiv Detail & Related papers (2023-10-04T13:44:56Z) - Multimodal Industrial Anomaly Detection via Hybrid Fusion [59.16333340582885]
We propose a novel multimodal anomaly detection method with a hybrid fusion scheme.
Our model outperforms the state-of-the-art (SOTA) methods in both detection and segmentation precision on the MVTec-3D AD dataset.
arXiv Detail & Related papers (2023-03-01T15:48:27Z) - Target-aware Dual Adversarial Learning and a Multi-scenario
Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the problem of fusing infrared and visible images, which appear markedly different, for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse them in the common space via either iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, which is then unrolled into a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z) - AnoDFDNet: A Deep Feature Difference Network for Anomaly Detection [6.508649912734565]
We propose a novel anomaly detection (AD) approach for high-speed train images based on convolutional neural networks and the Vision Transformer.
The proposed method detects abnormal differences between two images of the same region taken at different times.
arXiv Detail & Related papers (2022-03-29T02:24:58Z) - Efficient Anomaly Detection Using Self-Supervised Multi-Cue Tasks [2.9237210794416755]
We introduce novel discriminative and generative tasks which focus on different visual cues.
We present a new out-of-distribution detection function and highlight its better stability compared to other out-of-distribution detection methods.
Our model can more accurately learn highly discriminative features using these self-supervised tasks.
arXiv Detail & Related papers (2021-11-24T09:54:50Z) - Unsupervised Two-Stage Anomaly Detection [18.045265572566276]
Anomaly detection from a single image is challenging since anomaly data is always rare and can be of highly unpredictable types.
We propose a two-stage approach, which generates high-fidelity yet anomaly-free reconstructions.
Our method outperforms the state of the art on four anomaly detection datasets.
arXiv Detail & Related papers (2021-03-22T08:57:27Z) - Multiresolution Knowledge Distillation for Anomaly Detection [10.799350080453982]
Unsupervised representation learning has proved to be a critical component of anomaly detection/localization in images.
The sample size is often not large enough to learn a rich, generalizable representation through conventional techniques.
Here, we propose to use the "distillation" of features at various layers of an expert network, pre-trained on ImageNet, into a simpler cloner network to tackle both issues (a minimal sketch of this distillation setup appears after this list).
arXiv Detail & Related papers (2020-11-22T21:16:35Z) - Robust Data Hiding Using Inverse Gradient Attention [82.73143630466629]
In the data hiding task, each pixel of a cover image should be treated differently, since pixels differ in how much modification they can tolerate.
We propose a novel deep data hiding scheme with Inverse Gradient Attention (IGA), combining the ideas of adversarial learning and an attention mechanism.
Empirically, extensive experiments show that the proposed model outperforms the state-of-the-art methods on two prevalent datasets.
arXiv Detail & Related papers (2020-11-21T19:08:23Z)
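For the multiresolution knowledge-distillation entry above, the following is a hedged sketch of the general idea: a small cloner network regresses intermediate activations of a frozen, ImageNet-pretrained expert on anomaly-free data, and the per-layer discrepancy serves as the anomaly score at test time. The VGG16 backbone, the chosen layers, and the MSE loss are assumptions for illustration, not the paper's exact configuration.

```python
# Hedged sketch of multiresolution feature distillation for anomaly detection
# (backbone, layer choices, and loss are illustrative, not the paper's setup).
import torch
import torch.nn as nn
import torchvision.models as models


class Cloner(nn.Module):
    """Small student that mimics intermediate VGG16 feature maps."""

    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(64, 128, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(128, 256, 3, padding=1), nn.ReLU()),
        ])

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        return feats


def expert_features(vgg, x):
    """Intermediate activations of a frozen, ImageNet-pretrained expert."""
    feats = []
    for i, layer in enumerate(vgg.features):
        x = layer(x)
        if i in (3, 8, 15):  # relu1_2, relu2_2, relu3_3 in torchvision's VGG16
            feats.append(x)
    return feats


def distillation_loss(expert_feats, cloner_feats):
    # Per-layer MSE between expert and cloner features; computed per image
    # at test time, the same quantity acts as the anomaly score.
    return sum(nn.functional.mse_loss(c, e.detach())
               for e, c in zip(expert_feats, cloner_feats))


# Usage sketch: train the cloner on anomaly-free images only.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
for p in vgg.parameters():
    p.requires_grad_(False)
cloner = Cloner()
x = torch.randn(4, 3, 224, 224)  # a batch of normal training images
loss = distillation_loss(expert_features(vgg, x), cloner(x))
```

Training minimizes this loss on anomaly-free images only; at inference, the same per-image loss is thresholded to flag anomalies.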
This list is automatically generated from the titles and abstracts of the papers in this site.