Absolute-Unified Multi-Class Anomaly Detection via Class-Agnostic Distribution Alignment
- URL: http://arxiv.org/abs/2404.00724v2
- Date: Tue, 16 Apr 2024 13:28:22 GMT
- Title: Absolute-Unified Multi-Class Anomaly Detection via Class-Agnostic Distribution Alignment
- Authors: Jia Guo, Haonan Han, Shuai Lu, Weihang Zhang, Huiqi Li
- Abstract summary: Conventional unsupervised anomaly detection (UAD) methods build separate models for each object category.
Recent studies have proposed to train a unified model for multiple classes, namely model-unified UAD.
We present a simple yet powerful method to address multi-class anomaly detection without any class information, namely absolute-unified UAD.
- Score: 27.375917265177847
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional unsupervised anomaly detection (UAD) methods build separate models for each object category. Recent studies have proposed to train a unified model for multiple classes, namely model-unified UAD. However, such methods still implement the unified model separately on each class during inference with respective anomaly decision thresholds, which hinders their application when the image categories are entirely unavailable. In this work, we present a simple yet powerful method to address multi-class anomaly detection without any class information, namely \textit{absolute-unified} UAD. We target the crux of prior works in this challenging setting: different objects have mismatched anomaly score distributions. We propose Class-Agnostic Distribution Alignment (CADA) to align the mismatched score distribution of each implicit class without knowing class information, which enables unified anomaly detection for all classes and samples. The essence of CADA is to predict each class's score distribution of normal samples given any image, normal or anomalous, of this class. As a general component, CADA can activate the potential of nearly all UAD methods under absolute-unified setting. Our approach is extensively evaluated under the proposed setting on two popular UAD benchmark datasets, MVTec AD and VisA, where we exceed previous state-of-the-art by a large margin.
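To make the core idea concrete, here is a minimal Python sketch of class-agnostic score alignment in the spirit of CADA: raw anomaly scores from a frozen UAD model are standardized by the predicted mean and standard deviation of the normal-score distribution of each image's implicit class, so that a single threshold can be shared across classes. In the actual method these statistics are produced by a learned, class-agnostic predictor; the hard-coded values, the `align_scores` helper, and the threshold below are illustrative assumptions only.

```python
import numpy as np

def align_scores(raw_scores, pred_means, pred_stds, eps=1e-8):
    """Map raw anomaly scores from implicitly different classes onto a shared
    scale by standardizing each score with the *predicted* statistics of that
    image's normal-score distribution.

    raw_scores : (N,) raw anomaly scores from any frozen UAD backbone
    pred_means : (N,) predicted mean of normal scores for each image's class
    pred_stds  : (N,) predicted std of normal scores for each image's class
    """
    raw_scores = np.asarray(raw_scores, dtype=np.float64)
    return (raw_scores - np.asarray(pred_means)) / (np.asarray(pred_stds) + eps)

# Toy example: two implicit classes whose raw score ranges do not match.
# Class A normals score around 0.2, class B normals around 1.5; one threshold
# on raw scores would fail, but the aligned scores share a single scale.
raw = np.array([0.25, 0.90, 1.55, 3.10])   # [A normal, A anomaly, B normal, B anomaly]
mu  = np.array([0.20, 0.20, 1.50, 1.50])   # predicted normal-score means per implicit class
sd  = np.array([0.05, 0.05, 0.30, 0.30])   # predicted normal-score stds per implicit class

aligned = align_scores(raw, mu, sd)
threshold = 3.0                            # one class-agnostic threshold after alignment
print(aligned > threshold)                 # [False  True False  True]
```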
Related papers
- Toward Multi-class Anomaly Detection: Exploring Class-aware Unified Model against Inter-class Interference [67.36605226797887]
We introduce a Multi-class Implicit Neural representation Transformer for unified Anomaly Detection (MINT-AD).
By learning the multi-class distributions, the model generates class-aware query embeddings for the transformer decoder.
MINT-AD can project category and position information into a feature embedding space, further supervised by classification and prior probability loss functions.
arXiv Detail & Related papers (2024-03-21T08:08:31Z)
- Hierarchical Gaussian Mixture Normalizing Flow Modeling for Unified Anomaly Detection [12.065053799927506]
We propose a novel hierarchical Gaussian mixture normalizing flow modeling method (HGAD) for unified anomaly detection.
Our HGAD consists of two key components: inter-class Gaussian mixture modeling and intra-class mixed class centers learning.
We evaluate our method on four real-world AD benchmarks, where we can significantly improve the previous NF-based AD methods and also outperform the SOTA unified AD methods.
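As a rough, hedged illustration of the inter-class Gaussian mixture idea (not the paper's hierarchical normalizing-flow formulation), the sketch below fits a Gaussian mixture with one component per implicit class on pooled normal features and scores test features by their negative log-likelihood; the synthetic features and component count are assumptions for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for normal-image features from several classes (one mode per class).
normal_feats = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(200, 8)) for c in (-2.0, 0.0, 2.0)
])

# One mixture component per implicit class approximates the inter-class structure.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(normal_feats)

def anomaly_score(feats):
    # Low likelihood under the normal-feature mixture -> high anomaly score.
    return -gmm.score_samples(feats)

test = np.vstack([rng.normal(0.0, 0.3, size=(1, 8)),   # near a normal mode
                  rng.normal(6.0, 0.3, size=(1, 8))])  # far from every mode
print(anomaly_score(test))  # the second score should be much larger
```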
arXiv Detail & Related papers (2024-03-20T07:21:37Z)
- Attention-based Class-Conditioned Alignment for Multi-Source Domain Adaptation of Object Detectors [11.616494893839757]
Domain adaptation methods for object detection (OD) strive to mitigate the impact of distribution shifts by promoting feature alignment across source and target domains.
Most state-of-the-art multi-source domain adaptation (MSDA) methods for OD perform feature alignment in a class-agnostic manner.
We propose an attention-based class-conditioned alignment method for MSDA that aligns instances of each object category across domains.
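A minimal sketch of class-conditioned alignment, assuming per-class prototypes as the alignment target: instead of matching whole-domain feature statistics, source and target instance features are matched class by class. The prototype-MSE loss below is a simplified stand-in for the paper's attention-based alignment, and the target labels would in practice be pseudo-labels.

```python
import torch

def class_conditioned_alignment_loss(src_feats, src_labels, tgt_feats, tgt_labels, num_classes):
    """Align per-class prototypes (mean instance features) across domains.

    src_feats / tgt_feats  : (N, D) instance features from source / target domain
    src_labels / tgt_labels: (N,) class indices (pseudo-labels for the target in practice)
    """
    loss, matched = src_feats.new_zeros(()), 0
    for c in range(num_classes):
        s_mask, t_mask = src_labels == c, tgt_labels == c
        if s_mask.any() and t_mask.any():
            proto_s = src_feats[s_mask].mean(dim=0)
            proto_t = tgt_feats[t_mask].mean(dim=0)
            loss = loss + torch.nn.functional.mse_loss(proto_s, proto_t)
            matched += 1
    return loss / max(matched, 1)

# Toy usage with random instance features and labels.
src, tgt = torch.randn(32, 128), torch.randn(32, 128)
loss = class_conditioned_alignment_loss(src, torch.randint(0, 5, (32,)),
                                        tgt, torch.randint(0, 5, (32,)), num_classes=5)
print(loss.item())
```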
arXiv Detail & Related papers (2024-03-14T23:31:41Z)
- Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts [25.629973843455495]
Generalist Anomaly Detection (GAD) aims to train one single detection model that can generalize to detect anomalies in diverse datasets from different application domains without further training on the target data.
We introduce a novel approach that learns an in-context residual learning model for GAD, termed InCTRL.
InCTRL is the best performer and significantly outperforms state-of-the-art competing methods.
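A hedged sketch of the in-context residual idea: score a query image by the residual between its features and the features of a few normal sample prompts from the target dataset. The real InCTRL learns this comparison on top of pretrained vision-language features at multiple levels; the cosine-normalized nearest-prompt distance below is only a simplified stand-in.

```python
import torch

def in_context_residual_score(query_feat, prompt_feats):
    """query_feat   : (D,) feature of the test image
    prompt_feats    : (K, D) features of K few-shot normal samples from the target dataset
    Returns a scalar anomaly score: distance to the closest normal prompt."""
    q = torch.nn.functional.normalize(query_feat, dim=-1)
    p = torch.nn.functional.normalize(prompt_feats, dim=-1)
    residuals = q.unsqueeze(0) - p            # (K, D) in-context residuals
    return residuals.norm(dim=-1).min()       # small for normal queries, large for anomalies

# Toy usage: the anomalous query drifts away from the normal prompts in feature space.
prompts = torch.randn(8, 512)
normal_query = prompts[0] + 0.05 * torch.randn(512)
anomalous_query = 3.0 * torch.randn(512)
print(in_context_residual_score(normal_query, prompts).item(),
      in_context_residual_score(anomalous_query, prompts).item())  # second is typically larger
```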
arXiv Detail & Related papers (2024-03-11T08:07:46Z)
- Multi-Class Anomaly Detection based on Regularized Discriminative Coupled hypersphere-based Feature Adaptation [85.15324009378344]
This paper introduces a new model by including class discriminative properties obtained by a modified Regularized Discriminative Variational Auto-Encoder (RD-VAE) in the feature extraction process.
The proposed Regularized Discriminative Coupled-hypersphere-based Feature Adaptation (RD-CFA) forms a solution for multi-class anomaly detection.
arXiv Detail & Related papers (2023-11-24T14:26:07Z)
- Open-Vocabulary Video Anomaly Detection [57.552523669351636]
Video anomaly detection (VAD) with weak supervision has achieved remarkable performance in utilizing video-level labels to discriminate whether a video frame is normal or abnormal.
Recent studies attempt to tackle a more realistic setting, open-set VAD, which aims to detect unseen anomalies given seen anomalies and normal videos.
This paper takes a step further and explores open-vocabulary video anomaly detection (OVVAD), in which we aim to leverage pre-trained large models to detect and categorize seen and unseen anomalies.
arXiv Detail & Related papers (2023-11-13T02:54:17Z)
- Mixed Attention Auto Encoder for Multi-Class Industrial Anomaly Detection [2.8519768339207356]
We propose a unified mixed-attention auto encoder (MAAE) to implement multi-class anomaly detection with a single model.
To alleviate the performance degradation due to the diverse distribution patterns of different categories, we employ spatial attention and channel attention.
MAAE delivers remarkable performances on the benchmark dataset compared with the state-of-the-art methods.
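For concreteness, here is a generic PyTorch sketch of the channel-attention and spatial-attention blocks such an autoencoder could use (in the style of SE/CBAM modules); MAAE's exact attention design may differ, and the layer sizes below are arbitrary.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweight channels using globally pooled statistics (SE-style)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.mlp(x.mean(dim=(2, 3)))        # (B, C) channel weights
        return x * w[:, :, None, None]

class SpatialAttention(nn.Module):
    """Reweight spatial positions using pooled channel statistics."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)  # (B, 2, H, W)
        return x * torch.sigmoid(self.conv(pooled))

# Toy usage on an auto-encoder feature map.
feat = torch.randn(2, 64, 32, 32)
feat = SpatialAttention()(ChannelAttention(64)(feat))
print(feat.shape)  # torch.Size([2, 64, 32, 32])
```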
arXiv Detail & Related papers (2023-09-22T08:17:48Z)
- Data-Efficient and Interpretable Tabular Anomaly Detection [54.15249463477813]
We propose a novel framework that adapts a white-box model class, Generalized Additive Models, to detect anomalies.
In addition, the proposed framework, DIAD, can incorporate a small amount of labeled data to further boost anomaly detection performances in semi-supervised settings.
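A loose, assumption-laden illustration of why an additive (GAM-style) model is interpretable for tabular anomaly detection: if the score decomposes into per-feature terms, the feature driving an anomaly can be read off directly. The histogram-density toy detector below is not DIAD itself, just a minimal additive scorer.

```python
import numpy as np

class AdditiveHistogramDetector:
    """Toy additive detector: per-feature histogram densities, summed into an
    anomaly score, so each feature's contribution can be inspected directly."""

    def __init__(self, bins=20):
        self.bins = bins

    def fit(self, X):
        self.hists = []
        for j in range(X.shape[1]):
            counts, edges = np.histogram(X[:, j], bins=self.bins, density=True)
            self.hists.append((counts + 1e-6, edges))
        return self

    def feature_scores(self, X):
        scores = np.empty_like(X, dtype=np.float64)
        for j, (counts, edges) in enumerate(self.hists):
            idx = np.clip(np.searchsorted(edges, X[:, j]) - 1, 0, len(counts) - 1)
            scores[:, j] = -np.log(counts[idx])    # rare bins -> large contribution
        return scores

    def score(self, X):
        return self.feature_scores(X).sum(axis=1)  # additive total anomaly score

rng = np.random.default_rng(0)
det = AdditiveHistogramDetector().fit(rng.normal(size=(1000, 5)))
x = np.array([[0.0, 0.0, 8.0, 0.0, 0.0]])            # feature 2 is far out of range
print(det.score(x), det.feature_scores(x).argmax())  # feature 2 dominates the score
```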
arXiv Detail & Related papers (2022-03-03T22:02:56Z)
- UMAD: Universal Model Adaptation under Domain and Category Shift [138.12678159620248]
The Universal Model ADaptation (UMAD) framework handles both UDA scenarios without access to the source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
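The summary's "informative consistency score" is not spelled out here, so the sketch below shows one generic way to build a consistency-style score for separating known from unknown samples: the Jensen-Shannon agreement between two classifier heads' predictions. Treat it as an assumed illustration, not UMAD's actual formulation.

```python
import torch
import torch.nn.functional as F

def consistency_score(logits_a, logits_b):
    """Generic open-set heuristic: agreement between two classifier heads'
    predictive distributions. Known-class samples tend to produce consistent,
    confident predictions; unknown-class samples tend to produce disagreement.

    logits_a, logits_b : (N, C) logits from two heads for the same inputs
    Returns (N,) scores in [0, 1]; higher = more consistent = more likely 'known'.
    """
    p = F.softmax(logits_a, dim=-1)
    q = F.softmax(logits_b, dim=-1)
    m = 0.5 * (p + q)
    jsd = 0.5 * (F.kl_div(m.log(), p, reduction="none").sum(-1)
                 + F.kl_div(m.log(), q, reduction="none").sum(-1))
    return 1.0 - jsd / torch.log(torch.tensor(2.0))   # JS divergence is bounded by log 2

# Toy usage: consistent vs. contradictory predictions over 4 classes.
agree = consistency_score(torch.tensor([[5., 0., 0., 0.]]), torch.tensor([[4., 0., 0., 0.]]))
clash = consistency_score(torch.tensor([[5., 0., 0., 0.]]), torch.tensor([[0., 5., 0., 0.]]))
print(agree.item(), clash.item())   # the first is close to 1, the second much lower
```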
arXiv Detail & Related papers (2021-12-16T01:22:59Z)
- Self-Trained One-class Classification for Unsupervised Anomaly Detection [56.35424872736276]
Anomaly detection (AD) has various applications across domains, from manufacturing to healthcare.
In this work, we focus on unsupervised AD problems whose entire training data are unlabeled and may contain both normal and anomalous samples.
To tackle this problem, we build a robust one-class classification framework via data refinement.
We show that our method outperforms the state-of-the-art one-class classification method by 6.3 AUC and 12.5 average precision.
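A minimal sketch of the data-refinement loop described above, using scikit-learn's OneClassSVM as a stand-in for the paper's deep one-class model: fit on all unlabeled data, drop the samples currently scored most anomalous, and refit; the contamination rate and iteration count are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def refine_and_fit(X, contamination=0.1, iters=3):
    """Iteratively refit a one-class model after discarding the samples it
    currently considers most anomalous (stand-in for the paper's deep model)."""
    keep = np.ones(len(X), dtype=bool)
    model = None
    for _ in range(iters):
        model = OneClassSVM(nu=contamination, gamma="scale").fit(X[keep])
        scores = model.decision_function(X)   # lower = more anomalous
        cutoff = np.quantile(scores, contamination)
        keep = scores > cutoff                # refine the training set
    return model, keep

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(450, 2)),   # unlabeled normal samples
               rng.normal(6, 1, size=(50, 2))])   # contaminating anomalies
model, keep = refine_and_fit(X, contamination=0.1)
print(keep[:450].mean(), keep[450:].mean())       # most normals kept, most anomalies dropped
```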
arXiv Detail & Related papers (2021-06-11T01:36:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.