AnomalyHybrid: A Domain-agnostic Generative Framework for General Anomaly Detection
- URL: http://arxiv.org/abs/2504.04340v1
- Date: Sun, 06 Apr 2025 03:28:30 GMT
- Title: AnomalyHybrid: A Domain-agnostic Generative Framework for General Anomaly Detection
- Authors: Ying Zhao
- Abstract summary: AnomalyHybrid is a domain-agnostic framework designed to generate authentic and diverse anomalies. It is a Generative Adversarial Network (GAN)-based framework with two decoders that integrate the appearance of the reference image into the depth and edge structures of the target image, respectively.
- Score: 3.180143442781838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anomaly generation is an effective way to mitigate data scarcity for the anomaly detection task. Most existing works excel at industrial anomaly generation with multiple specialist models or large generative models, and rarely generalize to anomalies in other applications. In this paper, we present AnomalyHybrid, a domain-agnostic framework designed to generate authentic and diverse anomalies simply by combining a reference and a target image. AnomalyHybrid is a Generative Adversarial Network (GAN)-based framework with two decoders that integrate the appearance of the reference image into the depth and edge structures of the target image, respectively. With the help of the depth decoder, AnomalyHybrid achieves authentic generation, especially for anomalies with changing depth values such as protrusions and dents. Moreover, the edge decoder relaxes fine-grained structural control and brings more diversity. Without using annotations, AnomalyHybrid is easily trained with sets of color, depth, and edge maps of the same images under different augmentations. Extensive experiments carried out on the HeliconiusButterfly, MVTecAD and MVTec3D datasets demonstrate that AnomalyHybrid surpasses the GAN-based state of the art on anomaly generation and its downstream anomaly classification, detection and segmentation tasks. On the MVTecAD dataset, AnomalyHybrid achieves 2.06/0.32 IS/LPIPS for anomaly generation, 52.6 Acc for anomaly classification with ResNet34, and 97.3/72.9 AP for image/pixel-level anomaly detection with a simple UNet.
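The abstract only outlines the architecture, so the following is a minimal, hypothetical PyTorch sketch of the two-decoder idea: one appearance encoder for the reference color image, separate structure encoders for the target's depth and edge maps, and one decoder per structure. All module names, layer sizes, and the concatenation-based fusion are illustrative assumptions, not the authors' implementation; the adversarial discriminators and training losses are omitted.

```python
# Hypothetical sketch of a two-decoder hybrid generator in the spirit of AnomalyHybrid.
# Module names and the fusion scheme are assumptions for illustration only.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=1):
    # Conv -> InstanceNorm -> ReLU, a common building block in image-to-image GANs.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class AppearanceEncoder(nn.Module):
    """Encodes the color appearance of the reference image."""
    def __init__(self, in_ch=3, feat=64):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, feat), conv_block(feat, feat, stride=2))

    def forward(self, x):
        return self.net(x)


class StructureEncoder(nn.Module):
    """Encodes a single-channel structure map (depth or edge) of the target image."""
    def __init__(self, in_ch=1, feat=64):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, feat), conv_block(feat, feat, stride=2))

    def forward(self, x):
        return self.net(x)


class HybridDecoder(nn.Module):
    """Fuses appearance and structure features and decodes an RGB hybrid image."""
    def __init__(self, feat=64):
        super().__init__()
        self.fuse = conv_block(2 * feat, feat)
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(feat, feat),
            nn.Conv2d(feat, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, appearance_feat, structure_feat):
        fused = self.fuse(torch.cat([appearance_feat, structure_feat], dim=1))
        return self.up(fused)


class TwoDecoderGenerator(nn.Module):
    """One shared appearance encoder feeding a depth-guided and an edge-guided decoder."""
    def __init__(self):
        super().__init__()
        self.app_enc = AppearanceEncoder()
        self.depth_enc = StructureEncoder()
        self.edge_enc = StructureEncoder()
        self.depth_dec = HybridDecoder()
        self.edge_dec = HybridDecoder()

    def forward(self, reference_rgb, target_depth, target_edge):
        a = self.app_enc(reference_rgb)
        hybrid_from_depth = self.depth_dec(a, self.depth_enc(target_depth))
        hybrid_from_edge = self.edge_dec(a, self.edge_enc(target_edge))
        return hybrid_from_depth, hybrid_from_edge


if __name__ == "__main__":
    gen = TwoDecoderGenerator()
    ref = torch.randn(1, 3, 128, 128)    # reference color image (appearance source)
    depth = torch.randn(1, 1, 128, 128)  # target depth map (structure source)
    edge = torch.randn(1, 1, 128, 128)   # target edge map (structure source)
    out_d, out_e = gen(ref, depth, edge)
    print(out_d.shape, out_e.shape)      # both (1, 3, 128, 128)
```

In this sketch, the paper's annotation-free training would correspond to feeding differently augmented color/depth/edge views of the same image and supervising the two decoder outputs adversarially; those losses are not shown here.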
Related papers
- Bi-Grid Reconstruction for Image Anomaly Detection [0.0]
This paper introduces GRAD: Bi-Grid Reconstruction for Image Anomaly Detection. It employs two continuous grids to enhance anomaly detection from both normal and abnormal perspectives. It excels in overall accuracy and in discerning subtle differences, demonstrating its superiority over existing methods.
arXiv Detail & Related papers (2025-04-01T10:06:38Z) - 3CAD: A Large-Scale Real-World 3C Product Dataset for Unsupervised Anomaly Detection [22.150521360544744]
We propose a new large-scale anomaly detection dataset called 3CAD. 3CAD includes eight different types of manufactured parts, totaling 27,039 high-resolution images labeled with pixel-level anomalies. This is the largest and first anomaly detection dataset dedicated to 3C product quality control.
arXiv Detail & Related papers (2025-02-09T03:37:54Z) - Dual-Interrelated Diffusion Model for Few-Shot Anomaly Image Generation [22.164957586513776]
The performance of anomaly inspection in industrial manufacturing is constrained by the scarcity of anomaly data.
We propose DualAnoDiff, a novel diffusion-based few-shot anomaly image generation model.
Our approach significantly improves the performance of downstream anomaly inspection tasks, including anomaly detection, anomaly localization, and anomaly classification tasks.
arXiv Detail & Related papers (2024-08-24T08:09:32Z) - Spatial-aware Attention Generative Adversarial Network for Semi-supervised Anomaly Detection in Medical Image [63.59114880750643]
We introduce a novel Spatial-aware Attention Generative Adversarial Network (SAGAN) for one-class semi-supervised generation of health images.
SAGAN generates high-quality health images corresponding to unlabeled data, guided by the reconstruction of normal images and restoration of pseudo-anomaly images.
Extensive experiments on three medical datasets demonstrate that the proposed SAGAN outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2024-05-21T15:41:34Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to Stable Diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model [59.08735812631131]
Anomaly inspection plays an important role in industrial manufacture.
Existing anomaly inspection methods are limited in their performance due to insufficient anomaly data.
We propose AnomalyDiffusion, a novel diffusion-based few-shot anomaly generation model.
arXiv Detail & Related papers (2023-12-10T05:13:40Z) - Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation : A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo-anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with other existing state-of-the-art PAs generation and reconstruction based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z) - Prototypical Residual Networks for Anomaly Detection and Localization [80.5730594002466]
We propose a framework called Prototypical Residual Network (PRN).
PRN learns feature residuals of varying scales and sizes between anomalous and normal patterns to accurately reconstruct the segmentation maps of anomalous regions.
We present a variety of anomaly generation strategies that consider both seen and unseen appearance variance to enlarge and diversify anomalies.
arXiv Detail & Related papers (2022-12-05T05:03:46Z) - Self-Supervised Masked Convolutional Transformer Block for Anomaly Detection [122.4894940892536]
We present a novel self-supervised masked convolutional transformer block (SSMCTB) that comprises the reconstruction-based functionality at a core architectural level.
In this work, we extend our previous self-supervised predictive convolutional attentive block (SSPCAB) with a 3D masked convolutional layer, a transformer for channel-wise attention, as well as a novel self-supervised objective based on Huber loss.
arXiv Detail & Related papers (2022-09-25T04:56:10Z) - Unsupervised Two-Stage Anomaly Detection [18.045265572566276]
Anomaly detection from a single image is challenging since anomaly data are always rare and can be of highly unpredictable types.
We propose a two-stage approach, which generates high-fidelity yet anomaly-free reconstructions.
Our method outperforms the state of the art on four anomaly detection datasets.
arXiv Detail & Related papers (2021-03-22T08:57:27Z)