What makes a good data augmentation for few-shot unsupervised image
anomaly detection?
- URL: http://arxiv.org/abs/2304.03294v3
- Date: Fri, 21 Apr 2023 03:00:18 GMT
- Title: What makes a good data augmentation for few-shot unsupervised image
anomaly detection?
- Authors: Lingrui Zhang, Shuheng Zhang, Guoyang Xie, Jiaqi Liu, Hua Yan, Jinbao
Wang, Feng Zheng, Yaochu Jin
- Abstract summary: The impact of various data augmentation methods on different anomaly detection algorithms is investigated.
Results show that the performance of different industrial image anomaly detection (IAD) algorithms is not significantly affected by the specific data augmentation method employed.
- Score: 40.33586461619278
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Data augmentation is a promising technique for unsupervised anomaly detection
in industrial applications, where the availability of positive samples is often
limited by factors such as commercial competition and the difficulty of
collecting samples. This paper studies how to effectively select and apply data
augmentation methods for unsupervised anomaly detection, systematically
investigating the impact of various augmentation methods on different anomaly
detection algorithms through experiments. The results show that the performance
of industrial image anomaly detection (IAD) algorithms is not significantly
affected by the specific data augmentation method employed, and that combining
multiple augmentation methods does not necessarily yield further improvements
in detection accuracy, although it can achieve excellent results for specific
methods. These findings provide practical guidance on selecting appropriate data
augmentation methods for different requirements in IAD.
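As a concrete illustration of the selection problem studied in the abstract, the sketch below builds several candidate augmentation policies with torchvision and scores images with a simple nearest-neighbour distance on pretrained ResNet-18 features, so the same few-shot IAD setup can be re-run under each policy. This is a minimal sketch, not the authors' pipeline; the backbone, the policies, and the scoring rule are illustrative assumptions.

```python
# Illustrative sketch: comparing augmentation policies for few-shot IAD.
# Assumptions (not from the paper): a ResNet-18 feature extractor, a simple
# nearest-neighbour anomaly score, and user-supplied lists of PIL images.
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

# Candidate augmentation policies to compare (rotation, flips, colour jitter,
# and a combined policy), all followed by the usual ImageNet normalisation.
base = [T.Resize((224, 224)), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]
policies = {
    "none":     T.Compose(base),
    "rotate":   T.Compose([T.RandomRotation(10)] + base),
    "flip":     T.Compose([T.RandomHorizontalFlip(p=0.5)] + base),
    "jitter":   T.Compose([T.ColorJitter(0.2, 0.2, 0.2)] + base),
    "combined": T.Compose([T.RandomRotation(10),
                           T.RandomHorizontalFlip(p=0.5),
                           T.ColorJitter(0.2, 0.2, 0.2)] + base),
}

backbone = resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()          # use pooled features as embeddings
backbone.eval()

@torch.no_grad()
def embed(pil_images, transform, repeats=4):
    """Embed each (augmented) normal image; repeats enlarges the few-shot set."""
    feats = [backbone(transform(img).unsqueeze(0))
             for img in pil_images for _ in range(repeats)]
    return torch.cat(feats)

@torch.no_grad()
def anomaly_score(pil_image, memory, transform):
    """Distance to the nearest normal embedding: higher means more anomalous."""
    f = backbone(transform(pil_image).unsqueeze(0))
    return torch.cdist(f, memory).min().item()

# Usage (with user-supplied lists of PIL images):
# memory = embed(few_shot_normals, policies["rotate"])
# scores = [anomaly_score(img, memory, policies["none"]) for img in test_images]
```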
Related papers
- Self-Supervised Time-Series Anomaly Detection Using Learnable Data Augmentation [37.72735288760648]
We propose a learnable data augmentation-based time-series anomaly detection (LATAD) technique that is trained in a self-supervised manner.
LATAD extracts discriminative features from time-series data through contrastive learning.
In evaluations, LATAD achieved performance comparable to or better than state-of-the-art anomaly detection methods.
arXiv Detail & Related papers (2024-06-18T04:25:56Z)
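The contrastive, augmentation-driven training that the LATAD summary above mentions can be sketched generically as follows; the 1-D convolutional encoder, the jitter/scaling augmentations, and the NT-Xent loss are illustrative assumptions rather than the paper's actual learnable-augmentation design.

```python
# Generic sketch of contrastive training on augmented time-series windows.
# The encoder, augmentations, and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

class WindowEncoder(torch.nn.Module):
    """1-D conv encoder mapping a window (batch, channels, length) to an embedding."""
    def __init__(self, channels=3, dim=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv1d(channels, 32, kernel_size=5, padding=2), torch.nn.ReLU(),
            torch.nn.Conv1d(32, dim, kernel_size=5, padding=2), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool1d(1), torch.nn.Flatten())
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def augment(x):
    """Simple stochastic augmentations: additive jitter and per-channel scaling."""
    jitter = 0.05 * torch.randn_like(x)
    scale = 1.0 + 0.1 * torch.randn(x.size(0), x.size(1), 1)
    return (x + jitter) * scale

def nt_xent(z1, z2, temperature=0.2):
    """NT-Xent loss: matched augmented views are positives, the rest negatives."""
    z = torch.cat([z1, z2], dim=0)                      # (2B, dim)
    sim = z @ z.t() / temperature                       # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# One illustrative training step on unlabeled windows.
encoder = WindowEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
windows = torch.randn(16, 3, 100)                       # stand-in for real data
loss = nt_xent(encoder(augment(windows)), encoder(augment(windows)))
loss.backward(); opt.step()
# At test time, distance in embedding space to normal windows can serve as an anomaly score.
```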
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses between normal and adversarial samples to UAPs.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- AGAD: Adversarial Generative Anomaly Detection [12.68966318231776]
Anomaly detection suffers from a lack of anomalous examples, owing to the diversity of abnormalities and the difficulty of obtaining large-scale anomaly data.
We propose Adversarial Generative Anomaly Detection (AGAD), a self-contrast-based anomaly detection paradigm.
Our method generates pseudo-anomaly data for both supervised and semi-supervised anomaly detection scenarios.
arXiv Detail & Related papers (2023-04-09T10:40:02Z)
- Towards Interpretable Anomaly Detection via Invariant Rule Mining [2.538209532048867]
In this work, we pursue highly interpretable anomaly detection via invariant rule mining.
Specifically, we leverage decision tree learning and association rule mining to automatically generate invariant rules.
The generated invariant rules provide an explicit explanation of anomaly detection results and are therefore highly useful for subsequent decision-making.
arXiv Detail & Related papers (2022-11-24T13:03:20Z)
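A toy sketch of the invariant-rule idea summarised above: a shallow decision tree learns a relation that holds on normal data, its printed paths serve as human-readable rules, and records that break the learned relation are flagged. This is an illustrative simplification, not the paper's algorithm; the data, target relation, and threshold are assumptions.

```python
# Toy sketch of invariant-rule-style detection (not the paper's algorithm):
# learn a relation that holds on normal data with a shallow decision tree,
# then flag records that break it. Column roles and threshold are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
# Normal data obeys an (unknown to the detector) invariant: x2 ~ 2*x0 + x1.
X_norm = rng.normal(size=(500, 2))
y_norm = 2 * X_norm[:, 0] + X_norm[:, 1] + 0.05 * rng.normal(size=500)

tree = DecisionTreeRegressor(max_depth=3).fit(X_norm, y_norm)
threshold = np.quantile(np.abs(tree.predict(X_norm) - y_norm), 0.99)

def violates_invariant(x, y):
    """A record is anomalous if it breaks the relation learned on normal data."""
    return abs(tree.predict(np.atleast_2d(x))[0] - y) > threshold

print(export_text(tree))                     # human-readable rules ("explanations")
print(violates_invariant([0.5, -0.2], 0.8))  # consistent record -> usually False
print(violates_invariant([0.5, -0.2], 9.0))  # violates relation -> True
```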
- An Outlier Exposure Approach to Improve Visual Anomaly Detection Performance for Mobile Robots [76.36017224414523]
We consider the problem of building visual anomaly detection systems for mobile robots.
Standard anomaly detection models are trained using large datasets composed only of non-anomalous data.
We tackle the problem of exploiting the few anomalous samples that are available, in an outlier exposure setting, to improve the performance of a Real-NVP anomaly detection model.
arXiv Detail & Related papers (2022-09-20T15:18:13Z)
- Deep Anomaly Detection and Search via Reinforcement Learning [22.005663849044772]
We propose Deep Anomaly Detection and Search (DADS) to balance exploitation and exploration.
During the training process, DADS searches for possible anomalies with hierarchically-structured datasets.
Results show that DADS can efficiently and precisely search anomalies from unlabeled data and learn from them.
arXiv Detail & Related papers (2022-08-31T13:03:33Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
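A simplified sketch of the deviation-style objective that such networks use: scores of normal inputs are kept close to reference scores drawn from a Gaussian prior, while the few labeled anomalies are pushed several deviations above them. The scoring network and hyperparameters below are placeholders, not the paper's configuration.

```python
# Simplified sketch of a deviation-style loss: normal scores are pulled toward
# reference scores from a Gaussian prior; labeled anomalies are pushed at least
# `margin` deviations above them. Network and hyperparameters are placeholders.
import torch

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    """scores: (B,) raw anomaly scores; labels: (B,) with 1 = labeled anomaly."""
    ref = torch.randn(n_ref)                      # reference scores from N(0, 1) prior
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)
    normal_term = (1 - labels) * dev.abs()
    anomaly_term = labels * torch.clamp(margin - dev, min=0.0)
    return (normal_term + anomaly_term).mean()

# Minimal usage with a placeholder scoring network.
scorer = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                             torch.nn.Linear(32, 1))
x = torch.randn(8, 16)                            # mostly normal batch
y = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1.0])      # two labeled anomalies
loss = deviation_loss(scorer(x).squeeze(-1), y)
loss.backward()
```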
- Self-Trained One-class Classification for Unsupervised Anomaly Detection [56.35424872736276]
Anomaly detection (AD) has various applications across domains, from manufacturing to healthcare.
In this work, we focus on unsupervised AD problems whose entire training data are unlabeled and may contain both normal and anomalous samples.
To tackle this problem, we build a robust one-class classification framework via data refinement.
We show that our method outperforms a state-of-the-art one-class classification method by 6.3 AUC and 12.5 average precision points.
arXiv Detail & Related papers (2021-06-11T01:36:08Z)
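The data-refinement idea in the summary above can be illustrated with a small, generic sketch in which a classical one-class model stands in for the paper's deep framework: fit the model, drop the lowest-scoring fraction of the unlabeled training data as likely anomalies, and refit. The contamination rate, refinement fraction, and number of rounds are assumptions.

```python
# Generic sketch of iterative data refinement for unsupervised AD (the paper's
# actual framework is deep; here a OneClassSVM stands in for the model).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(950, 2)),       # mostly normal data
               rng.normal(5, 1, size=(50, 2))])       # mixed-in anomalies

def refine_and_fit(X, rounds=3, drop_frac=0.05):
    """Iteratively drop the lowest-scoring samples and refit the one-class model."""
    data = X
    model = OneClassSVM(nu=0.1, gamma="scale").fit(data)
    for _ in range(rounds):
        scores = model.decision_function(data)          # lower = more anomalous
        keep = scores > np.quantile(scores, drop_frac)  # refine the training set
        data = data[keep]
        model = OneClassSVM(nu=0.1, gamma="scale").fit(data)
    return model

model = refine_and_fit(X)
print((model.predict(X) == -1).mean())  # fraction of points flagged as anomalous
```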
- Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z)
- The Impact of Discretization Method on the Detection of Six Types of Anomalies in Datasets [0.0]
Anomaly detection is the process of identifying cases, or groups of cases, that are in some way unusual and do not fit the general patterns present in the dataset.
Numerous algorithms use discretization of numerical data in their detection processes.
This study investigates the effect of the discretization method on the unsupervised detection of each of the six anomaly types acknowledged in a recent typology of data anomalies.
arXiv Detail & Related papers (2020-08-27T18:43:55Z)
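To make the role of the discretization method concrete, the short sketch below contrasts equal-width and equal-frequency binning of a skewed numeric attribute; whether these are the exact methods compared in the paper is not stated in the summary, and the bin count and sample data are illustrative.

```python
# Illustrative comparison of two common discretization methods for a numeric
# attribute; the number of bins and the sample data are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
values = pd.Series(rng.exponential(scale=2.0, size=1000))  # skewed attribute

equal_width = pd.cut(values, bins=5)    # equal-width bins: sparse tails
equal_freq = pd.qcut(values, q=5)       # equal-frequency bins: balanced counts

# Skewed data fills equal-width bins very unevenly, which changes what an
# unsupervised detector operating on the discretized values can "see".
print(equal_width.value_counts().sort_index())
print(equal_freq.value_counts().sort_index())
```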