End-to-End Augmentation Hyperparameter Tuning for Self-Supervised
Anomaly Detection
- URL: http://arxiv.org/abs/2306.12033v1
- Date: Wed, 21 Jun 2023 05:48:51 GMT
- Title: End-to-End Augmentation Hyperparameter Tuning for Self-Supervised
Anomaly Detection
- Authors: Jaemin Yoo, Lingxiao Zhao, and Leman Akoglu
- Abstract summary: We introduce ST-SSAD (Self-Tuning Self-Supervised Anomaly Detection), the first systematic approach to tuning augmentation.
We show that tuning augmentation offers significant performance gains over current practices.
- Score: 21.97856757574274
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Self-supervised learning (SSL) has emerged as a promising paradigm that
presents self-generated supervisory signals to real-world problems, bypassing
the extensive manual labeling burden. SSL is especially attractive for
unsupervised tasks such as anomaly detection, where labeled anomalies are often
nonexistent and costly to obtain. While self-supervised anomaly detection
(SSAD) has seen a recent surge of interest, the literature has failed to treat
data augmentation as a hyperparameter. Meanwhile, recent works have reported
that the choice of augmentation has significant impact on detection
performance. In this paper, we introduce ST-SSAD (Self-Tuning Self-Supervised
Anomaly Detection), the first systematic approach to SSAD with regard to
rigorously tuning augmentation. To this end, our work presents two key
contributions. The first is a new unsupervised validation loss that quantifies
the alignment between the augmented training data and the (unlabeled) test
data. In principle we adopt transduction, quantifying the extent to which
augmentation mimics the true anomaly-generating mechanism, in contrast to
augmenting data with arbitrary pseudo anomalies without regard to test data.
Second, we present new differentiable augmentation functions, allowing data
augmentation hyperparameter(s) to be tuned end-to-end via our proposed
validation loss. Experiments on two testbeds with semantic class anomalies and
subtle industrial defects show that systematically tuning augmentation offers
significant performance gains over current practices.
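The end-to-end tuning idea described in the abstract can be illustrated with a toy sketch. Everything below is a hypothetical stand-in rather than the paper's actual method: the augmentation is a single additive shift `theta`, the "validation loss" is a crude first-moment alignment between the augmented training data and the unlabeled test data, and the gradient is taken numerically instead of through a differentiable augmentation function.

```python
import random

random.seed(0)

# Toy illustration: tune one augmentation hyperparameter (an additive
# shift, theta) so that augmented training data aligns with the unlabeled
# test data. All components here are hypothetical stand-ins.

train = [random.gauss(0.0, 1.0) for _ in range(2000)]              # normal samples
true_shift = 3.0                                                   # unknown anomaly-generating mechanism
test = [random.gauss(0.0, 1.0) + true_shift for _ in range(2000)]  # unlabeled test data

def augment(data, theta):
    """Augmentation with one hyperparameter: shift every sample by theta."""
    return [x + theta for x in data]

def alignment_loss(theta):
    """Proxy validation loss: squared gap between the first moments of the
    augmented training set and the test set."""
    mu_aug = sum(augment(train, theta)) / len(train)
    mu_test = sum(test) / len(test)
    return (mu_aug - mu_test) ** 2

# Tune theta by gradient descent on the validation loss; the gradient is
# computed numerically here, whereas the paper's differentiable
# augmentation functions would allow exact backpropagation.
theta, lr, eps = 0.0, 0.2, 1e-4
for _ in range(200):
    grad = (alignment_loss(theta + eps) - alignment_loss(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(f"tuned shift: {theta:.2f} (true anomaly shift: {true_shift})")
```

In this toy setting the tuned shift converges near the true anomaly shift, mirroring the transductive intuition: the augmentation hyperparameter is chosen to mimic the anomaly-generating mechanism observed in the test data, rather than being fixed a priori.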
Related papers
- Multiple Descents in Unsupervised Learning: The Role of Noise, Domain Shift and Anomalies [14.399035468023161]
We study the presence of double descent in unsupervised learning, an area that has received little attention and is not yet fully understood.
We use synthetic and real data and identify model-wise, epoch-wise, and sample-wise double descent for various applications.
arXiv Detail & Related papers (2024-06-17T16:24:23Z)
- End-To-End Self-tuning Self-supervised Time Series Anomaly Detection [32.746688248671084]
Time series anomaly detection (TSAD) finds many applications, such as monitoring environmental sensors, industry KPIs, patient biomarkers, etc.
A two-fold challenge for TSAD is building a versatile and unsupervised model that can detect various different types of time series anomalies.
We introduce TSAP for TSAD "on autoPilot", which can (self-)tune hyperparameters end-to-end.
arXiv Detail & Related papers (2024-04-03T16:57:26Z)
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs for many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
- Generating and Reweighting Dense Contrastive Patterns for Unsupervised Anomaly Detection [59.34318192698142]
We introduce a prior-less anomaly generation paradigm and develop an innovative unsupervised anomaly detection framework named GRAD.
PatchDiff effectively exposes various types of anomaly patterns.
Experiments on both the MVTec AD and MVTec LOCO datasets also support the aforementioned observation.
arXiv Detail & Related papers (2023-12-26T07:08:06Z)
- RoSAS: Deep Semi-Supervised Anomaly Detection with Contamination-Resilient Continuous Supervision [21.393509817509464]
This paper proposes a novel semi-supervised anomaly detection method, which devises contamination-resilient continuous supervisory signals.
Our approach significantly outperforms state-of-the-art competitors by 20%-30% in AUC-PR.
arXiv Detail & Related papers (2023-07-25T04:04:49Z)
- DSV: An Alignment Validation Loss for Self-supervised Outlier Model Selection [23.253175824487652]
Self-supervised learning (SSL) has proven effective in solving various problems by generating internal supervisory signals.
Unsupervised anomaly detection, which faces the high cost of obtaining true labels, is an area that can greatly benefit from SSL.
We propose DSV (Discordance and Separability Validation), an unsupervised validation loss to select high-performing detection models with effective augmentation HPs.
arXiv Detail & Related papers (2023-07-13T02:45:29Z)
- Data Augmentation is a Hyperparameter: Cherry-picked Self-Supervision for Unsupervised Anomaly Detection is Creating the Illusion of Success [30.409069707518466]
Self-supervised learning (SSL) has emerged as a promising alternative that creates supervisory signals for real-world problems.
Recent works have reported that the type of augmentation has a significant impact on accuracy.
This work sets out to put image-based SSAD under a larger lens and investigate the role of data augmentation in SSAD.
arXiv Detail & Related papers (2022-08-16T13:09:25Z)
- SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z)
- Anomaly Detection Based on Selection and Weighting in Latent Space [73.01328671569759]
We propose a novel selection-and-weighting-based anomaly detection framework called SWAD.
Experiments on both benchmark and real-world datasets have shown the effectiveness and superiority of SWAD.
arXiv Detail & Related papers (2021-03-08T10:56:38Z)
- ESAD: End-to-end Deep Semi-supervised Anomaly Detection [85.81138474858197]
We propose a new objective function that measures the KL-divergence between normal and anomalous data.
The proposed method significantly outperforms several state-of-the-arts on multiple benchmark datasets.
arXiv Detail & Related papers (2020-12-09T08:16:35Z)
- SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection [63.253850875265115]
Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples.
We propose a modular acceleration system, called SUOD, to address it.
arXiv Detail & Related papers (2020-03-11T00:22:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.