DSV: An Alignment Validation Loss for Self-supervised Outlier Model Selection
- URL: http://arxiv.org/abs/2307.06534v1
- Date: Thu, 13 Jul 2023 02:45:29 GMT
- Title: DSV: An Alignment Validation Loss for Self-supervised Outlier Model Selection
- Authors: Jaemin Yoo, Yue Zhao, Lingxiao Zhao, and Leman Akoglu
- Abstract summary: Self-supervised learning (SSL) has proven effective in solving various problems by generating internal supervisory signals.
Unsupervised anomaly detection, which faces the high cost of obtaining true labels, is an area that can greatly benefit from SSL.
We propose DSV (Discordance and Separability Validation), an unsupervised validation loss to select high-performing detection models with effective augmentation HPs.
- Score: 23.253175824487652
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Self-supervised learning (SSL) has proven effective in solving various
problems by generating internal supervisory signals. Unsupervised anomaly
detection, which faces the high cost of obtaining true labels, is an area that
can greatly benefit from SSL. However, recent literature suggests that tuning
the hyperparameters (HPs) of data augmentation functions is crucial to the
success of SSL-based anomaly detection (SSAD), yet a systematic method for
doing so remains unknown. In this work, we propose DSV (Discordance and
Separability Validation), an unsupervised validation loss to select
high-performing detection models with effective augmentation HPs. DSV captures
the alignment between an augmentation function and the anomaly-generating
mechanism with surrogate losses, which approximate the discordance and
separability of test data, respectively. As a result, the evaluation via DSV
leads to selecting an effective SSAD model exhibiting better alignment, which
results in high detection accuracy. We theoretically derive the degree of
approximation conducted by the surrogate losses and empirically show that DSV
outperforms a wide range of baselines on 21 real-world tasks.
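The selection procedure described in the abstract can be illustrated with a toy sketch: each candidate detector (one per augmentation HP setting) is scored by an unsupervised loss combining two surrogate terms. Note the specific proxies below (a median-based discordance proxy and a largest-gap separability proxy) are illustrative assumptions, not the paper's actual surrogate losses.

```python
import numpy as np

def discordance(scores_orig, scores_aug):
    # Surrogate for alignment: augmented (pseudo-anomalous) samples should
    # score above typical unmodified samples. Illustrative proxy: fraction
    # of augmented scores exceeding the median original score.
    return float(np.mean(scores_aug > np.median(scores_orig)))

def separability(scores_test):
    # Surrogate for how cleanly the test scores split into two groups:
    # largest gap in the sorted scores, normalized by their spread.
    s = np.sort(scores_test)
    return float(np.diff(s).max() / (s.std() + 1e-8))

def dsv(scores_orig, scores_aug, scores_test):
    # Lower is better: negate the surrogates so that stronger alignment
    # (high discordance, high separability) yields a smaller loss.
    return -(discordance(scores_orig, scores_aug)
             + separability(scores_test))

def select_model(candidates):
    # candidates: iterable of (name, scores_orig, scores_aug, scores_test),
    # one tuple per detector trained with a different augmentation HP.
    losses = {name: dsv(o, a, t) for name, o, a, t in candidates}
    return min(losses, key=losses.get)
```

The candidate with the smallest loss is selected, mirroring the idea that better alignment between the augmentation function and the anomaly-generating mechanism should yield both higher pseudo-anomaly discordance and more separable test scores.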
Related papers
- Deep Learning for Network Anomaly Detection under Data Contamination: Evaluating Robustness and Mitigating Performance Degradation [0.0]
Deep learning (DL) has emerged as a crucial tool in network anomaly detection (NAD) for cybersecurity.
While DL models for anomaly detection excel at extracting features and learning patterns from data, they are vulnerable to data contamination.
This study evaluates the robustness of six unsupervised DL algorithms against data contamination.
arXiv Detail & Related papers (2024-07-11T19:47:37Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- Boosting Transformer's Robustness and Efficacy in PPG Signal Artifact Detection with Self-Supervised Learning [0.0]
This study addresses the underutilization of abundant unlabeled data by employing self-supervised learning (SSL) to extract latent features from this data.
Our experiments demonstrate that SSL significantly enhances the Transformer model's ability to learn representations.
This approach holds promise for broader applications in PICU environments, where annotated data is often limited.
arXiv Detail & Related papers (2024-01-02T04:00:48Z)
- ADT: Agent-based Dynamic Thresholding for Anomaly Detection [4.356615197661274]
We propose an agent-based dynamic thresholding (ADT) framework based on a deep Q-network.
An auto-encoder is utilized in this study to obtain feature representations and produce anomaly scores for complex input data.
ADT can adjust thresholds adaptively by utilizing the anomaly scores from the auto-encoder.
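The adaptive-thresholding idea can be sketched without the reinforcement-learning machinery. The sliding-window quantile with exponential smoothing below is a simplified stand-in of my own; ADT itself learns the adjustment policy with a deep Q-network.

```python
from collections import deque
import numpy as np

def adaptive_flags(scores, window=50, q=0.95, alpha=0.2):
    # Simplified stand-in for ADT's agent: the threshold tracks an
    # exponentially smoothed high quantile over a sliding window of
    # recent anomaly scores (e.g. produced by an auto-encoder).
    buf = deque(maxlen=window)
    thr = None
    flags = []
    for s in scores:
        buf.append(s)
        target = np.quantile(list(buf), q)
        thr = target if thr is None else (1 - alpha) * thr + alpha * target
        flags.append(bool(s > thr))
    return flags
```

A score stream with a sudden spike gets flagged while the threshold keeps tracking the ambient score level afterwards, which is the behavior a fixed threshold cannot provide.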
arXiv Detail & Related papers (2023-12-03T19:07:30Z)
- Self-Supervision for Tackling Unsupervised Anomaly Detection: Pitfalls and Opportunities [50.231837687221685]
Self-supervised learning (SSL) has transformed machine learning and its many real world applications.
Unsupervised anomaly detection (AD) has also capitalized on SSL, by self-generating pseudo-anomalies.
arXiv Detail & Related papers (2023-08-28T07:55:01Z)
- Causal Disentanglement Hidden Markov Model for Fault Diagnosis [55.90917958154425]
We propose a Causal Disentanglement Hidden Markov model (CDHM) to learn the causality in the bearing fault mechanism.
Specifically, we make full use of the time-series data and progressively disentangle the vibration signal into fault-relevant and fault-irrelevant factors.
To expand the scope of the application, we adopt unsupervised domain adaptation to transfer the learned disentangled representations to other working environments.
arXiv Detail & Related papers (2023-08-06T05:58:45Z)
- End-to-End Augmentation Hyperparameter Tuning for Self-Supervised Anomaly Detection [21.97856757574274]
We introduce ST-SSAD (Self-Tuning Self-Supervised Anomaly Detection), the first systematic approach to tuning augmentation.
We show that tuning augmentation offers significant performance gains over current practices.
arXiv Detail & Related papers (2023-06-21T05:48:51Z)
- Confidence Attention and Generalization Enhanced Distillation for Continuous Video Domain Adaptation [62.458968086881555]
Continuous Video Domain Adaptation (CVDA) is a scenario where a source model is required to adapt to a series of individually available changing target domains.
We propose a Confidence-Attentive network with geneRalization enhanced self-knowledge disTillation (CART) to address the challenge in CVDA.
arXiv Detail & Related papers (2023-03-18T16:40:10Z)
- Data Augmentation is a Hyperparameter: Cherry-picked Self-Supervision for Unsupervised Anomaly Detection is Creating the Illusion of Success [30.409069707518466]
Self-supervised learning (SSL) has emerged as a promising alternative for creating supervisory signals for real-world problems.
Recent works have reported that the type of augmentation has a significant impact on accuracy.
This work sets out to put image-based SSAD under a larger lens and investigate the role of data augmentation in SSAD.
arXiv Detail & Related papers (2022-08-16T13:09:25Z)
- Anomaly Detection Based on Selection and Weighting in Latent Space [73.01328671569759]
We propose a novel selection-and-weighting-based anomaly detection framework called SWAD.
Experiments on both benchmark and real-world datasets have shown the effectiveness and superiority of SWAD.
arXiv Detail & Related papers (2021-03-08T10:56:38Z)
- SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection [63.253850875265115]
Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples.
We propose a modular acceleration system, called SUOD, to speed up large-scale heterogeneous outlier detection.
arXiv Detail & Related papers (2020-03-11T00:22:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.