Hierarchical Semi-Supervised Contrastive Learning for
Contamination-Resistant Anomaly Detection
- URL: http://arxiv.org/abs/2207.11789v1
- Date: Sun, 24 Jul 2022 18:49:26 GMT
- Title: Hierarchical Semi-Supervised Contrastive Learning for
Contamination-Resistant Anomaly Detection
- Authors: Gaoang Wang, Yibing Zhan, Xinchao Wang, Mingli Song, Klara Nahrstedt
- Abstract summary: Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework for contamination-resistant anomaly detection.
- Score: 81.07346419422605
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Anomaly detection aims at identifying deviant samples from the normal data
distribution. Contrastive learning has provided a successful way to learn
sample representations that enable effective discrimination of anomalies.
However, when the training set is contaminated with unlabeled abnormal samples
under semi-supervised settings, current contrastive-based methods generally
1) ignore the comprehensive relations among the training data, leading to
suboptimal performance, and 2) require fine-tuning, resulting in low
efficiency. To address these two issues, we propose a novel hierarchical
semi-supervised contrastive learning (HSCL) framework for
contamination-resistant anomaly detection. Specifically, HSCL hierarchically
regulates three complementary relations: sample-to-sample, sample-to-prototype,
and normal-to-abnormal relations, enlarging the discrimination between normal
and abnormal samples with a comprehensive exploration of the contaminated data.
Besides, HSCL is an end-to-end learning approach that can efficiently learn
discriminative representations without fine-tuning. HSCL achieves
state-of-the-art performance in multiple scenarios, such as one-class
classification and cross-dataset detection. Extensive ablation studies further
verify the effectiveness of each considered relation. The code is available at
https://github.com/GaoangW/HSCL.
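As a rough illustration of how the three relations could translate into training objectives, the sketch below combines a sample-to-sample contrastive term, a sample-to-prototype term, and a normal-to-abnormal separation term. All function names, weights, and implementation details here are assumptions for exposition, not the authors' released code (see the linked repository for that).

```python
# Hypothetical sketch of combining three relation-level losses, loosely following
# the abstract's description. Names, weights, and loss forms are illustrative
# assumptions; refer to https://github.com/GaoangW/HSCL for the actual method.
import torch
import torch.nn.functional as F

def sample_to_sample_loss(z, temperature=0.1):
    """SimCLR-style contrastive loss over embeddings z of shape (2N, d),
    where rows i and i+N are two augmented views of the same sample."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                        # (2N, 2N) similarities
    n = z.size(0) // 2
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def sample_to_prototype_loss(z, prototypes, temperature=0.1):
    """Sharpen each embedding's assignment to its nearest (pseudo-)normal prototype."""
    z = F.normalize(z, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = z @ p.t() / temperature                     # (N, K) prototype similarities
    assign = logits.argmax(dim=1)                        # hard pseudo-assignment
    return F.cross_entropy(logits, assign)

def normal_to_abnormal_loss(z_normal, z_abnormal, margin=0.5):
    """Push pseudo-normal and pseudo-abnormal embeddings apart by a cosine margin."""
    z_normal = F.normalize(z_normal, dim=1)
    z_abnormal = F.normalize(z_abnormal, dim=1)
    cos = z_normal @ z_abnormal.t()                      # pairwise cosine similarity
    return F.relu(cos - (1.0 - margin)).mean()

def hscl_style_loss(z_views, z_batch, prototypes, z_norm, z_abn,
                    w1=1.0, w2=1.0, w3=1.0):
    # Weighted sum of the three relation-level terms (weights are assumptions).
    return (w1 * sample_to_sample_loss(z_views)
            + w2 * sample_to_prototype_loss(z_batch, prototypes)
            + w3 * normal_to_abnormal_loss(z_norm, z_abn))
```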
Related papers
- FUN-AD: Fully Unsupervised Learning for Anomaly Detection with Noisy Training Data [1.0650780147044159]
We propose a novel learning-based approach for fully unsupervised anomaly detection with unlabeled and potentially contaminated training data.
Our method is motivated by two observations: (i) pairwise feature distances between normal samples are, on average, likely to be smaller than those between anomalous or heterogeneous samples, and (ii) pairs of features that are mutually closest to each other are likely to be homogeneous pairs.
Building on the first observation that nearest-neighbor distances can distinguish between confident normal samples and anomalies, we propose a pseudo-labeling strategy using an iteratively reconstructed memory bank.
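A minimal sketch of this nearest-neighbor pseudo-labeling idea, assuming a fixed feature bank and a simple quantile threshold (the iterative memory-bank reconstruction of FUN-AD is omitted; names and defaults are illustrative, not the paper's code):

```python
# Illustrative pseudo-labeling by nearest-neighbor distance over a feature
# bank of shape (N, d); smaller k-NN distances are treated as confident normals.
import torch

def pseudo_label_by_knn(features, k=5, normal_quantile=0.5):
    f = torch.nn.functional.normalize(features, dim=1)
    dist = torch.cdist(f, f)                               # (N, N) pairwise distances
    dist.fill_diagonal_(float('inf'))                      # ignore self-distance
    knn_dist = dist.topk(k, largest=False).values.mean(1)  # mean distance to k nearest
    threshold = knn_dist.quantile(normal_quantile)
    is_confident_normal = knn_dist <= threshold            # small distance -> likely normal
    return is_confident_normal, knn_dist
```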
arXiv Detail & Related papers (2024-11-25T05:51:38Z)
- Fine-grained Abnormality Prompt Learning for Zero-shot Anomaly Detection [88.34095233600719]
FAPrompt is a novel framework designed to learn Fine-grained Abnormality Prompts for more accurate ZSAD.
It substantially outperforms state-of-the-art methods by at least 3%-5% AUC/AP in both image- and pixel-level ZSAD tasks.
arXiv Detail & Related papers (2024-10-14T08:41:31Z)
- Unlearnable Examples Detection via Iterative Filtering [84.59070204221366]
Deep neural networks have been shown to be vulnerable to data poisoning attacks.
Detecting poisoned samples in a mixed dataset is both beneficial and challenging.
We propose an Iterative Filtering approach for identifying unlearnable examples (UEs).
arXiv Detail & Related papers (2024-08-15T13:26:13Z)
- An Iterative Method for Unsupervised Robust Anomaly Detection Under Data Contamination [24.74938110451834]
Most deep anomaly detection models learn normality from a training dataset assumed to contain only normal samples.
In practice, this normality assumption is often violated due to the nature of real data distributions.
We propose a learning framework that reduces this gap and achieves a better normality representation.
arXiv Detail & Related papers (2023-09-18T02:36:19Z)
- RoSAS: Deep Semi-Supervised Anomaly Detection with Contamination-Resilient Continuous Supervision [21.393509817509464]
This paper proposes a novel semi-supervised anomaly detection method, which devises contamination-resilient continuous supervisory signals.
Our approach significantly outperforms state-of-the-art competitors by 20%-30% in AUC-PR.
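One generic way to realize continuous supervisory signals, shown here purely as an illustrative assumption rather than RoSAS's exact mechanism, is to interpolate labeled anomalies with unlabeled samples in feature space and supervise the anomaly score with the interpolation weight:

```python
# Assumption-laden sketch of mixup-style continuous supervision over flattened
# feature vectors of shape (B, D); not necessarily the paper's actual design.
import torch
import torch.nn.functional as F

def continuous_supervision_loss(scorer, x_unlabeled, x_anomaly):
    """scorer maps a (B, D) batch to scalar anomaly scores in [0, 1]."""
    lam = torch.rand(x_unlabeled.size(0), 1, device=x_unlabeled.device)
    x_mix = lam * x_anomaly + (1.0 - lam) * x_unlabeled  # interpolated inputs
    target = lam.squeeze(1)                              # continuous target in [0, 1]
    score = scorer(x_mix).squeeze(-1)
    return F.mse_loss(score, target)                     # regress score onto mixing weight
```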
arXiv Detail & Related papers (2023-07-25T04:04:49Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses to universal adversarial perturbations (UAPs) between normal and adversarial samples.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- Deep Contrastive One-Class Time Series Anomaly Detection [15.27593816198766]
The authors propose a Contrastive One-Class Anomaly detection method for time series (COCA).
It treats the original and reconstructed representations as the positive pair of negative-sample-free contrastive learning, termed "sequence contrast".
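A minimal sketch of such a negative-sample-free "sequence contrast" objective, assuming generic encoder/decoder modules; the names and the cosine-similarity form are illustrative assumptions, not the authors' code:

```python
# Sketch: the representation of a time-series window and the representation of
# its reconstruction form the positive pair; no negative samples are used.
import torch
import torch.nn.functional as F

def sequence_contrast_loss(encoder, decoder, x):
    z = encoder(x)                        # representation of the original sequence
    x_rec = decoder(z)                    # reconstruction of the sequence
    z_rec = encoder(x_rec)                # representation of the reconstruction
    z = F.normalize(z, dim=-1)
    z_rec = F.normalize(z_rec, dim=-1)
    return 1.0 - (z * z_rec).sum(dim=-1).mean()   # maximize cosine similarity of the pair
```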
arXiv Detail & Related papers (2022-07-04T15:08:06Z)
- Normality-Calibrated Autoencoder for Unsupervised Anomaly Detection on Data Contamination [4.547161155818913]
The Normality-Calibrated Autoencoder (NCAE) can boost anomaly detection performance on contaminated datasets.
NCAE adversarially generates high-confidence normal samples from a low-entropy latent space.
arXiv Detail & Related papers (2021-10-28T00:23:01Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
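A brief sketch of a deviation-style objective in the spirit of the entry above, where anomaly scores are standardized against a Gaussian prior and labeled anomalies are pushed beyond a margin; the margin value and reference-sample count are illustrative assumptions rather than the paper's settings:

```python
# Sketch of a deviation-style loss: normal samples score near the prior mean,
# labeled anomalies deviate by at least a margin. Hyperparameters are assumptions.
import torch

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    """scores: (B,) raw anomaly scores; labels: (B,) with 1 = labeled anomaly."""
    ref = torch.randn(n_ref, device=scores.device)             # reference scores from N(0, 1) prior
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)            # standardized deviation
    labels = labels.float()
    loss_normal = (1.0 - labels) * dev.abs()                    # keep normals near the prior mean
    loss_anomaly = labels * torch.clamp(margin - dev, min=0.0)  # push anomalies above the margin
    return (loss_normal + loss_anomaly).mean()
```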