Deep Contrastive One-Class Time Series Anomaly Detection
- URL: http://arxiv.org/abs/2207.01472v3
- Date: Sun, 16 Apr 2023 08:57:57 GMT
- Title: Deep Contrastive One-Class Time Series Anomaly Detection
- Authors: Rui Wang, Chongwei Liu, Xudong Mou, Kai Gao, Xiaohui Guo, Pin Liu,
Tianyu Wo, Xudong Liu
- Abstract summary: The authors propose COCA, a deep Contrastive One-Class Anomaly detection method for time series.
It treats the original and reconstructed representations as the positive pair of negative-sample-free CL, termed "sequence contrast".
- Score: 15.27593816198766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The accumulation of time-series data and the absence of labels make time-series Anomaly Detection (AD) a self-supervised deep learning task. Single-normality-assumption-based methods, which reveal only one aspect of the whole normality, cannot handle tasks that involve a large number of anomalies. Specifically, Contrastive Learning (CL) methods push negative pairs apart, yet many of these pairs consist of two normal samples, which degrades AD performance. Existing multi-normality-assumption-based methods are usually two-staged: they first pre-train on tasks whose targets may differ from AD, which limits their performance. To overcome these shortcomings, the authors propose COCA, a deep Contrastive One-Class Anomaly detection method for time series that follows the normality assumptions of both CL and one-class classification. It treats the original and reconstructed representations as the positive pair of negative-sample-free CL, termed "sequence contrast". A contrastive one-class loss function is then composed of invariance terms, which simultaneously optimize the losses of both assumptions, and variance terms, which prevent "hypersphere collapse". Extensive experiments on two real-world time-series datasets show that the proposed method achieves state-of-the-art performance.
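The invariance-plus-variance structure of the loss can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch rendering of a COCA-style objective, not the authors' implementation; the tensor names, the cosine-similarity form of the invariance term, and the weighting `lam` are assumptions.

```python
# Hypothetical sketch of a COCA-style contrastive one-class loss.
# z: representations of the original sequences; z_rec: representations of
# their reconstructions -- the negative-sample-free positive pair.
import torch
import torch.nn.functional as F

def coca_style_loss(z, z_rec, center, lam=1.0, eps=1e-4):
    # Invariance: pull both members of the "sequence contrast" pair toward
    # the one-class center, optimizing the CL and one-class assumptions at once.
    sim = F.cosine_similarity(z, center.expand_as(z), dim=-1) + \
          F.cosine_similarity(z_rec, center.expand_as(z_rec), dim=-1)
    invariance = (2.0 - sim).mean()

    # Variance: hinge the per-dimension standard deviation of the batch at 1
    # so the representations cannot all collapse onto the center
    # ("hypersphere collapse").
    std = torch.sqrt(torch.cat([z, z_rec]).var(dim=0) + eps)
    variance = F.relu(1.0 - std).mean()
    return invariance + lam * variance
```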
Related papers
- FUN-AD: Fully Unsupervised Learning for Anomaly Detection with Noisy Training Data [1.0650780147044159]
We propose a novel learning-based approach for fully unsupervised anomaly detection with unlabeled and potentially contaminated training data.
Our method is motivated by two observations: (i) pairwise feature distances between normal samples are, on average, likely to be smaller than those between anomalous or heterogeneous samples, and (ii) pairs of features that are mutually closest to each other are likely to be homogeneous.
Building on the first observation that nearest-neighbor distances can distinguish between confident normal samples and anomalies, we propose a pseudo-labeling strategy using an iteratively reconstructed memory bank.
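As a concrete illustration, a nearest-neighbour pseudo-labelling step of this kind might look like the following sketch; the value of `k`, the quantile threshold, and the function name are assumptions, and the iterative memory-bank reconstruction is omitted.

```python
# Hypothetical sketch of nearest-neighbour-based pseudo-labelling on an
# unlabelled, possibly contaminated training set.
import torch

def pseudo_label(features, k=5, quantile=0.5):
    # features: (N, D) embeddings. Samples whose mean distance to their k
    # nearest neighbours is small are treated as confident normals; the rest
    # remain unlabelled candidate anomalies.
    d = torch.cdist(features, features)      # (N, N) pairwise distances
    d.fill_diagonal_(float("inf"))           # ignore self-distance
    knn_dist = d.topk(k, largest=False).values.mean(dim=1)  # (N,)
    threshold = knn_dist.quantile(quantile)
    return knn_dist <= threshold             # boolean mask of confident normals
```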
arXiv Detail & Related papers (2024-11-25T05:51:38Z)
- Fine-grained Abnormality Prompt Learning for Zero-shot Anomaly Detection [88.34095233600719]
FAPrompt is a novel framework designed to learn Fine-grained Abnormality Prompts for more accurate ZSAD.
It substantially outperforms state-of-the-art methods by at least 3%-5% AUC/AP in both image- and pixel-level ZSAD tasks.
arXiv Detail & Related papers (2024-10-14T08:41:31Z)
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con$_2$, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z)
- CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection [53.83593870825628]
One main challenge in time series anomaly detection (TSAD) is the lack of labelled data in many real-life scenarios.
Most of the existing anomaly detection methods focus on learning the normal behaviour of unlabelled time series in an unsupervised manner.
We introduce a novel end-to-end self-supervised ContrAstive Representation Learning approach for time series anomaly detection.
arXiv Detail & Related papers (2023-08-18T04:45:56Z)
- Prediction Error-based Classification for Class-Incremental Learning [39.91805363069707]
We introduce Prediction Error-based Classification (PEC).
PEC computes a class score by measuring the prediction error of a model trained to replicate the outputs of a frozen random neural network on data from that class.
PEC offers several practical advantages, including sample efficiency, ease of tuning, and effectiveness even when data are presented one class at a time.
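A minimal sketch of the scoring rule follows, assuming an MLP student and squared-error replication; the paper's exact architecture and norm may differ.

```python
# Hypothetical PEC-style scoring sketch. One student network per class is
# trained (elsewhere) to replicate a frozen, randomly initialized teacher on
# that class's data only.
import torch
import torch.nn as nn

def make_mlp(d_in, d_out, hidden=128):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                         nn.Linear(hidden, d_out))

@torch.no_grad()
def pec_score(x, student, frozen_teacher):
    # A small replication error means x likely comes from the student's
    # class; classification picks the class whose student scores highest.
    err = (student(x) - frozen_teacher(x)).pow(2).sum(dim=-1)
    return -err
```

Because each student only ever sees its own class's data, no negative examples are needed during training, which is consistent with the method remaining effective when classes arrive one at a time.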
arXiv Detail & Related papers (2023-05-30T07:43:35Z)
- Time-series Anomaly Detection via Contextual Discriminative Contrastive Learning [0.0]
One-class classification methods are commonly used for anomaly detection tasks.
We propose a novel approach inspired by the loss function of DeepSVDD.
We combine our approach with a deterministic contrastive loss from Neutral AD, a promising self-supervised learning anomaly detection approach.
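For reference, the Deep SVDD objective that this line of work takes inspiration from can be sketched as follows; this is the plain one-class variant without the soft boundary, a sketch rather than the authors' code.

```python
# Sketch of the basic Deep SVDD one-class objective.
import torch

def deep_svdd_loss(z, center):
    # z: (B, D) encoder outputs; center: (D,) fixed hypersphere center,
    # typically set from initial representations so the trivial constant
    # solution is avoided. The distance to the center doubles as the
    # test-time anomaly score.
    return (z - center).pow(2).sum(dim=-1).mean()
```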
arXiv Detail & Related papers (2023-04-16T21:36:19Z)
- Hierarchical Semi-Supervised Contrastive Learning for Contamination-Resistant Anomaly Detection [81.07346419422605]
Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework, for contamination-resistant anomaly detection.
arXiv Detail & Related papers (2022-07-24T18:49:26Z)
- Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy [68.86835407617778]
Anomaly Transformer achieves state-of-the-art performance on six unsupervised time series anomaly detection benchmarks.
arXiv Detail & Related papers (2021-10-06T10:33:55Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
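The deviation-network idea behind this framework can be sketched roughly as follows; the Gaussian reference prior and the margin value are assumptions carried over from the deviation-loss literature, not necessarily this paper's exact settings.

```python
# Rough sketch of a deviation-style loss for weakly-supervised AD.
import torch

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    # scores: (B,) predicted anomaly scores; labels: (B,) floats,
    # 1.0 = labelled anomaly, 0.0 = presumed normal.
    # Reference scores drawn from a standard-normal prior define what a
    # "normal" score should look like.
    ref = torch.randn(n_ref, device=scores.device)
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)
    # Normal samples are pushed toward zero deviation; labelled anomalies
    # are pushed to deviate by at least `margin`.
    loss = (1.0 - labels) * dev.abs() + labels * torch.clamp(margin - dev, min=0.0)
    return loss.mean()
```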
arXiv Detail & Related papers (2021-08-01T14:33:17Z)