DeepFIB: Self-Imputation for Time Series Anomaly Detection
- URL: http://arxiv.org/abs/2112.06247v1
- Date: Sun, 12 Dec 2021 14:28:06 GMT
- Title: DeepFIB: Self-Imputation for Time Series Anomaly Detection
- Authors: Minhao Liu, Zhijian Xu, Qiang Xu
- Abstract summary: Time series anomaly detection (AD) plays an essential role in various applications, e.g., fraud detection in finance and healthcare monitoring.
We propose a novel self-supervised learning technique for AD in time series, namely DeepFIB.
We show that DeepFIB outperforms state-of-the-art methods by a large margin, achieving up to 65.2% relative improvement in F1-score.
- Score: 5.4921159672644775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series (TS) anomaly detection (AD) plays an essential role in various
applications, e.g., fraud detection in finance and healthcare monitoring. Due
to the inherently unpredictable and highly varied nature of anomalies and the
lack of anomaly labels in historical data, the AD problem is typically
formulated as an unsupervised learning problem. The performance of existing
solutions is often not satisfactory, especially in data-scarce scenarios. To
tackle this problem, we propose a novel self-supervised learning technique for
AD in time series, namely DeepFIB. We model the problem as a "Fill
In the Blank" game by masking some elements in the TS and imputing them with
the rest. Considering the two common anomaly shapes (point- or
sequence-outliers) in TS data, we implement two masking strategies with many
self-generated training samples. The corresponding self-imputation networks can
extract more robust temporal relations than existing AD solutions and
effectively facilitate identifying the two types of anomalies. For continuous
outliers, we also propose an anomaly localization algorithm that dramatically
reduces AD errors. Experiments on various real-world TS datasets demonstrate
that DeepFIB outperforms state-of-the-art methods by a large margin, achieving
up to 65.2% relative improvement in F1-score.
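To make the fill-in-the-blank idea concrete, the following is a minimal, self-contained sketch of imputation-based anomaly scoring. It is not the paper's implementation: DeepFIB trains dedicated self-imputation networks under its two masking strategies, whereas this toy replaces the learned imputer with linear interpolation, and the helper names (point_masks, sequence_masks, impute, anomaly_scores), mask counts, and segment lengths are illustrative assumptions.

```python
import numpy as np

def point_masks(length, n_rounds, n_parts, rng):
    """Each round splits a random permutation of the time steps into
    n_parts disjoint masks, so every position is hidden at least once
    (illustrative stand-in for a point-outlier masking strategy)."""
    masks = []
    for _ in range(n_rounds):
        for chunk in np.array_split(rng.permutation(length), n_parts):
            m = np.zeros(length, dtype=bool)
            m[chunk] = True
            masks.append(m)
    return masks

def sequence_masks(length, seg_len):
    """Each mask hides one contiguous segment (illustrative stand-in for
    a sequence-outlier masking strategy)."""
    return [np.arange(length) // seg_len == k for k in range(length // seg_len)]

def impute(ts, mask):
    """Toy imputer: linear interpolation over the masked positions.
    DeepFIB trains self-imputation networks for this step; interpolation
    is used here only to keep the sketch self-contained."""
    known = np.flatnonzero(~mask)
    return np.interp(np.arange(len(ts)), known, ts[known])

def anomaly_scores(ts, masks):
    """Score each time step by its mean imputation error over all masks
    that hide it; points the rest of the series cannot reconstruct well
    receive high scores."""
    err, cnt = np.zeros(len(ts)), np.zeros(len(ts))
    for m in masks:
        err[m] += np.abs(impute(ts, m)[m] - ts[m])
        cnt[m] += 1
    return err / np.maximum(cnt, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ts = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.standard_normal(500)
    ts[120] += 3.0  # inject a point outlier
    masks = point_masks(len(ts), n_rounds=3, n_parts=10, rng=rng) + \
            sequence_masks(len(ts), seg_len=25)
    scores = anomaly_scores(ts, masks)
    print("most anomalous index:", int(scores.argmax()))  # expected: 120
```

In the paper's setting, a learned self-imputation network would replace the `impute` placeholder, and the scores from the sequence-style masks would feed the proposed localization step for contiguous outliers.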
Related papers
- See it, Think it, Sorted: Large Multimodal Models are Few-shot Time Series Anomaly Analyzers [23.701716999879636]
Time series anomaly detection (TSAD) is becoming increasingly vital due to the rapid growth of time series data.
We introduce a pioneering framework called the Time Series Anomaly Multimodal Analyzer (TAMA) to enhance both the detection and interpretation of anomalies.
arXiv Detail & Related papers (2024-11-04T10:28:41Z)
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z)
- Unraveling the "Anomaly" in Time Series Anomaly Detection: A Self-supervised Tri-domain Solution [89.16750999704969]
Anomaly labels hinder traditional supervised models in time series anomaly detection.
Various SOTA deep learning techniques, such as self-supervised learning, have been introduced to tackle this issue.
We propose a novel self-supervised learning based Tri-domain Anomaly Detector (TriAD)
arXiv Detail & Related papers (2023-11-19T05:37:18Z)
- CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection [53.83593870825628]
One main challenge in time series anomaly detection (TSAD) is the lack of labelled data in many real-life scenarios.
Most of the existing anomaly detection methods focus on learning the normal behaviour of unlabelled time series in an unsupervised manner.
We introduce a novel end-to-end self-supervised ContrAstive Representation Learning approach for time series anomaly detection.
arXiv Detail & Related papers (2023-08-18T04:45:56Z)
- Are we certain it's anomalous? [57.729669157989235]
Anomaly detection in time series is a complex task since anomalies are rare due to highly non-linear temporal correlations.
Here we propose the novel use of Hyperbolic uncertainty for Anomaly Detection (HypAD)
HypAD learns to reconstruct the input signal in a self-supervised manner.
arXiv Detail & Related papers (2022-11-16T21:31:39Z)
- Data-Efficient and Interpretable Tabular Anomaly Detection [54.15249463477813]
We propose a novel framework that adapts a white-box model class, Generalized Additive Models, to detect anomalies.
In addition, the proposed framework, DIAD, can incorporate a small amount of labeled data to further boost anomaly detection performances in semi-supervised settings.
arXiv Detail & Related papers (2022-03-03T22:02:56Z)
- Memory-augmented Adversarial Autoencoders for Multivariate Time-series Anomaly Detection with Deep Reconstruction and Prediction [4.033624665609417]
We propose MemAAE, a novel unsupervised anomaly detection method for time-series.
By jointly training two complementary proxy tasks, reconstruction and prediction, we show that detecting anomalies via multiple tasks obtains superior performance.
MemAAE achieves an overall F1 score of 0.90 on four public datasets, significantly outperforming the best baseline by 0.02.
arXiv Detail & Related papers (2021-10-15T18:29:05Z)
- Time Series Anomaly Detection with label-free Model Selection [0.6303112417588329]
We propose LaF-AD, a novel anomaly detection algorithm with label-free model selection for unlabeled time-series data.
Our algorithm is easily parallelizable, more robust for ill-conditioned and seasonal data, and highly scalable for a large number of anomaly models.
arXiv Detail & Related papers (2021-06-11T00:21:06Z)
- TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks [73.01104041298031]
TadGAN is an unsupervised anomaly detection approach built on Generative Adversarial Networks (GANs)
To capture the temporal correlations of time series, we use LSTM Recurrent Neural Networks as base models for Generators and Critics.
To demonstrate the performance and generalizability of our approach, we test several anomaly scoring techniques and report the best-suited one.
arXiv Detail & Related papers (2020-09-16T15:52:04Z)
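The TadGAN entry above describes LSTM recurrent networks as the backbone of both the generator and the critic. Below is a minimal PyTorch sketch of that pairing; it only illustrates the stated architecture, not TadGAN's released code, and the class names, layer sizes, and window length are assumptions.

```python
import torch
import torch.nn as nn

class LSTMGenerator(nn.Module):
    """Maps a latent sequence to a reconstructed time-series window."""
    def __init__(self, latent_dim=20, hidden_dim=64, out_dim=1):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, z):            # z: (batch, seq_len, latent_dim)
        h, _ = self.lstm(z)
        return self.proj(h)          # (batch, seq_len, out_dim)

class LSTMCritic(nn.Module):
    """Scores how realistic a time-series window looks."""
    def __init__(self, in_dim=1, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, x):            # x: (batch, seq_len, in_dim)
        h, _ = self.lstm(x)
        return self.score(h[:, -1])  # one realism score per window

if __name__ == "__main__":
    g, c = LSTMGenerator(), LSTMCritic()
    z = torch.randn(8, 100, 20)      # a batch of latent sequences
    fake = g(z)                      # (8, 100, 1) generated windows
    print(c(fake).shape)             # torch.Size([8, 1])
```

A GAN-based detector of this kind typically turns reconstruction error and the critic's output into an anomaly score; per the summary above, the TadGAN authors evaluate several such scoring techniques and report the best-suited one.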
This list is automatically generated from the titles and abstracts of the papers in this site.