Monte Carlo EM for Deep Time Series Anomaly Detection
- URL: http://arxiv.org/abs/2112.14436v1
- Date: Wed, 29 Dec 2021 07:52:36 GMT
- Title: Monte Carlo EM for Deep Time Series Anomaly Detection
- Authors: François-Xavier Aubet, Daniel Zügner, Jan Gasthaus
- Abstract summary: Time series data are often corrupted by outliers or other kinds of anomalies.
Recent approaches to anomaly detection and forecasting assume that the proportion of anomalies in the training data is small enough to ignore.
We present a technique for augmenting existing time series models so that they explicitly account for anomalies in the training data.
- Score: 6.312089019297173
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series data are often corrupted by outliers or other kinds of anomalies.
Identifying the anomalous points can be a goal on its own (anomaly detection),
or a means to improving performance of other time series tasks (e.g.
forecasting). Recent deep-learning-based approaches to anomaly detection and
forecasting commonly assume that the proportion of anomalies in the training
data is small enough to ignore, and treat the unlabeled data as coming from the
nominal data distribution. We present a simple yet effective technique for
augmenting existing time series models so that they explicitly account for
anomalies in the training data. By augmenting the training data with a latent
anomaly indicator variable whose distribution is inferred while training the
underlying model using Monte Carlo EM, our method simultaneously infers
anomalous points while improving model performance on nominal data. We
demonstrate the effectiveness of the approach by combining it with a simple
feed-forward forecasting model. We investigate how anomalies in the training set
affect the training of forecasting models, which are commonly used for time
series anomaly detection, and show that our method improves the training of the
model.
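The latent-indicator scheme in the abstract can be sketched in a toy form. In the sketch below, each time step gets a latent Bernoulli anomaly indicator: the E-step computes its posterior under a nominal predictive Gaussian versus a broad anomaly distribution, a Monte Carlo step samples the indicators, and the M-step refits the forecaster on the points sampled as nominal. The linear AR(1) forecaster, the broad Gaussian anomaly likelihood, and all parameter values are illustrative assumptions, not the paper's exact model:

```python
# Monte Carlo EM with a latent per-point anomaly indicator (toy sketch).
import numpy as np

rng = np.random.default_rng(0)

def fit_forecaster(x, weights):
    # Weighted least squares for a one-step AR(1) predictor x_t ~ a*x_{t-1} + b.
    X = np.stack([x[:-1], np.ones(len(x) - 1)], axis=1)
    y, w = x[1:], weights[1:]
    coef, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return coef

def mc_em_anomaly(x, n_iters=20, prior_anom=0.05, anom_scale=10.0):
    w = np.ones(len(x))  # start by treating every point as nominal
    for _ in range(n_iters):
        a, b = fit_forecaster(x, w)
        resid = x[1:] - (a * x[:-1] + b)
        sigma = np.sqrt(np.average(resid**2, weights=w[1:]) + 1e-12)
        # E-step: posterior anomaly probability per point, comparing a nominal
        # N(0, sigma^2) residual model against a broad anomaly distribution.
        log_nom = -0.5 * (resid / sigma) ** 2 - np.log(sigma)
        log_anom = -0.5 * (resid / (anom_scale * sigma)) ** 2 - np.log(anom_scale * sigma)
        logit = np.log(prior_anom) - np.log(1 - prior_anom) + log_anom - log_nom
        p_anom = 1.0 / (1.0 + np.exp(-logit))
        # Monte Carlo step: sample the latent indicators.
        z = rng.random(len(p_anom)) < p_anom
        # M-step: refit on points sampled as nominal (weight 0 for anomalies).
        w = np.ones(len(x))
        w[1:][z] = 0.0
    return p_anom  # posterior anomaly probability for x[1:]

# Clean AR(1) series with injected spikes at known positions.
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.9 * x[t - 1] + rng.normal(scale=0.1)
x[[50, 150, 250]] += 5.0
p = mc_em_anomaly(x)
```

Because the spikes are excluded from the M-step fit, the nominal residual scale shrinks back toward the clean-data value, which is the "improving model performance on nominal data" effect the abstract describes.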
Related papers
- Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
arXiv Detail & Related papers (2024-09-10T07:34:19Z)
- RobustTSF: Towards Theory and Design of Robust Time Series Forecasting with Anomalies [28.59935971037066]
We develop methods to automatically learn a robust forecasting model from contaminated data.
Based on our analyses, we propose a simple and efficient algorithm to learn a robust forecasting model.
arXiv Detail & Related papers (2024-02-03T05:13:09Z)
- Data Attribution for Diffusion Models: Timestep-induced Bias in Influence Estimation [53.27596811146316]
Diffusion models operate over a sequence of timesteps instead of instantaneous input-output relationships in previous contexts.
We present Diffusion-TracIn that incorporates this temporal dynamics and observe that samples' loss gradient norms are highly dependent on timestep.
We introduce Diffusion-ReTrac as a re-normalized adaptation that enables the retrieval of training samples more targeted to the test sample of interest.
arXiv Detail & Related papers (2024-01-17T07:58:18Z)
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which combines a graph spatiotemporal process with an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z)
- LARA: A Light and Anti-overfitting Retraining Approach for Unsupervised Time Series Anomaly Detection [49.52429991848581]
We propose a Light and Anti-overfitting Retraining Approach (LARA) for deep variational auto-encoder (VAE) based time series anomaly detection methods.
This work makes three novel contributions: 1) the retraining process is formulated as a convex problem, so it converges quickly and prevents overfitting; 2) a ruminate block is designed to leverage historical data without the need to store it; and 3) it is proven mathematically that, when fine-tuning the latent vectors and reconstructed data, linear formations achieve the least adjusting error between the ground truths and the fine-tuned values.
arXiv Detail & Related papers (2023-10-09T12:36:16Z)
- Augment to Detect Anomalies with Continuous Labelling [10.646747658653785]
Anomaly detection aims to recognize samples that differ in some respect from the training observations.
Recent state-of-the-art deep learning-based anomaly detection methods suffer from high computational cost, complexity, unstable training procedures, and non-trivial implementation.
We leverage a simple learning procedure that trains a lightweight convolutional neural network, reaching state-of-the-art performance in anomaly detection.
arXiv Detail & Related papers (2022-07-03T20:11:51Z)
- MAD: Self-Supervised Masked Anomaly Detection Task for Multivariate Time Series [14.236092062538653]
Masked Anomaly Detection (MAD) is a general self-supervised learning task for multivariate time series anomaly detection.
By randomly masking a portion of the inputs and training a model to estimate them, MAD is an improvement over the traditional left-to-right next step prediction (NSP) task.
Our experimental results demonstrate that MAD can achieve better anomaly detection rates over traditional NSP approaches.
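The masked-prediction idea above can be illustrated with a toy stand-in: mask time steps, train a model to fill them in from the surrounding context, and score each point by its error when predicted from context alone. The ridge-regression imputer over a symmetric context window and the 30% mask rate below are assumptions for illustration, not the MAD architecture:

```python
# Masked-prediction anomaly scoring (toy sketch of the MAD task).
import numpy as np

rng = np.random.default_rng(1)

def make_pairs(x, ctx=3):
    # Features: the ctx points on each side of a (masked-out) centre point.
    X, y = [], []
    for t in range(ctx, len(x) - ctx):
        X.append(np.concatenate([x[t - ctx:t], x[t + 1:t + 1 + ctx]]))
        y.append(x[t])
    return np.array(X), np.array(y)

def masked_anomaly_scores(x, ctx=3, lam=1e-3):
    X, y = make_pairs(x, ctx)
    # "Mask" during training by dropping a random 30% of centre points.
    keep = rng.random(len(y)) > 0.3
    A = X[keep]
    # Ridge regression: predict the centre point from its context alone.
    w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y[keep])
    # Score every point by its reconstruction error when masked.
    return np.abs(X @ w - y)

x = np.sin(np.linspace(0, 20, 400))
x[200] += 3.0  # injected anomaly
scores = masked_anomaly_scores(x)
```

A point the model cannot reconstruct from its neighbours gets a large score, which is the signal MAD exploits in place of left-to-right next-step prediction.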
arXiv Detail & Related papers (2022-05-04T14:55:42Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Deep Visual Anomaly detection with Negative Learning [18.79849041106952]
In this paper, we propose anomaly detection with negative learning (ADNL), which employs the negative learning concept for the enhancement of anomaly detection.
The idea is to limit the reconstruction capability of a generative model using a given small set of anomaly examples.
This way, the network not only learns to reconstruct normal data but also keeps the learned normal distribution far from the possible distribution of anomalies.
arXiv Detail & Related papers (2021-05-24T01:48:44Z)
- TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks [73.01104041298031]
TadGAN is an unsupervised anomaly detection approach built on Generative Adversarial Networks (GANs).
To capture the temporal correlations of time series, we use LSTM Recurrent Neural Networks as base models for Generators and Critics.
To demonstrate the performance and generalizability of our approach, we test several anomaly scoring techniques and report the best-suited one.
arXiv Detail & Related papers (2020-09-16T15:52:04Z)
- Anomaly Detection at Scale: The Case for Deep Distributional Time Series Models [14.621700495712647]
The main novelty of our approach is that instead of modeling time series of real values or vectors of real values, we model time series of probability distributions over real values (or vectors).
Our method is amenable to streaming anomaly detection and scales to monitoring for anomalies on millions of time series.
We show that we outperform popular open-source anomaly detection tools by up to 17% average improvement for a real-world data set.
arXiv Detail & Related papers (2020-07-30T15:48:55Z)
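The distributional idea above can be sketched with a rolling Gaussian standing in for the deep distributional model: the model emits a predictive distribution per step, and the anomaly score is the negative log-likelihood of the observed value under it. The window size and the Gaussian form below are illustrative assumptions, not the paper's model:

```python
# Anomaly scoring via a per-step predictive distribution (toy sketch).
import math
import numpy as np

def nll_scores(x, window=30):
    scores = np.zeros(len(x))
    for t in range(window, len(x)):
        hist = x[t - window:t]
        mu, sigma = hist.mean(), hist.std() + 1e-6
        # Negative log-likelihood of x[t] under N(mu, sigma^2), up to a constant.
        scores[t] = 0.5 * ((x[t] - mu) / sigma) ** 2 + math.log(sigma)
    return scores

rng = np.random.default_rng(2)
x = rng.normal(size=500)
x[400] = 8.0  # injected anomaly
s = nll_scores(x)
```

Because the score is a likelihood under the current predictive distribution, it can be computed one step at a time, which is what makes this style of model amenable to streaming anomaly detection.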
This list is automatically generated from the titles and abstracts of the papers in this site.