Detecting Anomalies within Time Series using Local Neural
Transformations
- URL: http://arxiv.org/abs/2202.03944v1
- Date: Tue, 8 Feb 2022 15:51:31 GMT
- Title: Detecting Anomalies within Time Series using Local Neural
Transformations
- Authors: Tim Schneider, Chen Qiu, Marius Kloft, Decky Aspandi Latif, Steffen
Staab, Stephan Mandt, Maja Rudolph
- Abstract summary: Local Neural Transformations (LNT) is a method that learns local transformations of time series from data.
LNT produces an anomaly score for each time step and thus can be used to detect anomalies within time series.
Our experiments demonstrate that LNT can find anomalies in speech segments from the LibriSpeech data set and better detect interruptions to cyber-physical systems than previous work.
- Score: 30.668488830909936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a new method to detect anomalies within time series, which is
essential in many application domains, reaching from self-driving cars,
finance, and marketing to medical diagnosis and epidemiology. The method is
based on self-supervised deep learning that has played a key role in
facilitating deep anomaly detection on images, where powerful image
transformations are available. However, such transformations are widely
unavailable for time series. Addressing this, we develop Local Neural
Transformations (LNT), a method that learns local transformations of time series
from data. The method produces an anomaly score for each time step and thus can
be used to detect anomalies within time series. We prove in a theoretical
analysis that our novel training objective is more suitable for transformation
learning than previous deep anomaly detection (AD) methods. Our experiments
demonstrate that LNT can find anomalies in speech segments from the LibriSpeech
data set and better detect interruptions to cyber-physical systems than
previous work. Visualization of the learned transformations gives insight into
the type of transformations that LNT learns.
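To make the idea concrete, below is a minimal sketch of the recipe described in the abstract, not the authors' implementation: an encoder produces a latent vector per time step, a set of learned neural transformations acts on these local latents, and a per-time-step contrastive objective doubles as the anomaly score. The encoder architecture, the number of transformations, and the temperature are illustrative assumptions.

```python
# Minimal sketch of the LNT idea (not the authors' code): learn K neural
# transformations of per-time-step latents; a contrastive loss per step
# serves as the anomaly score. All hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LNTSketch(nn.Module):
    def __init__(self, in_dim=1, latent_dim=32, num_transforms=4):
        super().__init__()
        # 1D conv encoder: one latent vector per time step.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_dim, latent_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(latent_dim, latent_dim, kernel_size=3, padding=1),
        )
        # K learnable transformations acting on the local latents.
        self.transforms = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                          nn.Linear(latent_dim, latent_dim))
            for _ in range(num_transforms)
        ])

    def score(self, x, temperature=0.1):
        """x: (batch, channels, time) -> anomaly score per time step."""
        z = F.normalize(self.encoder(x).transpose(1, 2), dim=-1)  # (B, T, D)
        views = torch.stack(
            [F.normalize(t(z), dim=-1) for t in self.transforms], dim=2
        )                                                         # (B, T, K, D)
        # Positive: each transformed view vs. its own untransformed latent.
        pos = (views * z.unsqueeze(2)).sum(-1) / temperature      # (B, T, K)
        # Negatives: similarities among the transformed views themselves.
        sims = torch.einsum('btkd,btld->btkl', views, views) / temperature
        neg = sims.logsumexp(-1)                                  # (B, T, K)
        # Per-time-step score: mean contrastive loss over transformations.
        return (neg - pos).mean(-1)                               # (B, T)
```

Training would minimize `score(x).mean()` over normal data; at test time, large per-step values flag anomalous time steps.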
Related papers
- State-Space Modeling in Long Sequence Processing: A Survey on Recurrence in the Transformer Era [59.279784235147254]
This survey provides an in-depth summary of the latest approaches that are based on recurrent models for sequential data processing.
The emerging picture suggests there is room for novel routes, built on learning algorithms that depart from standard Backpropagation Through Time.
arXiv Detail & Related papers (2024-06-13T12:51:22Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- Do Deep Neural Networks Contribute to Multivariate Time Series Anomaly Detection? [12.419938668514042]
We study the anomaly detection performance of sixteen conventional, machine learning-based, and deep neural network approaches.
By analyzing and comparing the performance of each of the sixteen methods, we show that no family of methods outperforms the others.
arXiv Detail & Related papers (2022-04-04T16:32:49Z)
- Time-Series Anomaly Detection with Implicit Neural Representation [0.38073142980733]
Implicit Neural Representation-based Anomaly Detection (INRAD) is proposed.
We train a simple multi-layer perceptron that takes time as input and outputs corresponding values at that time.
Then we utilize the representation error as an anomaly score for detecting anomalies (a sketch follows this entry).
arXiv Detail & Related papers (2022-01-28T06:17:24Z)
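A minimal sketch of the INRAD idea summarized above: fit an MLP x_hat = f(t) to one series and use the pointwise fitting error as the anomaly score. The layer sizes, optimizer, and iteration count are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of implicit-neural-representation anomaly scoring:
# points the smooth implicit representation cannot fit well score high.
import torch
import torch.nn as nn

def inrad_scores(x, steps=2000, lr=1e-3):
    """x: (T,) tensor of one time series -> (T,) anomaly scores."""
    t = torch.linspace(-1.0, 1.0, len(x)).unsqueeze(1)   # normalized time input
    mlp = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                        nn.Linear(64, 64), nn.Tanh(),
                        nn.Linear(64, 1))
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    target = x.unsqueeze(1)
    for _ in range(steps):                                # fit f(t) ~ x_t
        opt.zero_grad()
        loss = ((mlp(t) - target) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        # Pointwise representation error as the anomaly score.
        return ((mlp(t) - target) ** 2).squeeze(1)
```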
- Neural Contextual Anomaly Detection for Time Series [7.523820334642732]
We introduce Neural Contextual Anomaly Detection (NCAD), a framework for anomaly detection on time series.
NCAD scales seamlessly from the unsupervised to supervised setting.
We demonstrate empirically on standard benchmark datasets that our approach obtains state-of-the-art performance.
arXiv Detail & Related papers (2021-07-16T04:33:53Z)
- Neural Transformation Learning for Deep Anomaly Detection Beyond Images [24.451389236365152]
We present a simple end-to-end procedure for anomaly detection with learnable transformations.
The key idea is to embed the transformed data into a semantic space such that the transformed data still resemble their untransformed form.
Our method learns domain-specific transformations and detects anomalies more accurately than previous work (see the sketch after this entry).
arXiv Detail & Related papers (2021-03-30T15:38:18Z)
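The objective summarized above can be sketched as follows; it is the sample-level analogue of the per-time-step score sketched for LNT earlier. The shapes and the temperature are illustrative assumptions, not the paper's exact loss implementation.

```python
# Hedged sketch of the transformation-learning objective: each learned
# transformation's embedding should stay close to the original sample's
# embedding while differing from the other transformations' embeddings.
import torch
import torch.nn.functional as F

def transformation_learning_loss(z, z_views, temperature=0.1):
    """z: (B, D) untransformed embeddings; z_views: (B, K, D) transformed."""
    z = F.normalize(z, dim=-1)
    z_views = F.normalize(z_views, dim=-1)
    # Positive: transformed view vs. its own untransformed embedding.
    pos = (z_views * z.unsqueeze(1)).sum(-1) / temperature     # (B, K)
    # Negatives: similarities among the transformed views.
    sims = torch.einsum('bkd,bld->bkl', z_views, z_views) / temperature
    neg = sims.logsumexp(-1)                                   # (B, K)
    # Low loss on normal data; the same quantity serves as the anomaly score.
    return (neg - pos).mean()
```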
- Self-Supervised Out-of-Distribution Detection in Brain CT Scans [46.78055929759839]
We propose a novel self-supervised learning technique for anomaly detection.
Our architecture largely consists of two parts: 1) reconstruction and 2) geometric-transformation prediction.
At test time, the geometric-transformation predictor assigns an anomaly score by computing the error between the applied transformation and its prediction (a sketch follows this entry).
arXiv Detail & Related papers (2020-11-10T22:21:48Z)
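A hedged sketch of the transformation-prediction half of this idea, assuming image rotations as the geometric transformations and a CNN classifier; both choices are illustrative, not necessarily the paper's exact setup.

```python
# Hedged sketch of scoring by geometric-transformation prediction: apply a
# known rotation, predict which one was applied, and use the prediction
# error as the anomaly score. Backbone and transformation set are assumed.
import torch
import torch.nn.functional as F

def rotation_score(img, predictor):
    """img: (1, C, H, W); predictor: CNN with 4 logits (0/90/180/270 deg)."""
    scores = []
    for k in range(4):                       # apply each known rotation
        rotated = torch.rot90(img, k, dims=(2, 3))
        logits = predictor(rotated)          # (1, 4)
        target = torch.tensor([k])
        # High cross-entropy = the model cannot recognize the transformation,
        # which signals an out-of-distribution (anomalous) input.
        scores.append(F.cross_entropy(logits, target))
    return torch.stack(scores).mean()
```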
- TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks [73.01104041298031]
TadGAN is an unsupervised anomaly detection approach built on Generative Adversarial Networks (GANs).
To capture the temporal correlations of time series, we use LSTM Recurrent Neural Networks as base models for Generators and Critics.
To demonstrate the performance and generalizability of our approach, we test several anomaly scoring techniques and report the best-suited one (see the sketch below).
arXiv Detail & Related papers (2020-09-16T15:52:04Z)
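A hedged sketch of a TadGAN-style setup: an LSTM-based generator reconstructs the series, and one plausible scoring variant mixes reconstruction error with a critic's judgement. The architectures, the per-time-step critic output, and the weight alpha are illustrative assumptions.

```python
# Hedged sketch of TadGAN-style scoring: combine the reconstruction error
# of an LSTM encoder-decoder (generator) with a critic's realness score.
import torch
import torch.nn as nn

class LSTMGenerator(nn.Module):
    def __init__(self, dim=1, hidden=32):
        super().__init__()
        self.enc = nn.LSTM(dim, hidden, batch_first=True)
        self.dec = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, dim)

    def forward(self, x):                    # x: (B, T, dim)
        h, _ = self.enc(x)
        h, _ = self.dec(h)
        return self.out(h)                   # reconstruction x_hat

def tadgan_score(x, generator, critic, alpha=0.5):
    """Pointwise score mixing reconstruction error and critic judgement."""
    with torch.no_grad():
        x_hat = generator(x)
        recon = (x - x_hat).abs().mean(-1)   # (B, T) pointwise error
        critic_out = critic(x)               # assumed (B, T) realness scores
    # Low critic output (less "real") raises the anomaly score.
    return alpha * recon + (1 - alpha) * (-critic_out)
```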
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.