Lossy Compression for Robust Unsupervised Time-Series Anomaly Detection
- URL: http://arxiv.org/abs/2212.02303v1
- Date: Mon, 5 Dec 2022 14:29:16 GMT
- Title: Lossy Compression for Robust Unsupervised Time-Series Anomaly Detection
- Authors: Christopher P. Ley, Jorge F. Silva
- Abstract summary: We propose a Lossy Causal Temporal Convolutional Neural Network Autoencoder for anomaly detection.
Our framework uses a rate-distortion loss and an entropy bottleneck to learn a compressed latent representation for the task.
- Score: 4.873362301533825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A new Lossy Causal Temporal Convolutional Neural Network Autoencoder for
anomaly detection is proposed in this work. Our framework uses a
rate-distortion loss and an entropy bottleneck to learn a compressed latent
representation for the task. The main idea of using a rate-distortion loss is
to introduce representation flexibility that ignores or becomes robust to
unlikely events with distinctive patterns, such as anomalies. These anomalies
manifest as unique distortion features that can be accurately detected in
testing conditions. This new architecture allows us to train a fully
unsupervised model that has high accuracy in detecting anomalies from a
distortion score despite being trained with some portion of unlabelled
anomalous data. This setting is in stark contrast to many of the
state-of-the-art unsupervised methodologies that require the model to be only
trained on "normal data". We argue that this partially violates the concept of
unsupervised training for anomaly detection, since the model relies on an
informed decision that separates normal from abnormal data before training.
Additionally, there is evidence to suggest that this also affects the model's
ability to generalise. We demonstrate that models that succeed in the paradigm where
they are only trained on normal data fail to be robust when anomalous data is
injected into the training. In contrast, our compression-based approach
converges to a robust representation that tolerates some anomalous distortion.
The robust representation achieved by a model using a rate-distortion loss can
be used in a more realistic unsupervised anomaly detection scheme.
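The detection mechanism the abstract describes can be illustrated with a minimal NumPy sketch. This is a toy illustration, not the authors' causal TCN autoencoder: the function names, the `lam` rate weight, and the quantile threshold `q` are all assumptions. Windows are scored by reconstruction distortion, training would minimize a rate-distortion objective of the form L = D + λR, and windows whose distortion score exceeds a high quantile are flagged as anomalous.

```python
import numpy as np

def distortion_scores(x, x_hat):
    """Per-window distortion: mean squared reconstruction error."""
    return np.mean((x - x_hat) ** 2, axis=-1)

def rate_distortion_loss(x, x_hat, rate, lam=0.1):
    """Rate-distortion objective L = D + lambda * R, where R stands in for
    the entropy (rate) of the latent code produced by the bottleneck."""
    return float(np.mean((x - x_hat) ** 2) + lam * rate)

def detect(scores, q=0.99):
    """Flag windows whose distortion exceeds the q-quantile threshold."""
    thr = np.quantile(scores, q)
    return scores > thr
```

A compressed representation that tolerates some anomalous distortion during training still reconstructs typical windows well, so an anomalous window (e.g. a spike a sine-wave model never learned to encode) produces a distortion score far above the quantile threshold and is flagged at test time.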
Related papers
- AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model [59.08735812631131]
Anomaly inspection plays an important role in industrial manufacturing.
Existing anomaly inspection methods are limited in their performance due to insufficient anomaly data.
We propose AnomalyDiffusion, a novel diffusion-based few-shot anomaly generation model.
arXiv Detail & Related papers (2023-12-10T05:13:40Z)
- Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo-anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with other existing state-of-the-art PAs generation and reconstruction based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z)
- An Iterative Method for Unsupervised Robust Anomaly Detection Under Data Contamination [24.74938110451834]
Most deep anomaly detection models are based on learning normality from datasets.
In practice, the normality assumption is often violated due to the nature of real data distributions.
We propose a learning framework to reduce this gap and achieve better normality representation.
arXiv Detail & Related papers (2023-09-18T02:36:19Z)
- Time-series Anomaly Detection via Contextual Discriminative Contrastive Learning [0.0]
One-class classification methods are commonly used for anomaly detection tasks.
We propose a novel approach inspired by the loss function of DeepSVDD.
We combine our approach with a deterministic contrastive loss from Neutral AD, a promising self-supervised learning anomaly detection approach.
arXiv Detail & Related papers (2023-04-16T21:36:19Z)
- Are we certain it's anomalous? [57.729669157989235]
Anomaly detection in time series is a complex task since anomalies are rare due to highly non-linear temporal correlations.
Here we propose the novel use of Hyperbolic uncertainty for Anomaly Detection (HypAD).
HypAD learns to reconstruct the input signal in a self-supervised manner.
arXiv Detail & Related papers (2022-11-16T21:31:39Z)
- Augment to Detect Anomalies with Continuous Labelling [10.646747658653785]
Anomaly detection aims to recognize samples that differ in some respect from the training observations.
Recent state-of-the-art deep learning-based anomaly detection methods suffer from high computational cost, complexity, unstable training procedures, and non-trivial implementation.
We leverage a simple learning procedure that trains a lightweight convolutional neural network, reaching state-of-the-art performance in anomaly detection.
arXiv Detail & Related papers (2022-07-03T20:11:51Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Learning Memory-guided Normality for Anomaly Detection [33.77435699029528]
We present an unsupervised learning approach to anomaly detection that considers the diversity of normal patterns explicitly.
We also present novel feature compactness and separateness losses to train the memory, boosting the discriminative power of both memory items and deeply learned features from normal data.
arXiv Detail & Related papers (2020-03-30T05:30:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and accepts no responsibility for any consequences of its use.