GADformer: A Transparent Transformer Model for Group Anomaly Detection on Trajectories
- URL: http://arxiv.org/abs/2303.09841v2
- Date: Thu, 25 Apr 2024 10:09:47 GMT
- Title: GADformer: A Transparent Transformer Model for Group Anomaly Detection on Trajectories
- Authors: Andreas Lohrer, Darpan Malik, Claudius Zelenka, Peer Kröger
- Abstract summary: Group Anomaly Detection (GAD) identifies unusual patterns in groups where individual members might not be anomalous.
This paper introduces GADformer, a BERT-based model for attention-driven GAD on trajectories in unsupervised and semi-supervised settings.
We also introduce the Block-Attention-anomaly-Score (BAS) to enhance model transparency by scoring attention patterns.
- Score: 0.9971221656644376
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Group Anomaly Detection (GAD) identifies unusual patterns in groups where individual members might not be anomalous. This task is of major importance across multiple disciplines, in which sequences such as trajectories can also be considered as a group. As groups become more diverse in heterogeneity and size, detecting group anomalies becomes challenging, especially without supervision. Although Recurrent Neural Networks are well-established deep sequence models, their performance can decrease with increasing sequence lengths. Hence, this paper introduces GADformer, a BERT-based model for attention-driven GAD on trajectories in unsupervised and semi-supervised settings. We demonstrate how group anomalies can be detected by attention-based GAD. We also introduce the Block-Attention-anomaly-Score (BAS) to enhance model transparency by scoring attention patterns. In addition, synthetic trajectory generation allows for various ablation studies. In extensive experiments we investigate the robustness of our approach and related works to trajectory noise and novelties on synthetic data and three real-world datasets.
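To make the abstract's core idea concrete, below is a minimal, hedged PyTorch sketch of attention-driven anomaly scoring for trajectory groups. Everything in it is an illustrative assumption: the module `TrajectoryGAD`, the mean-pooled score head, and the toy `attention_pattern_score` helper are hypothetical stand-ins and do not reproduce GADformer's published architecture or the actual Block-Attention-anomaly-Score (BAS).

```python
# Hedged sketch: a transformer encoder that scores a trajectory (treated as a
# group of points) for anomalousness. Names and the scoring scheme are
# assumptions for illustration, not the authors' method.
import torch
import torch.nn as nn


class TrajectoryGAD(nn.Module):
    """Encodes a trajectory (group of points) and emits a scalar anomaly score."""

    def __init__(self, point_dim: int = 2, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(point_dim, d_model)  # project raw (x, y) points
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.score_head = nn.Linear(d_model, 1)     # group-level anomaly score

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, seq_len, point_dim) -- one trajectory per group
        h = self.encoder(self.embed(traj))           # (batch, seq_len, d_model)
        pooled = h.mean(dim=1)                        # pool over trajectory points
        return self.score_head(pooled).squeeze(-1)   # (batch,)


def attention_pattern_score(attn: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    """Toy stand-in for an attention-based anomaly score (assumption): mean
    absolute deviation of a group's attention map from a reference map."""
    # attn, reference: (heads, seq_len, seq_len)
    return (attn - reference).abs().mean()


if __name__ == "__main__":
    model = TrajectoryGAD()
    trajectories = torch.randn(8, 50, 2)   # 8 groups, 50 points each
    print(model(trajectories).shape)       # torch.Size([8])
```

A BAS-like score would compare a group's attention patterns against those of normal groups; the toy helper above only measures deviation from a single reference map as a placeholder.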
Related papers
- ARC: A Generalist Graph Anomaly Detector with In-Context Learning [62.202323209244]
ARC is a generalist GAD approach that enables a "one-for-all" GAD model to detect anomalies across various graph datasets on-the-fly.
Equipped with in-context learning, ARC can directly extract dataset-specific patterns from the target dataset.
Extensive experiments on multiple benchmark datasets from various domains demonstrate the superior anomaly detection performance, efficiency, and generalizability of ARC.
arXiv Detail & Related papers (2024-05-27T02:42:33Z) - End-To-End Self-tuning Self-supervised Time Series Anomaly Detection [32.746688248671084]
Time series anomaly detection (TSAD) finds many applications such as monitoring environmental sensors, industry KPIs, patient biomarkers, etc.
A two-fold challenge for TSAD is building a model that is both versatile and unsupervised, able to detect various types of time series anomalies.
We introduce TSAP for TSA "on autoPilot", which can (self-)tune hyperparameters end-to-end.
arXiv Detail & Related papers (2024-04-03T16:57:26Z) - Generating and Reweighting Dense Contrastive Patterns for Unsupervised Anomaly Detection [59.34318192698142]
We introduce a prior-less anomaly generation paradigm and develop an innovative unsupervised anomaly detection framework named GRAD.
PatchDiff effectively exposes various types of anomaly patterns.
Experiments on both the MVTec AD and MVTec LOCO datasets also support the aforementioned observation.
arXiv Detail & Related papers (2023-12-26T07:08:06Z) - Unraveling the "Anomaly" in Time Series Anomaly Detection: A Self-supervised Tri-domain Solution [89.16750999704969]
Anomaly labels hinder traditional supervised models in time series anomaly detection.
Various SOTA deep learning techniques, such as self-supervised learning, have been introduced to tackle this issue.
We propose a novel self-supervised learning-based Tri-domain Anomaly Detector (TriAD).
arXiv Detail & Related papers (2023-11-19T05:37:18Z) - Graph Anomaly Detection at Group Level: A Topology Pattern Enhanced Unsupervised Approach [25.383587951822964]
This paper introduces a novel unsupervised framework for a new task called Group-level Graph Anomaly Detection (Gr-GAD).
The proposed framework first employs a variant of Graph AutoEncoder (GAE) to locate anchor nodes that belong to potential anomaly groups by capturing long-range inconsistencies.
The experimental results on both real-world and synthetic datasets demonstrate that the proposed framework shows superior performance in identifying and localizing anomaly groups.
arXiv Detail & Related papers (2023-08-02T10:22:04Z) - DynGFN: Towards Bayesian Inference of Gene Regulatory Networks with GFlowNets [81.75973217676986]
Gene regulatory networks (GRNs) describe interactions between genes and their products that control gene expression and cellular function.
Existing methods either focus on identifying cyclic structure from dynamics or on learning complex Bayesian posteriors over DAGs, but not both.
In this paper we leverage the fact that it is possible to estimate the "velocity" of gene expression with RNA velocity techniques to develop an approach that addresses both challenges.
arXiv Detail & Related papers (2023-02-08T16:36:40Z) - ARISE: Graph Anomaly Detection on Attributed Networks via Substructure Awareness [70.60721571429784]
We propose a new graph anomaly detection framework on attributed networks via substructure awareness (ARISE).
ARISE focuses on the substructures in the graph to discern abnormalities.
Experiments show that ARISE greatly improves detection performance compared to state-of-the-art attributed network anomaly detection (ANAD) algorithms.
arXiv Detail & Related papers (2022-11-28T12:17:40Z) - Stochastic Aggregation in Graph Neural Networks [9.551282469099887]
Graph neural networks (GNNs) manifest pathologies including over-smoothing and limited discriminating power.
We present a unifying framework for stochastic aggregation (STAG) in GNNs, where noise is (adaptively) injected into the aggregation process from the neighborhood to form node embeddings (a toy sketch of such noisy aggregation follows this list).
arXiv Detail & Related papers (2021-02-25T02:52:03Z) - Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z) - Supervised Convex Clustering [1.4610038284393165]
We propose and develop a new statistical pattern discovery method named Supervised Convex Clustering (SCC).
SCC borrows strength from both information sources and guides towards finding more interpretable patterns via a joint convex fusion penalty.
We demonstrate the practical advantages of SCC through simulations and a case study on Alzheimer's Disease genomics.
arXiv Detail & Related papers (2020-05-25T16:12:38Z)
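As referenced in the Stochastic Aggregation (STAG) entry above, the following is a minimal sketch of noise injected into neighborhood aggregation. The layer name `NoisyMeanAggregation`, the dense adjacency input, and the fixed Gaussian noise scale are illustrative assumptions and do not implement that paper's adaptive noise mechanism.

```python
# Hedged sketch: perturb neighbor features with noise before mean aggregation,
# loosely following the STAG summary. Names and the plain Gaussian noise model
# are assumptions for illustration.
import torch
import torch.nn as nn


class NoisyMeanAggregation(nn.Module):
    """Mean-aggregates neighbor features after perturbing them with noise."""

    def __init__(self, in_dim: int, out_dim: int, noise_std: float = 0.1):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.noise_std = noise_std

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim); adj: dense (num_nodes, num_nodes) adjacency
        noisy = x + self.noise_std * torch.randn_like(x)   # perturb neighbor messages
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)  # avoid division by zero
        agg = adj @ noisy / deg                            # mean over neighbors
        return torch.relu(self.lin(agg))
```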