SiTGRU: Single-Tunnelled Gated Recurrent Unit for Abnormality Detection
- URL: http://arxiv.org/abs/2003.13528v1
- Date: Mon, 30 Mar 2020 14:58:13 GMT
- Title: SiTGRU: Single-Tunnelled Gated Recurrent Unit for Abnormality Detection
- Authors: Habtamu Fanta, Zhiwen Shao, Lizhuang Ma
- Abstract summary: We propose a novel version of the Gated Recurrent Unit (GRU), called Single-Tunnelled GRU, for abnormality detection.
Our proposed optimized GRU model outperforms standard GRU and Long Short-Term Memory (LSTM) networks on most metrics for detection and generalization tasks.
- Score: 29.500392184282518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Abnormality detection is a challenging task due to its dependence
on a specific context and the unconstrained variability of practical scenarios.
In recent years, it has benefited from the powerful features learnt by deep
neural networks, as well as from handcrafted features specialized for
abnormality detectors. However, these approaches, despite their large
complexity, still have limitations in handling long-term sequential data
(e.g., videos), and their learnt features do not thoroughly capture useful
information. Recurrent Neural Networks (RNNs) have been shown to be capable of
robustly dealing with temporal data in long-term sequences. In this paper, we
propose a novel version of the Gated Recurrent Unit (GRU), called
Single-Tunnelled GRU, for abnormality detection. In particular, the
Single-Tunnelled GRU discards the heavily weighted reset gate from GRU cells,
a gate that overlooks the importance of past content by favouring only the
current input, to obtain an optimized single-gated cell model. Moreover, we
substitute the hyperbolic tangent activation in standard GRUs with the sigmoid
activation, as the former suffers from performance loss in deeper networks.
Empirical results show that our proposed optimized GRU model outperforms
standard GRU and Long Short-Term Memory (LSTM) networks on most metrics for
detection and generalization tasks on the CUHK Avenue and UCSD datasets. The
model is also computationally efficient, with reduced training and testing
time over standard RNNs.
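To make the two modifications concrete, below is a minimal PyTorch sketch of a single-tunnelled GRU cell, reconstructed from the abstract alone: the reset gate is dropped and the candidate activation is sigmoid rather than tanh. The gate equations, parameter names, and the unrolling example are my reading of the abstract, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class SiTGRUCell(nn.Module):
    """Minimal sketch of a Single-Tunnelled GRU cell, inferred from the
    abstract: the reset gate is discarded and the tanh activation is
    replaced by sigmoid. The exact formulation in the paper may differ."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Update gate: the single remaining "tunnel" of the cell.
        self.w_z = nn.Linear(input_size, hidden_size)
        self.u_z = nn.Linear(hidden_size, hidden_size, bias=False)
        # Candidate state: no reset gate is applied to the previous state.
        self.w_h = nn.Linear(input_size, hidden_size)
        self.u_h = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor, h_prev: torch.Tensor) -> torch.Tensor:
        # Update gate decides how much of the candidate replaces the past.
        z = torch.sigmoid(self.w_z(x) + self.u_z(h_prev))
        # Candidate uses sigmoid instead of the standard tanh.
        h_tilde = torch.sigmoid(self.w_h(x) + self.u_h(h_prev))
        # Standard GRU interpolation between past state and candidate.
        return (1.0 - z) * h_prev + z * h_tilde

# Unrolling over a sequence of shape (T, batch, input_size).
cell = SiTGRUCell(input_size=64, hidden_size=128)
h = torch.zeros(4, 128)
for x_t in torch.randn(10, 4, 64):
    h = cell(x_t, h)
```

Dropping the reset gate removes one of the three weight blocks of a standard GRU, which is consistent with the reduced training and testing time reported in the abstract.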
Related papers
- NIDS Neural Networks Using Sliding Time Window Data Processing with Trainable Activations and its Generalization Capability [0.0]
This paper presents neural networks for network intrusion detection systems (NIDS) that operate on flow data preprocessed with a time window.
It requires only eleven features, which do not rely on deep packet inspection, can be found in most NIDS datasets, and are easily obtained from conventional flow collectors.
The reported training accuracy exceeds 99% for the proposed method with as few as twenty neural network input features.
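As a rough illustration of the flow-level, time-windowed preprocessing the summary describes, the sketch below slices a stream of per-flow feature vectors into overlapping windows; the function name and the window and stride values are hypothetical, not from the paper.

```python
import numpy as np

def sliding_windows(features: np.ndarray, window: int, stride: int = 1) -> np.ndarray:
    """Turn a (T, F) stream of per-flow feature vectors into overlapping
    (window, F) blocks that a NIDS model can consume. Illustrative only;
    the paper's exact windowing scheme is not specified in the summary."""
    return np.stack([features[i:i + window]
                     for i in range(0, len(features) - window + 1, stride)])

# e.g., eleven flow features per record, windows of 32 records
blocks = sliding_windows(np.random.rand(1000, 11), window=32, stride=16)
```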
arXiv Detail & Related papers (2024-10-24T11:36:19Z)
- Multivariate Time-Series Anomaly Detection based on Enhancing Graph Attention Networks with Topological Analysis [31.43159668073136]
Unsupervised anomaly detection in time series is essential in industrial applications, as it significantly reduces the need for manual intervention.
Traditional methods use Graph Neural Networks (GNNs) or Transformers to analyze spatial dependencies, while RNNs model temporal dependencies.
This paper introduces TopoGDN, a novel temporal model for multivariate time series anomaly detection built on an enhanced Graph Attention Network (GAT).
arXiv Detail & Related papers (2024-08-23T14:06:30Z)
- Exploiting T-norms for Deep Learning in Autonomous Driving [60.205021207641174]
We show how to define memory-efficient t-norm-based losses, enabling the use of t-norms for the task of event detection in autonomous driving.
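For readers unfamiliar with t-norms: a t-norm generalizes logical conjunction to truth degrees in [0, 1]. Below is a generic sketch of a loss built on the product t-norm, offered only to illustrate the concept; it is not the paper's memory-efficient formulation.

```python
import torch

def product_tnorm_loss(constraint_probs: torch.Tensor) -> torch.Tensor:
    """Encode the conjunction of several logical constraints with the
    product t-norm T(a, b) = a * b. `constraint_probs` holds the predicted
    truth degrees of each constraint; minimising the negative log of their
    product pushes all constraints towards satisfaction. A generic sketch,
    not the paper's memory-efficient formulation."""
    eps = 1e-8  # numerical stability for the log
    return -constraint_probs.clamp_min(eps).log().sum(dim=-1).mean()
```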
arXiv Detail & Related papers (2024-02-17T18:51:21Z)
- Normality Learning-based Graph Anomaly Detection via Multi-Scale Contrastive Learning [61.57383634677747]
Graph anomaly detection (GAD) has attracted increasing attention in machine learning and data mining.
Here, we propose a normality learning-based GAD framework via multi-scale contrastive learning networks (abbreviated as NLGAD).
Notably, the proposed algorithm improves the detection performance (up to 5.89% AUC gain) compared with the state-of-the-art methods.
arXiv Detail & Related papers (2023-09-12T08:06:04Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- An Improved Time Feedforward Connections Recurrent Neural Networks [3.0965505512285967]
Recurrent Neural Networks (RNNs) have been widely applied to deal with temporal problems, such as flood forecasting and financial data processing.
Traditional RNN models amplify the gradient issue due to the strict serial dependency across time steps.
An improved Time Feedforward Connections Recurrent Neural Networks (TFC-RNNs) model was first proposed to address this gradient issue.
A novel cell structure named Single Gate Recurrent Unit (SGRU) was presented to reduce the number of parameters in RNN cells.
arXiv Detail & Related papers (2022-11-03T09:32:39Z)
- Memory-augmented Adversarial Autoencoders for Multivariate Time-series Anomaly Detection with Deep Reconstruction and Prediction [4.033624665609417]
We propose MemAAE, a novel unsupervised anomaly detection method for time-series.
By jointly training two complementary proxy tasks, reconstruction and prediction, we show that detecting anomalies via multiple tasks achieves superior performance.
MemAAE achieves an overall F1 score of 0.90 on four public datasets, significantly outperforming the best baseline by 0.02.
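A minimal sketch of how the two proxy tasks might be combined into a single anomaly score is shown below; the function, the squared-error terms, and the weighting `alpha` are illustrative assumptions, not MemAAE's actual scoring rule.

```python
import torch

def joint_anomaly_score(x, x_recon, x_next, x_pred, alpha: float = 0.5):
    """Combine a reconstruction error on the current window with a
    prediction error on the next step, in the spirit of MemAAE's two
    proxy tasks. `alpha` is an illustrative weighting, not the paper's."""
    recon_err = (x - x_recon).pow(2).mean(dim=-1)
    pred_err = (x_next - x_pred).pow(2).mean(dim=-1)
    return alpha * recon_err + (1.0 - alpha) * pred_err
```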
arXiv Detail & Related papers (2021-10-15T18:29:05Z)
- "Forget" the Forget Gate: Estimating Anomalies in Videos using Self-contained Long Short-Term Memory Networks [20.211951213040937]
We present an approach for detecting anomalies in videos by learning a novel LSTM-based self-contained network on normal dense optical flow.
We introduce a bi-gated, light LSTM cell by discarding the forget gate and introducing sigmoid activation.
Removing the forget gate results in a simplified and undemanding LSTM cell with improved effectiveness and computational efficiency.
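This cell modification mirrors SiTGRU's treatment of the GRU reset gate. Below is a hedged sketch of such a forget-gate-free LSTM cell, inferred from the summary only; gate layout and naming are assumptions.

```python
import torch
import torch.nn as nn

class BiGatedLSTMCell(nn.Module):
    """Sketch of a 'forget-gate-free' LSTM: only the input and output
    gates remain, and sigmoid replaces the candidate's tanh. Details are
    inferred from the summary and may differ from the paper."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # One linear map produces input gate, output gate, and candidate.
        self.gates = nn.Linear(input_size + hidden_size, 3 * hidden_size)

    def forward(self, x, h_prev, c_prev):
        i, o, g = self.gates(torch.cat([x, h_prev], dim=-1)).chunk(3, dim=-1)
        i, o = torch.sigmoid(i), torch.sigmoid(o)
        g = torch.sigmoid(g)   # sigmoid candidate, per the summary
        c = c_prev + i * g     # no forget gate: past cell state is kept
        h = o * torch.sigmoid(c)
        return h, c
```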
arXiv Detail & Related papers (2021-04-03T20:43:49Z)
- Neural Pruning via Growing Regularization [82.9322109208353]
We extend regularization to tackle two central problems of pruning: pruning schedule and weight importance scoring.
Specifically, we propose an L2 regularization variant with rising penalty factors and show it can bring significant accuracy gains.
The proposed algorithms are easy to implement and scalable to large datasets and networks in both structured and unstructured pruning.
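A toy sketch of the rising-penalty idea follows; the linear schedule and the growth constant `delta` are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def growing_l2_penalty(params, step: int, delta: float = 1e-4):
    """L2 penalty whose coefficient rises with the training step, in the
    spirit of growing regularization for pruning. The linear schedule and
    `delta` are illustrative, not the paper's schedule, which grows
    penalties to separate important from unimportant weights."""
    lam = delta * step  # penalty factor grows as training proceeds
    return lam * sum(p.pow(2).sum() for p in params)
```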
arXiv Detail & Related papers (2020-12-16T20:16:28Z)
- TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks [73.01104041298031]
TadGAN is an unsupervised anomaly detection approach built on Generative Adversarial Networks (GANs).
To capture the temporal correlations of time series, we use LSTM Recurrent Neural Networks as base models for Generators and Critics.
To demonstrate the performance and generalizability of our approach, we test several anomaly scoring techniques and report the best-suited one.
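One plausible way to turn the generator and critic outputs into an anomaly score is a weighted blend of reconstruction error and negated critic score; the function and weighting below are illustrative assumptions, not the best-suited technique the paper reports.

```python
import torch

def gan_anomaly_score(x, x_recon, critic_score, beta: float = 0.5):
    """Blend the generator's reconstruction error with the critic's output
    into one score (higher = more anomalous). An illustrative sketch; the
    paper evaluates several scoring techniques and reports the best one."""
    recon_err = (x - x_recon).abs().mean(dim=-1)
    return beta * recon_err - (1.0 - beta) * critic_score
```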
arXiv Detail & Related papers (2020-09-16T15:52:04Z)