CPNet: Cross-Parallel Network for Efficient Anomaly Detection
- URL: http://arxiv.org/abs/2108.04454v2
- Date: Wed, 11 Aug 2021 02:16:37 GMT
- Title: CPNet: Cross-Parallel Network for Efficient Anomaly Detection
- Authors: Youngsaeng Jin, David Han and Hanseok Ko
- Abstract summary: A Cross-Parallel Network (CPNet) for efficient anomaly detection is proposed to minimize computation without a drop in performance.
An inter-network shift module is incorporated to capture temporal relationships among sequential frames to enable more accurate future predictions.
- Score: 20.84973451610082
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Anomaly detection in video streams is a challenging problem because of the
scarcity of abnormal events and the difficulty of accurately annotating them.
To alleviate these issues, unsupervised learning-based prediction methods have
been previously applied. These approaches train the model with only normal
events and predict a future frame from a sequence of preceding frames by use of
encoder-decoder architectures so that they result in small prediction errors on
normal events but large errors on abnormal events. This architecture, however,
carries a computational burden, whereas some anomaly detection tasks require
low computational cost without sacrificing performance. In this paper, a
Cross-Parallel Network (CPNet) for efficient anomaly detection is proposed to
minimize computation without a drop in performance. It consists of N smaller
parallel U-Nets, each designed to handle a single input frame, which makes the
calculations significantly more efficient. Additionally, an inter-network
shift module is incorporated to capture temporal relationships among
sequential frames and enable more accurate future predictions. The
quantitative results show that our model requires less computational cost than
the baseline U-Net while delivering equivalent performance in anomaly
detection.
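Since the paper is described here only at the abstract level, the following is a minimal, hypothetical PyTorch sketch of the cross-parallel idea, not the authors' implementation: N lightweight encoder-decoder branches each process one input frame, a slice of each branch's feature channels is passed to the neighboring branch as a simple stand-in for the inter-network shift module, and the fused features are decoded into a predicted future frame whose prediction error can serve as the anomaly score. Branch widths, the shift fraction, and the fusion head are illustrative assumptions.
```python
# Minimal sketch of the cross-parallel idea (illustrative only, not the
# authors' code): N small encoder-decoder branches, one per input frame,
# exchange a slice of feature channels with the preceding branch before the
# fused features are decoded into a predicted future frame.
import torch
import torch.nn as nn


class SmallBranch(nn.Module):
    """Lightweight encoder-decoder that handles a single frame."""
    def __init__(self, in_ch=3, width=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(width, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, width, 4, stride=2, padding=1), nn.ReLU(),
        )


class CrossParallelSketch(nn.Module):
    def __init__(self, n_frames=4, width=32, shift_frac=0.25):
        super().__init__()
        self.branches = nn.ModuleList(
            [SmallBranch(width=width) for _ in range(n_frames)])
        self.shift_ch = int(width * shift_frac)   # channels exchanged per branch
        self.head = nn.Conv2d(n_frames * width, 3, 3, padding=1)  # fuse to one frame

    def forward(self, frames):                    # frames: (B, N, 3, H, W)
        feats = [b.enc(frames[:, i]) for i, b in enumerate(self.branches)]
        shifted = []
        for i, f in enumerate(feats):
            prev = feats[i - 1] if i > 0 else torch.zeros_like(f)
            # replace the first shift_ch channels with the previous branch's
            # features so temporal context flows across the parallel branches
            shifted.append(torch.cat([prev[:, :self.shift_ch],
                                      f[:, self.shift_ch:]], dim=1))
        decoded = [b.dec(s) for b, s in zip(self.branches, shifted)]
        return self.head(torch.cat(decoded, dim=1))  # predicted future frame


if __name__ == "__main__":
    clip = torch.randn(2, 4, 3, 64, 64)           # four preceding frames
    target = torch.randn(2, 3, 64, 64)            # the actual next frame
    pred = CrossParallelSketch()(clip)
    # a large prediction error on the next frame suggests an abnormal event
    anomaly_score = torch.mean((pred - target) ** 2, dim=(1, 2, 3))
    print(anomaly_score.shape)                    # torch.Size([2])
```
In the actual CPNet, the shift operates on the intermediate features of the parallel U-Nets; the exact channel fractions and the decoder fusion are design details not specified in the abstract above.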
Related papers
- Latency-aware Unified Dynamic Networks for Efficient Image Recognition [72.8951331472913]
LAUDNet is a framework to bridge the theoretical and practical efficiency gap in dynamic networks.
It integrates three primary dynamic paradigms: spatially adaptive computation, dynamic layer skipping, and dynamic channel skipping.
It can notably reduce the latency of models like ResNet by over 50% on platforms such as V100, 3090, and TX2 GPUs.
arXiv Detail & Related papers (2023-08-30T10:57:41Z)
- Fast Exploration of the Impact of Precision Reduction on Spiking Neural Networks [63.614519238823206]
Spiking Neural Networks (SNNs) are a practical choice when the target hardware sits at the edge of computing.
We employ an Interval Arithmetic (IA) model to develop an exploration methodology that exploits the model's ability to propagate the approximation error.
arXiv Detail & Related papers (2022-11-22T15:08:05Z)
- Deep Subspace Encoders for Nonlinear System Identification [0.0]
We propose a method which uses a truncated prediction loss and a subspace encoder for state estimation.
We show that, under mild conditions, the proposed method is locally consistent, increases optimization stability, and achieves increased data efficiency.
arXiv Detail & Related papers (2022-10-26T16:04:38Z)
- A Universal Error Measure for Input Predictions Applied to Online Graph Problems [57.58926849872494]
We introduce a novel measure for quantifying the error in input predictions.
The measure captures errors due to absent predicted requests as well as unpredicted actual requests.
arXiv Detail & Related papers (2022-05-25T15:24:03Z)
- AEGNN: Asynchronous Event-based Graph Neural Networks [54.528926463775946]
Event-based Graph Neural Networks generalize standard GNNs to process events as "evolving" temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
arXiv Detail & Related papers (2022-03-31T16:21:12Z)
- A Semi-Supervised Approach for Abnormal Event Prediction on Large Operational Network Time-Series Data [1.544681800932596]
This paper presents a novel semi-supervised method that efficiently captures dependencies between network time series and across time points.
The method can use the limited labeled data to explicitly learn a separable embedding space for normal and abnormal samples.
Experiments demonstrate that our approach significantly outperformed state-of-the-art approaches for event detection on a large real-world network log.
arXiv Detail & Related papers (2021-10-14T18:33:57Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that a likelihood ratio loss with interarrival time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- Improving Video Instance Segmentation by Light-weight Temporal Uncertainty Estimates [11.580916951856256]
We present a time-dynamic approach to model uncertainties of instance segmentation networks.
We apply this approach to the detection of false positives and the estimation of prediction quality.
The proposed method only requires a readily trained neural network and video sequence input.
arXiv Detail & Related papers (2020-12-14T13:39:05Z)
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.