ATCN: Resource-Efficient Processing of Time Series on Edge
- URL: http://arxiv.org/abs/2011.05260v4
- Date: Mon, 21 Mar 2022 22:08:22 GMT
- Title: ATCN: Resource-Efficient Processing of Time Series on Edge
- Authors: Mohammadreza Baharani, Hamed Tabkhi
- Abstract summary: This paper presents a scalable deep learning model called Agile Temporal Convolutional Network (ATCN) for highly accurate, fast classification and time-series prediction.
ATCN is primarily designed for embedded edge devices with very limited performance and memory, such as wearable biomedical devices and real-time reliability monitoring systems.
- Score: 3.883460584034766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a scalable deep learning model called Agile Temporal
Convolutional Network (ATCN) for highly accurate, fast classification and time
series prediction in resource-constrained embedded systems. ATCN is a family of
compact networks with formalized hyperparameters that enable
application-specific adjustments to be made to the model architecture. It is
primarily designed for embedded edge devices with very limited performance and
memory, such as wearable biomedical devices and real-time reliability
monitoring systems. ATCN makes fundamental improvements over the mainstream
temporal convolutional neural networks, including residual connections to
increase the network depth and accuracy, and the incorporation of depthwise
separable convolution to reduce the computational complexity of the model. As
part of the present work, two ATCN families, namely T0 and T1, are also
presented and evaluated on two classes of embedded processors: the Cortex-M7
and the Cortex-A57. An evaluation of the ATCN models against the
best-in-class InceptionTime and MiniRocket shows that ATCN nearly matches their
accuracy while improving the execution time on a broad range of embedded and
cyber-physical applications with demand for real-time processing on the
embedded edge. At the same time, in contrast to existing solutions, ATCN is the
first time-series classifier based on deep learning that can be run bare-metal
on embedded microcontrollers (Cortex-M7) with limited computational performance
and memory capacity while delivering state-of-the-art accuracy.
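The abstract's central efficiency lever, replacing a standard convolution with a depthwise separable one, can be illustrated with a short sketch. This is plain NumPy for exposition, not the authors' code, and the channel counts and kernel size are made up for illustration:

```python
import numpy as np

def conv1d(x, w):
    """Standard 1D convolution: x is (C_in, T), w is (C_out, C_in, K)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    y = np.zeros((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            y[o, t] = np.sum(w[o] * x[:, t:t + k])
    return y

def depthwise_separable_conv1d(x, w_dw, w_pw):
    """Depthwise (one K-tap filter per input channel) then pointwise (1x1) conv.

    w_dw is (C_in, K); w_pw is (C_out, C_in).
    """
    c_in, k = w_dw.shape
    t_out = x.shape[1] - k + 1
    dw = np.zeros((c_in, t_out))
    for c in range(c_in):
        for t in range(t_out):
            dw[c, t] = np.dot(w_dw[c], x[c, t:t + k])
    # Pointwise step: mix channels with a (C_out, C_in) matrix at each time step.
    return w_pw @ dw

# Hypothetical layer sizes, chosen only to show the parameter savings.
c_in, c_out, k = 32, 64, 7
params_std = c_out * c_in * k         # 14336 weights
params_sep = c_in * k + c_out * c_in  # 2272 weights, ~6.3x fewer
print(params_std, params_sep)
```

The separable form factorizes the (C_out, C_in, K) weight tensor into a per-channel temporal filter plus a channel-mixing matrix, which is where both the parameter and multiply-accumulate savings on a microcontroller come from.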
Related papers
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- RLEEGNet: Integrating Brain-Computer Interfaces with Adaptive AI for Intuitive Responsiveness and High-Accuracy Motor Imagery Classification [0.0]
We introduce a framework that leverages Reinforcement Learning with Deep Q-Networks (DQN) for classification tasks.
We present a preprocessing technique for multiclass motor imagery (MI) classification in a One-Versus-The-Rest (OVR) manner.
The integration of DQN with a 1D-CNN-LSTM architecture optimizes the decision-making process in real time.
arXiv Detail & Related papers (2024-02-09T02:03:13Z)
- NAC-TCN: Temporal Convolutional Networks with Causal Dilated Neighborhood Attention for Emotion Understanding [60.74434735079253]
We propose a method known as Neighborhood Attention with Convolutions TCN (NAC-TCN).
We accomplish this by introducing a causal version of Dilated Neighborhood Attention while incorporating it with convolutions.
Our model achieves comparable, better, or state-of-the-art performance relative to TCNs, TCAN, LSTMs, and GRUs while requiring fewer parameters on standard emotion recognition datasets.
arXiv Detail & Related papers (2023-12-12T18:41:30Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Training Integer-Only Deep Recurrent Neural Networks [3.1829446824051195]
We present a quantization-aware training method for obtaining a highly accurate integer-only recurrent neural network (iRNN).
Our approach supports layer normalization, attention, and an adaptive piecewise linear (PWL) approximation of activation functions.
The proposed method enables RNN-based language models to run on edge devices with a $2\times$ improvement in runtime.
arXiv Detail & Related papers (2022-12-22T15:22:36Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a computation can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- APNN-TC: Accelerating Arbitrary Precision Neural Networks on Ampere GPU Tensor Cores [19.516279899089735]
We introduce the first Arbitrary Precision Neural Network framework (APNN-TC) to fully exploit quantization benefits on Ampere GPU Tensor Cores.
APNN-TC supports arbitrary short bit-width computation with int1 compute primitives and XOR/AND operations.
It can achieve significant speedup over CUTLASS kernels on various NN models, such as ResNet and VGG.
arXiv Detail & Related papers (2021-06-23T05:39:34Z)
- SRDCNN: Strongly Regularized Deep Convolution Neural Network Architecture for Time-series Sensor Signal Classification Tasks [4.950427992960756]
We present SRDCNN, a Strongly Regularized Deep Convolutional Neural Network (DCNN) architecture for performing time-series classification tasks.
The novelty of the proposed approach is that the network weights are regularized by both L1 and L2 norm penalties.
arXiv Detail & Related papers (2020-07-14T08:42:39Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
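Several of the papers above (ATCN itself, NAC-TCN) build on the causal dilated convolution at the core of temporal convolutional networks. A minimal single-channel sketch, with hypothetical kernel size and dilation schedule, shows both the causality property and the receptive-field arithmetic:

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """Causal dilated 1D convolution over a single channel.

    The output at time t depends only on x[t], x[t-d], x[t-2d], ...
    (the input is left-padded with zeros), so no future samples leak in.
    """
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[j] * xp[t + j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# Receptive field of a stack of such layers with dilations 1, 2, 4, ...:
# R = 1 + sum over layers of (k - 1) * d, i.e. it grows exponentially
# with depth while the parameter count grows only linearly.
k, dilations = 3, [1, 2, 4, 8]
receptive_field = 1 + sum((k - 1) * d for d in dilations)
print(receptive_field)  # 31 past samples visible to each output
```

The exponential dilation schedule is what lets a shallow, cheap network cover a long history, which is the property the edge-oriented TCN variants above exploit.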
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.