SRDCNN: Strongly Regularized Deep Convolution Neural Network
Architecture for Time-series Sensor Signal Classification Tasks
- URL: http://arxiv.org/abs/2007.06909v1
- Date: Tue, 14 Jul 2020 08:42:39 GMT
- Title: SRDCNN: Strongly Regularized Deep Convolution Neural Network
Architecture for Time-series Sensor Signal Classification Tasks
- Authors: Arijit Ukil, Antonio Jara, Leandro Marin
- Abstract summary: We present SRDCNN: Strongly Regularized Deep Convolution Neural Network (DCNN) based deep architecture to perform time series classification tasks.
The novelty of the proposed approach is that the network weights are regularized by both L1 and L2 norm penalties.
- Score: 4.950427992960756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNN) have been successfully used to perform
classification and regression tasks, particularly in computer vision based
applications. Recently, owing to the widespread deployment of the Internet of
Things (IoT), classification tasks for time series data, specifically from
different sensors, have become of utmost importance. In this paper, we present
SRDCNN, a Strongly Regularized Deep Convolution Neural Network (DCNN) based
deep architecture, to perform time series classification tasks. The
novelty of the proposed approach is that the network weights are regularized by
both L1 and L2 norm penalties. Applied jointly, the two penalties address the
practical issues of a small number of training instances, the need for a
quicker training process, and overfitting: the L1 penalty sparsifies the
weight vectors while the L2 penalty controls the magnitudes of the weight
values. We compare the proposed method (SRDCNN) with relevant
state-of-the-art algorithms, including different DNNs, on publicly available
time series classification benchmark datasets (the UCR/UEA archive) and
demonstrate that the proposed method provides superior performance. We argue
that SRDCNN lends better generalization capability to the deep architecture
by tightly controlling the network parameters, combating the training
instance insufficiency problem of real-life time series sensor signals.
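As a minimal sketch of the core idea, assuming a Keras/TensorFlow implementation (the abstract does not specify one), the snippet below builds a small 1D convolutional classifier whose weights carry joint L1 and L2 penalties, as the abstract describes. The layer sizes, kernel widths, penalty coefficients, and input shape are illustrative assumptions, not the architecture reported in the paper.

```python
# Hypothetical sketch of the L1+L2 (elastic-net style) weight regularization
# described in the abstract; not the authors' exact SRDCNN architecture.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

SEQ_LEN, N_CHANNELS, N_CLASSES = 128, 1, 5    # assumed dataset dimensions
reg = regularizers.l1_l2(l1=1e-5, l2=1e-4)    # assumed penalty coefficients

model = tf.keras.Sequential([
    layers.Input(shape=(SEQ_LEN, N_CHANNELS)),
    # L1 drives many kernel weights to zero (sparsification of weight
    # vectors); L2 keeps the surviving weights small (value control).
    layers.Conv1D(64, kernel_size=8, padding="same", activation="relu",
                  kernel_regularizer=reg),
    layers.Conv1D(64, kernel_size=5, padding="same", activation="relu",
                  kernel_regularizer=reg),
    layers.GlobalAveragePooling1D(),
    layers.Dense(N_CLASSES, activation="softmax", kernel_regularizer=reg),
])

# Keras adds the weight penalties to the task loss automatically, so
# training minimizes: cross-entropy + l1 * sum|W| + l2 * sum(W^2).
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

With both penalties active, the effective objective is the elastic-net regularized loss L(W) = L_task(W) + lambda1 * sum|w| + lambda2 * sum(w^2), which is what lets a comparatively small training set constrain a deep network without severe overfitting.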
Related papers
- Adaptive Spiking Neural Networks with Hybrid Coding [0.0]
Spiking Neural Networks (SNNs) are more energy-efficient and effective than Artificial Neural Networks (ANNs).
Traditional SNNs use the same neurons when processing input data across different time steps, limiting their ability to integrate and utilize temporal information effectively.
This paper introduces a hybrid encoding approach that not only reduces the required time steps for training but also continues to improve the overall network performance.
arXiv Detail & Related papers (2024-08-22T13:58:35Z)
- SMORE: Similarity-based Hyperdimensional Domain Adaptation for Multi-Sensor Time Series Classification [17.052624039805856]
We propose SMORE, a novel resource-efficient domain adaptation (DA) algorithm for multi-sensor time series classification.
SMORE achieves on average 1.98% higher accuracy than state-of-the-art (SOTA) DNN-based DA algorithms with 18.81x faster training and 4.63x faster inference.
arXiv Detail & Related papers (2024-02-20T18:48:49Z)
- Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series [26.77596449192451]
Irregularly sampled time series are ubiquitous in several application domains, leading to sparse, not fully observed, and non-aligned observations.
Standard recurrent neural networks (RNNs) and convolutional neural networks (CNNs) assume regular spacing between observation times, which poses significant challenges for modeling irregular time series.
We parameterize convolutional layers by employing kernels that are explicit functions of time, accommodating irregular sampling.
arXiv Detail & Related papers (2023-08-06T21:10:30Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Properties and Potential Applications of Random Functional-Linked Types of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in deep structure.
This paper gives some insights into the properties of RFLNNs from the viewpoint of the frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z)
- Resurrecting Recurrent Neural Networks for Long Sequences [45.800920421868625]
Recurrent Neural Networks (RNNs) offer fast inference on long sequences but are hard to optimize and slow to train.
Deep state-space models (SSMs) have recently been shown to perform remarkably well on long sequence modeling tasks.
We show that careful design of deep RNNs using standard signal propagation arguments can recover the impressive performance of deep SSMs on long-range reasoning tasks.
arXiv Detail & Related papers (2023-03-11T08:53:11Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while utilizing substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
arXiv Detail & Related papers (2020-06-22T10:05:12Z)
- STDPG: A Spatio-Temporal Deterministic Policy Gradient Agent for Dynamic Routing in SDN [6.27420060051673]
Dynamic routing in software-defined networking (SDN) can be viewed as a centralized decision-making problem.
We propose a novel model-free framework for dynamic routing in SDN, referred to as the spatio-temporal deterministic policy gradient (STDPG) agent.
STDPG achieves better routing solutions in terms of average end-to-end delay.
arXiv Detail & Related papers (2020-04-21T07:19:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.