Classification of Long Sequential Data using Circular Dilated
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2201.02143v1
- Date: Thu, 6 Jan 2022 16:58:59 GMT
- Title: Classification of Long Sequential Data using Circular Dilated
Convolutional Neural Networks
- Authors: Lei Cheng, Ruslan Khalitov, Tong Yu, and Zhirong Yang
- Abstract summary: We propose a symmetric multi-scale architecture called Circular Dilated Convolutional Neural Network (CDIL-CNN).
Our model gives classification logits at all positions, and we can apply simple ensemble learning to achieve a better decision.
- Score: 10.014879130837912
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classification of long sequential data is an important Machine Learning task
and appears in many application scenarios. Recurrent Neural Networks,
Transformers, and Convolutional Neural Networks are three major techniques for
learning from sequential data. Among these methods, Temporal Convolutional
Networks (TCNs), which scale to very long sequences, have achieved
remarkable progress in time series regression. However, the performance of TCNs
for sequence classification is not satisfactory because they use a skewed
connection protocol and output classes only at the last position. Such asymmetry
restricts their performance on classification tasks that depend on the whole
sequence. In this work, we propose a symmetric multi-scale architecture called
Circular Dilated Convolutional Neural Network (CDIL-CNN), where every position
has an equal chance to receive information from other positions in the previous
layers. Our model gives classification logits at all positions, and we can
apply simple ensemble learning to achieve a better decision. We have tested
CDIL-CNN on various long sequential datasets. The experimental results show
that our method has superior performance over many state-of-the-art approaches.
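
The ideas in the abstract can be illustrated with a short PyTorch sketch: circular (wrap-around) padding keeps the connection pattern symmetric at every position, exponentially growing dilations give the multi-scale structure, and the per-position classification logits are averaged as a simple ensemble. The layer sizes, residual connections, and class names below are illustrative assumptions and not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class CircularDilatedBlock(nn.Module):
    """One circular dilated convolution: wrap-around padding makes the
    receptive field symmetric at every position, including the sequence ends."""

    def __init__(self, channels: int, dilation: int, kernel_size: int = 3):
        super().__init__()
        padding = dilation * (kernel_size - 1) // 2
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation, padding=padding,
                              padding_mode="circular")
        self.act = nn.ReLU()

    def forward(self, x):                   # x: (batch, channels, length)
        return self.act(self.conv(x)) + x   # residual connection (an assumption)


class CDILClassifier(nn.Module):
    """Multi-scale stack with dilations 1, 2, 4, ... and per-position logits
    that are averaged as a simple ensemble decision."""

    def __init__(self, in_channels, hidden, num_classes, num_layers=6):
        super().__init__()
        self.embed = nn.Conv1d(in_channels, hidden, kernel_size=1)
        self.blocks = nn.Sequential(
            *[CircularDilatedBlock(hidden, dilation=2 ** i) for i in range(num_layers)]
        )
        self.head = nn.Conv1d(hidden, num_classes, kernel_size=1)

    def forward(self, x):                    # x: (batch, in_channels, length)
        h = self.blocks(self.embed(x))
        logits_per_pos = self.head(h)        # classification logits at every position
        return logits_per_pos.mean(dim=-1)   # ensemble: average over positions


model = CDILClassifier(in_channels=1, hidden=32, num_classes=10)
logits = model(torch.randn(4, 1, 1024))      # -> shape (4, 10)
```

With kernel size 3 and dilation 2^i in layer i, the receptive field doubles per layer, so a stack of O(log L) such layers lets every position aggregate information from the whole circularly padded sequence.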
Related papers
- Time Elastic Neural Networks [2.1756081703276]
We introduce and detail an atypical neural network architecture called the time elastic neural network (teNN).
The novelty compared to classical neural network architectures is that it explicitly incorporates a time-warping ability.
We demonstrate that, during the training process, the teNN succeeds in reducing the number of neurons required within each cell.
arXiv Detail & Related papers (2024-05-27T09:01:30Z)
- Multi-class Temporal Logic Neural Networks [8.20828081284034]
Time-series data can represent the behaviors of autonomous systems, such as drones and self-driving cars.
We propose a method that combines neural networks representing Signal Temporal Logic (STL) specifications for multi-class classification of time-series data.
We evaluate our method on two datasets and compare it with state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-17T00:22:29Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as an ETF and fixed during training.
Our experimental results show that our method is able to achieve similar performance on image classification for balanced datasets.
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
The Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes a Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- SRDCNN: Strongly Regularized Deep Convolution Neural Network Architecture for Time-series Sensor Signal Classification Tasks [4.950427992960756]
We present SRDCNN, a Strongly Regularized Deep Convolution Neural Network (DCNN) based deep architecture for time-series classification tasks.
The novelty of the proposed approach is that the network weights are regularized by both L1 and L2 norm penalties; a minimal sketch of such a combined penalty is given after this list.
arXiv Detail & Related papers (2020-07-14T08:42:39Z)
- Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
arXiv Detail & Related papers (2020-06-22T10:05:12Z)
- Time Series Data Augmentation for Neural Networks by Time Warping with a Discriminative Teacher [17.20906062729132]
We propose a novel time series data augmentation called guided warping.
Guided warping exploits the element alignment properties of Dynamic Time Warping (DTW) and shapeDTW.
We evaluate the method on all 85 datasets in the 2015 UCR Time Series Archive with a deep convolutional neural network (CNN) and a recurrent neural network (RNN).
arXiv Detail & Related papers (2020-04-19T06:33:44Z)