A temporal-to-spatial deep convolutional neural network for
classification of hand movements from multichannel electromyography data
- URL: http://arxiv.org/abs/2007.10879v2
- Date: Wed, 19 Aug 2020 08:07:50 GMT
- Title: A temporal-to-spatial deep convolutional neural network for
classification of hand movements from multichannel electromyography data
- Authors: Adam Hartwell, Visakan Kadirkamanathan, Sean R. Anderson
- Abstract summary: We make the novel contribution of proposing and evaluating a design for the early processing layers in the deep CNN for multichannel sEMG.
We propose a novel temporal-to-spatial (TtS) CNN architecture, where the first layer performs convolution separately on each sEMG channel to extract temporal features.
We find that our novel TtS CNN design achieves 66.6% per-class accuracy on database 1, and 67.8% on database 2.
- Score: 0.14502611532302037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks (CNNs) are appealing for the purpose of
classification of hand movements from surface electromyography (sEMG) data
because they have the ability to perform automated person-specific feature
extraction from raw data. In this paper, we make the novel contribution of
proposing and evaluating a design for the early processing layers in the deep
CNN for multichannel sEMG. Specifically, we propose a novel temporal-to-spatial
(TtS) CNN architecture, where the first layer performs convolution separately
on each sEMG channel to extract temporal features. This is motivated by the
idea that sEMG signals in each channel are mediated by one or a small subset of
muscles, whose temporal activation patterns are associated with the signature
features of a gesture. The temporal layer captures these signature features for
each channel separately, which are then spatially mixed in successive layers to
recognise a specific gesture. A practical advantage is that this approach also
makes the CNN simple to design for different sample rates. We use NinaPro
database 1 (27 subjects and 52 movements + rest), sampled at 100 Hz, and
database 2 (40 subjects and 40 movements + rest), sampled at 2 kHz, to evaluate
our proposed CNN design. We benchmark against a feature-based support vector
machine (SVM) classifier, two CNNs from the literature, and an additional
standard design of CNN. We find that our novel TtS CNN design achieves 66.6%
per-class accuracy on database 1, and 67.8% on database 2, and that the TtS CNN
outperforms all other compared classifiers using a statistical hypothesis test
at the 2% significance level.
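The two-stage idea described in the abstract (per-channel temporal convolution first, spatial mixing across channels second) can be sketched as follows. This is a minimal NumPy illustration of the temporal-to-spatial structure only; the filter count, kernel length, and output width are illustrative assumptions, not the paper's reported configuration.

```python
import numpy as np

def temporal_conv(x, kernels):
    """Stage 1: convolve each sEMG channel separately along time.
    x: (channels, time); kernels: (n_filters, kernel_len)."""
    C, T = x.shape
    F, k = kernels.shape
    out = np.empty((C, F, T - k + 1))
    for c in range(C):                       # each channel handled independently
        for f in range(F):
            for t in range(T - k + 1):
                out[c, f, t] = x[c, t:t + k] @ kernels[f]
    return out

def spatial_mix(feats, weights):
    """Stage 2: mix per-channel temporal features across channels.
    feats: (channels, n_filters, time); weights: (n_out, channels * n_filters)."""
    C, F, T = feats.shape
    flat = feats.reshape(C * F, T)           # stack all channel features
    return weights @ flat                    # (n_out, time)

rng = np.random.default_rng(0)
emg = rng.standard_normal((10, 100))         # 10 channels, 100 samples (1 s at 100 Hz)
tk = rng.standard_normal((8, 5))             # 8 temporal filters of length 5 (assumed)
feats = temporal_conv(emg, tk)               # per-channel temporal features: (10, 8, 96)
w = rng.standard_normal((16, 10 * 8))        # spatial mixing weights (assumed size)
mixed = spatial_mix(feats, w)                # spatially mixed feature maps: (16, 96)
print(feats.shape, mixed.shape)
```

Because the first stage operates on each channel independently along time, adapting to a different sample rate only requires rescaling the temporal kernel length, which is the practical advantage the abstract notes.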
Related papers
- Few-shot Learning using Data Augmentation and Time-Frequency Transformation for Time Series Classification [6.830148185797109]
We propose a novel few-shot learning framework through data augmentation.
We also develop a sequence-spectrogram neural network (SSNN)
Our methodology demonstrates its applicability to few-shot problems in time series classification.
arXiv Detail & Related papers (2023-11-06T15:32:50Z)
- Lost Vibration Test Data Recovery Using Convolutional Neural Network: A Case Study [0.0]
This paper proposes a CNN algorithm, using the Alamosa Canyon Bridge as a real-structure case study.
Three different CNN models were considered to predict one and two malfunctioning sensors.
The accuracy of the model was increased by adding a convolutional layer.
arXiv Detail & Related papers (2022-04-11T23:24:03Z)
- Keypoint Message Passing for Video-based Person Re-Identification [106.41022426556776]
Video-based person re-identification (re-ID) is an important technique in visual surveillance systems which aims to match video snippets of people captured by different cameras.
Existing methods are mostly based on convolutional neural networks (CNNs), whose building blocks either process local neighbor pixels at a time, or, when 3D convolutions are used to model temporal information, suffer from the misalignment problem caused by person movement.
In this paper, we propose to overcome the limitations of normal convolutions with a human-oriented graph method. Specifically, features located at person joint keypoints are extracted and connected as a spatial-temporal graph.
arXiv Detail & Related papers (2021-11-16T08:01:16Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- A Two-Stage Approach to Device-Robust Acoustic Scene Classification [63.98724740606457]
A two-stage system based on fully convolutional neural networks (CNNs) is proposed to improve device robustness.
Our results show that the proposed ASC system attains a state-of-the-art accuracy on the development set.
Neural saliency analysis with class activation mapping gives new insights on the patterns learnt by our models.
arXiv Detail & Related papers (2020-11-03T03:27:18Z)
- Efficient Arabic emotion recognition using deep neural networks [21.379338888447602]
We implement two neural architectures to address the problem of emotion recognition from speech signal.
The first is an attention-based CNN-LSTM-DNN model; the second is a deep CNN model.
The results on an Arabic speech emotion recognition task show that our innovative approach can lead to significant improvements.
arXiv Detail & Related papers (2020-10-31T19:39:37Z)
- Video-based Facial Expression Recognition using Graph Convolutional Networks [57.980827038988735]
We introduce a Graph Convolutional Network (GCN) layer into a common CNN-RNN based model for video-based facial expression recognition.
We evaluate our method on three widely-used datasets, CK+, Oulu-CASIA and MMI, and also one challenging wild dataset AFEW8.0.
arXiv Detail & Related papers (2020-10-26T07:31:51Z)
- Deep learning for gravitational-wave data analysis: A resampling white-box approach [62.997667081978825]
We apply Convolutional Neural Networks (CNNs) to detect gravitational wave (GW) signals of compact binary coalescences, using single-interferometer data from LIGO detectors.
CNNs were quite precise at detecting noise but not sensitive enough to recall GW signals, meaning that CNNs are better suited to noise reduction than to generating GW triggers.
arXiv Detail & Related papers (2020-09-09T03:28:57Z)
- Exploring Deep Hybrid Tensor-to-Vector Network Architectures for Regression Based Speech Enhancement [53.47564132861866]
We find that a hybrid architecture, namely CNN-TT, is capable of maintaining a good quality performance with a reduced model parameter size.
CNN-TT is composed of several convolutional layers at the bottom for feature extraction to improve speech quality.
arXiv Detail & Related papers (2020-07-25T22:21:05Z)
- Multistream CNN for Robust Acoustic Modeling [17.155489701060542]
Multistream CNN is a novel neural network architecture for robust acoustic modeling in speech recognition tasks.
We show consistent improvements against Kaldi's best TDNN-F model across various data sets.
In terms of real-time factor, multistream CNN outperforms the baseline TDNN-F by 15%.
arXiv Detail & Related papers (2020-05-21T05:26:15Z)
- Human Activity Recognition using Multi-Head CNN followed by LSTM [1.8830374973687412]
This study presents a novel method to recognize human physical activities using CNN followed by LSTM.
By using the proposed method, we achieve state-of-the-art accuracy, which is comparable to traditional machine learning algorithms and other deep neural network algorithms.
arXiv Detail & Related papers (2020-02-21T14:29:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.