Segmentation and Classification of EMG Time-Series During Reach-to-Grasp Motion
- URL: http://arxiv.org/abs/2104.09627v1
- Date: Mon, 19 Apr 2021 20:41:06 GMT
- Title: Segmentation and Classification of EMG Time-Series During Reach-to-Grasp Motion
- Authors: Mo Han, Mehrshad Zandigohar, Mariusz P. Furmanek, Mathew Yarossi, Gunar Schirner, Deniz Erdogmus
- Abstract summary: We propose a framework for classifying EMG signals generated from continuous grasp movements with variations on dynamic arm/hand postures.
The proposed framework was evaluated in real time, with the accuracy variation over time presented.
- Score: 10.388787606334745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Electromyography (EMG) signals have been widely utilized in human-robot
interaction for extracting user hand and arm motion instructions. A major
challenge of online interaction with robots is reliable EMG recognition
from real-time data. However, previous studies mainly focused on using
steady-state EMG signals with a small number of grasp patterns to implement
classification algorithms, which is insufficient to generate robust control
regarding the dynamic muscular activity variation in practice. Introducing more
EMG variability during training and validation could enable better
dynamic-motion detection, but only limited research has focused on such
grasp-movement identification, and all existing assessments of non-static
EMG classification require supervised ground-truth labels of the movement
status. In this study, we propose a framework for classifying EMG signals
generated from continuous grasp movements with variations on dynamic arm/hand
postures, using an unsupervised motion status segmentation method. We collected
data from large gesture vocabularies with multiple dynamic motion phases to
encode the transitions from one intent to another based on common sequences of
the grasp movements. Two classifiers were constructed for identifying the
motion-phase label and grasp-type label, where the dynamic motion phases were
segmented and labeled in an unsupervised manner. The proposed framework was
evaluated in real time, with the accuracy variation over time presented, and
was shown to be efficient given the high degrees of freedom of the EMG data.
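The pipeline the abstract describes — unsupervised motion-phase segmentation followed by classification — can be sketched in a minimal, illustrative form. The sketch below assumes synthetic multi-channel EMG windows, per-channel RMS features, a small k-means for the unsupervised phase labels, and a nearest-centroid predictor standing in for the paper's phase classifier; none of the data, window sizes, or cluster counts are the authors' actual choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic EMG: three motion phases (rest, reach, grasp) with increasing
# muscle activity, 8 channels, 200-sample analysis windows. Purely
# illustrative data, not the paper's recordings.
def make_windows(phase, n, n_ch=8, win=200):
    scale = (0.05, 0.3, 0.8)[phase]          # phase-dependent EMG amplitude
    return rng.normal(0.0, scale, size=(n, n_ch, win))

X = np.concatenate([make_windows(p, 100) for p in (0, 1, 2)])
true_phase = np.repeat([0, 1, 2], 100)

# Per-channel RMS of each window, a common time-domain EMG feature.
feats = np.sqrt((X ** 2).mean(axis=2))       # shape (300, 8)

# Unsupervised motion-phase segmentation: a tiny k-means over the RMS
# features, initialized at the lowest/median/highest-activity windows so
# no cluster starts empty.
order = np.argsort(feats.sum(axis=1))
centers = feats[order[[0, len(order) // 2, -1]]]
for _ in range(50):
    d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
    phase_labels = d.argmin(axis=1)
    centers = np.stack([feats[phase_labels == j].mean(axis=0)
                        if np.any(phase_labels == j) else centers[j]
                        for j in range(3)])

# "Motion-phase classifier": nearest centroid in feature space, trained on
# the unsupervised cluster labels (a stand-in for the paper's classifier).
def predict_phase(window):
    f = np.sqrt((window ** 2).mean(axis=1))
    return int(np.linalg.norm(centers - f, axis=1).argmin())

# Agreement between the unsupervised clusters and the true phases; the
# clusters are unordered, so map each to its majority phase before scoring.
agree = sum((true_phase[phase_labels == j] ==
             np.bincount(true_phase[phase_labels == j]).argmax()).sum()
            for j in range(3))
agreement = agree / len(true_phase)
```

A second, grasp-type classifier would be trained the same way on supervised grasp labels; only the phase labels come from the unsupervised segmentation.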
Related papers
- FORS-EMG: A Novel sEMG Dataset for Hand Gesture Recognition Across Multiple Forearm Orientations [1.444899524297657]
The surface electromyography (sEMG) signal holds great potential in the research fields of gesture recognition and the development of robust prosthetic hands.
The sEMG signal is affected by physiological and dynamic factors such as forearm orientation, forearm displacement, limb position, etc.
In this paper, we have proposed a dataset of sEMG signals to evaluate common daily living hand gestures performed in three forearm orientations.
arXiv Detail & Related papers (2024-09-03T14:23:06Z)
- DyG-Mamba: Continuous State Space Modeling on Dynamic Graphs [59.434893231950205]
Dynamic graph learning aims to uncover evolutionary laws in real-world systems.
We propose DyG-Mamba, a new continuous state space model for dynamic graph learning.
We show that DyG-Mamba achieves state-of-the-art performance on most datasets.
arXiv Detail & Related papers (2024-08-13T15:21:46Z)
- Neuromorphic Vision-based Motion Segmentation with Graph Transformer Neural Network [4.386534439007928]
We propose a novel event-based motion segmentation algorithm using a Graph Transformer Neural Network, dubbed GTNN.
Our proposed algorithm processes event streams as 3D graphs and applies a series of nonlinear transformations to unveil local and global correlations between events.
We show that GTNN outperforms state-of-the-art methods in the presence of dynamic background variations, motion patterns, and multiple dynamic objects with varying sizes and velocities.
arXiv Detail & Related papers (2024-04-16T22:44:29Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as Dancetrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- Weakly-supervised Action Transition Learning for Stochastic Human Motion Prediction [81.94175022575966]
We introduce the task of action-driven human motion prediction.
It aims to predict multiple plausible future motions given a sequence of action labels and a short motion history.
arXiv Detail & Related papers (2022-05-31T08:38:07Z)
- Unsupervised Motion Representation Learning with Capsule Autoencoders [54.81628825371412]
Motion Capsule Autoencoder (MCAE) models motion in a two-level hierarchy.
MCAE is evaluated on a novel Trajectory20 motion dataset and various real-world skeleton-based human action datasets.
arXiv Detail & Related papers (2021-10-01T16:52:03Z)
- Continuous Decoding of Daily-Life Hand Movements from Forearm Muscle Activity for Enhanced Myoelectric Control of Hand Prostheses [78.120734120667]
We introduce a novel method, based on a long short-term memory (LSTM) network, to continuously map forearm EMG activity onto hand kinematics.
Ours is the first reported work on the prediction of hand kinematics that uses this challenging dataset.
Our results suggest that the presented method is suitable for the generation of control signals for the independent and proportional actuation of the multiple DOFs of state-of-the-art hand prostheses.
arXiv Detail & Related papers (2021-04-29T00:11:32Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations in multi-modal data toward recognizing gestures.
Results show that our approach recovers the performance with great improvement gains, up to 12.91% in accuracy and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Affective Movement Generation using Laban Effort and Shape and Hidden Markov Models [6.181642248900806]
This paper presents an approach for automatic affective movement generation that makes use of two movement abstractions: 1) Laban movement analysis (LMA), and 2) hidden Markov modeling.
The LMA provides a systematic tool for an abstract representation of the kinematic and expressive characteristics of movements.
An HMM abstraction of the identified movements is obtained and used with the desired motion path to generate a novel movement that conveys the target emotion.
The efficacy of the proposed approach in generating movements with recognizable target emotions is assessed using a validated automatic recognition model and a user study.
arXiv Detail & Related papers (2020-06-10T21:24:26Z)
- Effect of Analysis Window and Feature Selection on Classification of Hand Movements Using EMG Signal [0.20999222360659603]
Recently, research on myoelectric control based on pattern recognition (PR) shows promising results with the aid of machine learning classifiers.
By offering multiple movement classes and intuitive control, this method has the potential to enable an amputee to perform everyday movements.
We show that effective data preprocessing and optimal feature selection help to improve the classification accuracy of hand movements.
arXiv Detail & Related papers (2020-02-02T19:03:23Z)
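The analysis-window idea in the last entry can be illustrated with a short sketch: classic time-domain EMG features (mean absolute value, RMS, waveform length, zero crossings) computed over overlapping analysis windows. The window length, step, and zero-crossing threshold below are illustrative choices, not values from the paper, and the EMG trace is synthetic.

```python
import numpy as np

def sliding_windows(x, win=150, step=50):
    """Split a 1-D EMG channel into overlapping analysis windows."""
    n = 1 + (len(x) - win) // step
    return np.stack([x[i * step : i * step + win] for i in range(n)])

def td_features(w, zc_thresh=1e-3):
    """Classic time-domain features for one EMG analysis window."""
    mav = np.abs(w).mean()                           # mean absolute value
    rms = np.sqrt((w ** 2).mean())                   # root mean square
    wl = np.abs(np.diff(w)).sum()                    # waveform length
    s = np.sign(w)
    zc = ((s[:-1] * s[1:] < 0) &                     # sign changes that also
          (np.abs(np.diff(w)) > zc_thresh)).sum()    # exceed a noise threshold
    return np.array([mav, rms, wl, zc], dtype=float)

rng = np.random.default_rng(1)
emg = rng.normal(0.0, 0.2, 4000)                     # synthetic single-channel EMG
F = np.stack([td_features(w) for w in sliding_windows(emg)])
# F is the (windows x features) matrix a classifier would be trained on.
```

Feature selection would then pick the subset of columns (or additional features) that best separates the movement classes for a given window length.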
This list is automatically generated from the titles and abstracts of the papers in this site.