Deep Learning of Movement Intent and Reaction Time for EEG-informed
Adaptation of Rehabilitation Robots
- URL: http://arxiv.org/abs/2002.08354v1
- Date: Tue, 18 Feb 2020 13:20:46 GMT
- Authors: Neelesh Kumar and Konstantinos P. Michmizos
- Abstract summary: Adaptation is a crucial mechanism for rehabilitation robots in promoting motor learning.
We propose a deep convolutional neural network (CNN) that uses electroencephalography (EEG) as an objective measurement of two kinematics components.
Our results demonstrate how individual movement components implicated in distinct types of motor learning can be predicted from synchronized EEG data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mounting evidence suggests that adaptation is a crucial mechanism for
rehabilitation robots in promoting motor learning. Yet, it is commonly based on
robot-derived movement kinematics, which is a rather subjective measurement of
performance, especially in the presence of a sensorimotor impairment. Here, we
propose a deep convolutional neural network (CNN) that uses
electroencephalography (EEG) as an objective measurement of two kinematics
components that are typically used to assess motor learning and thereby
adaptation: i) the intent to initiate a goal-directed movement, and ii) the
reaction time (RT) of that movement. We evaluated our CNN on data acquired from
an in-house experiment where 13 subjects moved a rehabilitation robotic arm in
four directions on a plane, in response to visual stimuli. Our CNN achieved
average test accuracies of 80.08% and 79.82% in a binary classification of the
intent (intent vs. no intent) and RT (slow vs. fast), respectively. Our results
demonstrate how individual movement components implicated in distinct types of
motor learning can be predicted from synchronized EEG data acquired before the
start of the movement. Our approach can, therefore, inform robotic adaptation
in real-time and has the potential to further improve one's ability to perform
the rehabilitation task.
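As a rough illustration of the kind of pre-movement EEG classifier the abstract describes, the sketch below runs a minimal temporal-convolution forward pass over a synthetic multi-channel EEG epoch and produces a binary intent probability. The layer sizes, filter lengths, and random weights are illustrative assumptions only, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pre-movement EEG epoch: 32 channels x 256 samples (~1 s at 256 Hz).
# Real data would be band-pass filtered and epoch-aligned to the visual stimulus.
eeg = rng.standard_normal((32, 256))

def conv1d_temporal(x, kernels):
    """Convolve each temporal kernel across time, shared over all channels."""
    n_ch, n_t = x.shape
    n_k, k_len = kernels.shape
    out = np.empty((n_k, n_ch, n_t - k_len + 1))
    for i, k in enumerate(kernels):
        for c in range(n_ch):
            # reversed kernel + "valid" mode gives a cross-correlation (conv layer)
            out[i, c] = np.convolve(x[c], k[::-1], mode="valid")
    return out

# Tiny illustrative CNN: temporal conv -> ReLU -> global average pool -> sigmoid readout.
kernels = rng.standard_normal((8, 16)) * 0.1   # 8 temporal filters of length 16
w_out   = rng.standard_normal(8 * 32) * 0.01   # readout weights (hypothetical)

features = np.maximum(conv1d_temporal(eeg, kernels), 0.0)  # ReLU
pooled   = features.mean(axis=2).ravel()                   # average over time
p_intent = 1.0 / (1.0 + np.exp(-(pooled @ w_out)))         # P(intent)

print(f"P(intent) = {p_intent:.3f}")
```

In practice the weights would be learned from labeled epochs (intent vs. no intent, slow vs. fast RT), and a second binary head over the same pooled features could serve the reaction-time classification.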
Related papers
- Modelling Human Visual Motion Processing with Trainable Motion Energy
Sensing and a Self-attention Network [1.9458156037869137]
We propose an image-computable model of human motion perception by bridging the gap between biological and computer vision models.
This model architecture aims to capture the computations in V1-MT, the core structure for motion perception in the biological visual system.
In silico neurophysiology reveals that our model's unit responses are similar to mammalian neural recordings regarding motion pooling and speed tuning.
arXiv Detail & Related papers (2023-05-16T04:16:07Z)
- Deep learning-based approaches for human motion decoding in smart
walkers for rehabilitation [3.8791511769387634]
Smart walkers should be able to decode human motion and needs, as early as possible.
Current walkers decode motion intention using information from wearable or embedded sensors.
A contactless approach is proposed, addressing human motion decoding as an early action recognition/detection problem.
arXiv Detail & Related papers (2023-01-13T14:29:44Z)
- A Neural Active Inference Model of Perceptual-Motor Learning [62.39667564455059]
The active inference framework (AIF) is a promising new computational framework grounded in contemporary neuroscience.
In this study, we test the ability for the AIF to capture the role of anticipation in the visual guidance of action in humans.
We present a novel formulation of the prior function that maps a multi-dimensional world-state to a uni-dimensional distribution of free-energy.
arXiv Detail & Related papers (2022-11-16T20:00:38Z)
- Motor imagery classification using EEG spectrograms [9.05607520128194]
Limb movement imagination (MI) can be significant for a brain-computer interface (BCI) system.
Using MI detection through electroencephalography (EEG), we can recognize the imagination of movement in a user.
In this paper, we utilize pre-trained deep learning (DL) algorithms for the classification of imagined upper limb movements.
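For context, an EEG spectrogram of the kind fed to such pre-trained networks can be computed with a short-time Fourier transform. The numpy-only sketch below is a minimal illustration; the window length, hop size, and synthetic signal are assumptions, not the paper's preprocessing.

```python
import numpy as np

def stft_spectrogram(signal, win_len=64, hop=32):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    # rfft keeps the non-negative frequency bins: win_len // 2 + 1 of them.
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, time_frames)

rng = np.random.default_rng(1)
eeg_channel = rng.standard_normal(512)   # ~2 s of single-channel EEG at 256 Hz (synthetic)
spec = stft_spectrogram(eeg_channel)

print(spec.shape)  # (33, 15): 33 frequency bins x 15 time frames
```

Stacking such per-channel time-frequency images yields the 2D inputs that pre-trained image CNNs expect.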
arXiv Detail & Related papers (2022-11-15T17:57:17Z)
- Skeleton2Humanoid: Animating Simulated Characters for
Physically-plausible Motion In-betweening [59.88594294676711]
Modern deep learning based motion synthesis approaches barely consider the physical plausibility of synthesized motions.
We propose a system, Skeleton2Humanoid, which performs physics-oriented motion correction at test time.
Experiments on the challenging LaFAN1 dataset show our system can outperform prior methods significantly in terms of both physical plausibility and accuracy.
arXiv Detail & Related papers (2022-10-09T16:15:34Z)
- From Motion to Muscle [0.0]
We show that muscle activity can be artificially generated based on motion features such as position, velocity, and acceleration.
The model achieves remarkable precision for previously trained movements and maintains significantly high precision for new movements that have not been previously trained.
arXiv Detail & Related papers (2022-01-27T13:30:17Z)
- Measuring and modeling the motor system with machine learning [117.44028458220427]
The use of machine learning to understand the motor system promises a revolution in how we collect, measure, and analyze data.
We discuss the growing use of machine learning: from pose estimation, kinematic analyses, dimensionality reduction, and closed-loop feedback, to its use in understanding neural correlates and untangling sensorimotor systems.
arXiv Detail & Related papers (2021-03-22T12:42:16Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised
Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations in multi-modal data for gesture recognition.
Results show that our approach recovers performance with large gains, up to 12.91% in accuracy and 20.16% in F1-score, without using any annotations from the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show cost-sensitive active learning has similar accuracy to the standard active learning approach, while roughly halving the executed movements.
arXiv Detail & Related papers (2021-01-26T16:01:02Z)
- Deep learning-based classification of fine hand movements from low
frequency EEG [5.414308305392762]
The classification of different fine hand movements from EEG signals represents a relevant research challenge.
We trained and tested a newly proposed convolutional neural network (CNN).
The CNN achieved good performance on both datasets, similar or superior to the baseline models.
arXiv Detail & Related papers (2020-11-13T07:16:06Z)
- Motion Pyramid Networks for Accurate and Efficient Cardiac Motion
Estimation [51.72616167073565]
We propose Motion Pyramid Networks, a novel deep learning-based approach for accurate and efficient cardiac motion estimation.
We predict and fuse a pyramid of motion fields from multiple scales of feature representations to generate a more refined motion field.
We then use a novel cyclic teacher-student training strategy to make the inference end-to-end and further improve the tracking performance.
arXiv Detail & Related papers (2020-06-28T21:03:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.