User Training with Error Augmentation for Electromyogram-based Gesture Classification
- URL: http://arxiv.org/abs/2309.07289v3
- Date: Fri, 22 Mar 2024 21:11:15 GMT
- Title: User Training with Error Augmentation for Electromyogram-based Gesture Classification
- Authors: Yunus Bicer, Niklas Smedemark-Margulies, Basak Celik, Elifnur Sunger, Ryan Orendorff, Stephanie Naufel, Tales Imbiriba, Deniz Erdoğmuş, Eugene Tunik, Mathew Yarossi
- Abstract summary: We designed and tested a system for real-time control of a user interface by extracting surface electromyographic (sEMG) activity from eight electrodes in a wrist-band configuration.
sEMG data were streamed into a machine-learning algorithm that classified hand gestures in real-time.
- Score: 4.203816772270161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We designed and tested a system for real-time control of a user interface by extracting surface electromyographic (sEMG) activity from eight electrodes in a wrist-band configuration. sEMG data were streamed into a machine-learning algorithm that classified hand gestures in real-time. After an initial model calibration, participants were presented with one of three types of feedback during a human-learning stage: veridical feedback, in which predicted probabilities from the gesture classification algorithm were displayed without alteration; modified feedback, in which we applied a hidden augmentation of error to these probabilities; and no feedback. User performance was then evaluated in a series of minigames, in which subjects were required to use eight gestures to manipulate their game avatar to complete a task. Experimental results indicated that, relative to baseline, the modified feedback condition led to significantly improved accuracy and improved gesture class separation. These findings suggest that real-time feedback in a gamified user interface with manipulation of feedback may enable intuitive, rapid, and accurate task acquisition for sEMG-based gesture recognition applications.
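The paper's core manipulation is a hidden error augmentation applied to the classifier's predicted probabilities before they are displayed as feedback. The snippet below is a minimal, hypothetical sketch of one way such a manipulation could look; the function name, the `strength` parameter, and the scale-and-renormalize scheme are illustrative assumptions, not the authors' published method.

```python
import numpy as np

def augment_error(probs: np.ndarray, target: int, strength: float = 0.5) -> np.ndarray:
    """Hypothetical error augmentation: shrink the displayed probability of the
    cued target gesture and renormalize, so the feedback shown to the user looks
    less confident than the classifier's true output. `strength` in [0, 1) is an
    illustrative parameter, not a value from the paper."""
    shown = probs.copy()
    shown[target] *= (1.0 - strength)   # exaggerate the apparent error
    return shown / shown.sum()          # keep a valid probability distribution

# Toy usage: an 8-gesture softmax output with the cued gesture at index 2.
probs = np.array([0.05, 0.05, 0.60, 0.05, 0.05, 0.05, 0.05, 0.10])
print(augment_error(probs, target=2))
```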
Related papers
- Reciprocal Learning of Intent Inferral with Augmented Visual Feedback for Stroke [2.303526979876375]
We propose a bidirectional paradigm that facilitates human adaptation to an intent inferral classifier.
We demonstrate this paradigm in the context of controlling a robotic hand orthosis for stroke.
Our experiments with stroke subjects show reciprocal learning improving performance in a subset of subjects without negatively impacting performance in the others.
arXiv Detail & Related papers (2024-12-10T22:49:36Z)
- Multi-Modal Self-Supervised Learning for Surgical Feedback Effectiveness Assessment [66.6041949490137]
We propose a method that integrates information from transcribed verbal feedback and corresponding surgical video to predict feedback effectiveness.
Our findings show that both transcribed feedback and surgical video are individually predictive of trainee behavior changes.
Our results demonstrate the potential of multi-modal learning to advance the automated assessment of surgical feedback.
arXiv Detail & Related papers (2024-11-17T00:13:00Z)
- Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation [6.782362178252351]
We introduce the Latent Embedding Exploitation (LEE) mechanism in our replay-based Few-Shot Continual Learning framework.
Our method produces a diversified latent feature space by leveraging a preserved latent embedding known as gesture prior knowledge.
Our method helps motor-impaired persons leverage wearable devices, and their unique styles of movement can be learned and applied.
arXiv Detail & Related papers (2024-05-14T21:20:27Z)
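The LEE abstract above gives no implementation detail, so the following is a loose sketch, under stated assumptions, of how preserved latent embeddings (the "gesture prior knowledge") might be replayed alongside a new user's few-shot samples; the nearest-centroid classifier, the `mix` weight, and all array shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Preserved latent embeddings per gesture class ("gesture prior knowledge").
# The 16-dim latents and per-class means below are illustrative assumptions.
prior_latents = {g: rng.normal(loc=g, scale=1.0, size=(50, 16)) for g in range(3)}

def adapt_centroids(prior, few_shot, mix=0.5):
    """Blend class centroids of preserved (replayed) latents with few-shot
    latents from a new user; `mix` is a hypothetical weighting parameter."""
    centroids = {}
    for g, old in prior.items():
        old_c = old.mean(axis=0)
        new = few_shot.get(g)
        centroids[g] = old_c if new is None else (1 - mix) * old_c + mix * new.mean(axis=0)
    return centroids

def classify(z, centroids):
    # Nearest-centroid decision in latent space (an assumed classifier head).
    return min(centroids, key=lambda g: np.linalg.norm(z - centroids[g]))

# Toy usage: a new user's few-shot latents are shifted versions of the priors.
few_shot = {g: rng.normal(loc=g + 0.5, scale=1.0, size=(5, 16)) for g in range(3)}
centroids = adapt_centroids(prior_latents, few_shot)
print(classify(rng.normal(loc=2.5, scale=1.0, size=16), centroids))
```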
- A Deep Learning Sequential Decoder for Transient High-Density Electromyography in Hand Gesture Recognition Using Subject-Embedded Transfer Learning [11.170031300110315]
Hand gesture recognition (HGR) has gained significant attention due to the increasing use of AI-powered human-computer interfaces.
These interfaces have a range of applications, including the control of extended reality, agile prosthetics, and exoskeletons.
arXiv Detail & Related papers (2023-09-23T05:32:33Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time series signals and identify three features that can represent 5 fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps [100.72245315180433]
We present a reconfigurable data glove design to capture different modes of human hand-object interactions.
The glove operates in three modes for various downstream tasks with distinct features.
We evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses.
arXiv Detail & Related papers (2023-01-14T05:35:50Z)
- Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art performance.
arXiv Detail & Related papers (2022-09-21T02:33:07Z)
- Action-Specific Perception & Performance on a Fitts's Law Task in Virtual Reality: The Role of Haptic Feedback [8.993666948179644]
Action-Specific Perception (ASP) theory postulates that an individual's performance on a task modulates that individual's spatial and time perception pertinent to the task's components and procedures.
This paper examines the association between performance and perception and the potential effects that tactile feedback modalities could generate.
arXiv Detail & Related papers (2022-07-15T11:07:15Z)
- Teaching Robots to Grasp Like Humans: An Interactive Approach [3.3836709236378746]
This work investigates how the intricate task of grasping may be learned from humans based on demonstrations and corrections.
Rather than training a person to provide better demonstrations, non-expert users are provided with the ability to interactively modify the dynamics of their initial demonstration.
arXiv Detail & Related papers (2021-10-09T10:27:50Z)
- Assisted Perception: Optimizing Observations to Communicate State [112.40598205054994]
We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairments.
We synthesize new observations that lead to more accurate internal state estimates when processed by the user.
arXiv Detail & Related papers (2020-08-06T19:08:05Z)
- Effect of Analysis Window and Feature Selection on Classification of Hand Movements Using EMG Signal [0.20999222360659603]
Recently, research on myoelectric control based on pattern recognition (PR) shows promising results with the aid of machine learning classifiers.
By offering multiple classes of movement and intuitive control, this method has the potential to enable amputees to perform movements of everyday life.
We show that effective data preprocessing and optimal feature selection help to improve the classification accuracy of hand movements.
arXiv Detail & Related papers (2020-02-02T19:03:23Z)
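The analysis-window study above concerns windowed feature extraction for EMG pattern recognition. As a hedged illustration, the sketch below computes three classic time-domain EMG features (mean absolute value, waveform length, zero crossings) over sliding analysis windows; the window length, step, and feature set are common choices in the EMG literature, not necessarily the exact ones evaluated in the paper.

```python
import numpy as np

def emg_window_features(x: np.ndarray, fs: int = 1000, win_ms: int = 200,
                        step_ms: int = 50, eps: float = 0.01) -> np.ndarray:
    """Slide an analysis window over a single-channel EMG signal and compute
    three classic time-domain features: mean absolute value (MAV), waveform
    length (WL), and zero crossings (ZC). All sizes are illustrative."""
    win, step = int(fs * win_ms / 1000), int(fs * step_ms / 1000)
    feats = []
    for start in range(0, len(x) - win + 1, step):
        w = x[start:start + win]
        mav = np.mean(np.abs(w))                 # average rectified amplitude
        wl = np.sum(np.abs(np.diff(w)))          # cumulative waveform length
        # Sign changes whose amplitude step exceeds a small noise threshold.
        zc = np.sum((w[:-1] * w[1:] < 0) & (np.abs(w[:-1] - w[1:]) > eps))
        feats.append((mav, wl, zc))
    return np.array(feats)

# Toy usage on one second of synthetic EMG-like noise at 1 kHz.
print(emg_window_features(np.random.default_rng(1).normal(size=1000)).shape)
```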
- Facial Feedback for Reinforcement Learning: A Case Study and Offline Analysis Using the TAMER Framework [51.237191651923666]
We investigate the potential of an agent learning from trainers' facial expressions by interpreting them as evaluative feedback.
Using a purpose-designed CNN-RNN model, our analysis shows that instructing trainers to use facial expressions and competition can improve the accuracy of estimating positive and negative feedback.
Our results with a simulation experiment show that learning solely from predicted feedback based on facial expressions is possible.
arXiv Detail & Related papers (2020-01-23T17:50:57Z)
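The TAMER entry above mentions a CNN-RNN model for estimating positive and negative feedback from facial expressions. A minimal PyTorch sketch of that general architecture follows, assuming grayscale frame sequences; the layer sizes and two-class head are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class FacialFeedbackNet(nn.Module):
    """Hedged sketch of a CNN-RNN feedback estimator: a small per-frame CNN
    feeds a GRU over the frame sequence, and the final hidden state is mapped
    to positive/negative feedback logits."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                    # -> 16 * 4 * 4 = 256 features
        )
        self.rnn = nn.GRU(256, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)     # positive vs. negative feedback

    def forward(self, frames):               # frames: (batch, time, 1, H, W)
        b, t = frames.shape[:2]
        z = self.cnn(frames.flatten(0, 1)).view(b, t, -1)  # per-frame features
        _, h = self.rnn(z)                    # summarize the sequence
        return self.head(h[-1])

# Toy usage: 2 clips of 10 grayscale 32x32 frames -> logits of shape (2, 2).
print(FacialFeedbackNet()(torch.randn(2, 10, 1, 32, 32)).shape)
```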
This list is automatically generated from the titles and abstracts of the papers on this site.