Reciprocal Learning of Intent Inferral with Augmented Visual Feedback for Stroke
- URL: http://arxiv.org/abs/2412.07956v1
- Date: Tue, 10 Dec 2024 22:49:36 GMT
- Title: Reciprocal Learning of Intent Inferral with Augmented Visual Feedback for Stroke
- Authors: Jingxi Xu, Ava Chen, Lauren Winterbottom, Joaquin Palacios, Preethika Chivukula, Dawn M. Nilsen, Joel Stein, Matei Ciocarlie
- Abstract summary: We propose a bidirectional paradigm that facilitates human adaptation to an intent inferral classifier.
We demonstrate this paradigm in the context of controlling a robotic hand orthosis for stroke.
Our experiments with stroke subjects show reciprocal learning improving performance in a subset of subjects without negatively impacting performance on the others.
- Score: 2.303526979876375
- License:
- Abstract: Intent inferral, the process by which a robotic device predicts a user's intent from biosignals, offers an effective and intuitive way to control wearable robots. Classical intent inferral methods treat biosignal inputs as unidirectional ground truths for training machine learning models, where the internal state of the model is not directly observable by the user. In this work, we propose reciprocal learning, a bidirectional paradigm that facilitates human adaptation to an intent inferral classifier. Our paradigm consists of iterative, interwoven stages that alternate between updating machine learning models and guiding human adaptation with the use of augmented visual feedback. We demonstrate this paradigm in the context of controlling a robotic hand orthosis for stroke, where the device predicts open, close, and relax intents from electromyographic (EMG) signals and provides appropriate assistance. We use LED progress-bar displays to communicate to the user the predicted probabilities for open and close intents by the classifier. Our experiments with stroke subjects show reciprocal learning improving performance in a subset of subjects (two out of five) without negatively impacting performance on the others. We hypothesize that, during reciprocal learning, subjects can learn to reproduce more distinguishable muscle activation patterns and generate more separable biosignals.
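As a concrete illustration of the paradigm, here is a minimal Python sketch of one reciprocal learning session. Several details are assumptions not taken from the abstract: a logistic-regression classifier stands in for the intent inferral model, the EMG features are synthetic, and collect_emg_window and set_led_bars are hypothetical placeholders for device I/O. Each stage alternates machine adaptation (retraining on newly collected windows) with human adaptation (LED progress-bar feedback of the predicted open and close probabilities).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

LABELS = ["open", "close", "relax"]
N_LEDS = 10  # LEDs per progress bar

def collect_emg_window(rng, intent):
    """Placeholder for device I/O: one feature vector extracted from a
    window of multi-channel EMG, with a toy per-intent offset so the
    classes are learnable in this synthetic example."""
    return rng.normal(size=8) + 2.0 * intent

def set_led_bars(p_open, p_close):
    """Placeholder for driving the two LED progress bars."""
    for name, p in (("open ", p_open), ("close", p_close)):
        lit = int(round(p * N_LEDS))
        print(f"{name} bar: [{'#' * lit}{'.' * (N_LEDS - lit)}] p={p:.2f}")

def reciprocal_learning_session(n_stages=3, windows_per_stage=30, seed=0):
    rng = np.random.default_rng(seed)
    clf = LogisticRegression(max_iter=1000)
    X, y = [], []
    for stage in range(n_stages):
        # (1) Collect labeled EMG windows while cueing the subject.
        for _ in range(windows_per_stage):
            intent = int(rng.integers(len(LABELS)))
            X.append(collect_emg_window(rng, intent))
            y.append(intent)
        # (2) Machine adaptation: retrain the classifier on all data so far.
        clf.fit(np.array(X), np.array(y))
        # (3) Human adaptation: stream the classifier's predicted
        #     probabilities back to the subject on the LED bars.
        probs = clf.predict_proba(collect_emg_window(rng, 0)[None, :])[0]
        p = dict(zip(clf.classes_, probs))
        print(f"stage {stage}:")
        set_led_bars(p[LABELS.index("open")], p[LABELS.index("close")])

reciprocal_learning_session()
```

In the actual system it is the subject, not a synthetic offset, that changes between stages, as they learn to produce more separable muscle activation patterns.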
Related papers
- Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction (MPI).
Experimental results demonstrate that MPI improves on the previous state of the art by 10% to 64% on real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z)
- Enhancing Robot Learning through Learned Human-Attention Feature Maps [6.724036710994883]
We posit that embedding auxiliary information about human focus points into robot learning can enhance the efficiency and robustness of the learning process.
In this paper, we propose a novel approach to model and emulate human attention with an approximate prediction model.
We test our approach on two learning tasks - object detection and imitation learning.
arXiv Detail & Related papers (2023-08-29T14:23:44Z)
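One simple way to realize the idea above is to render the predicted focus point as a feature map and stack it onto the image channels fed to the learner. The following sketch is hypothetical; the Gaussian formulation and all names are assumptions, not the authors' implementation.

```python
import numpy as np

def attention_feature_map(h, w, focus_xy, sigma=10.0):
    """Render a predicted focus point as a 2D Gaussian heatmap."""
    ys, xs = np.mgrid[0:h, 0:w]
    fx, fy = focus_xy
    d2 = (xs - fx) ** 2 + (ys - fy) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2)).astype(np.float32)

def augment_with_attention(image, focus_xy):
    """Stack the attention map onto the RGB channels, giving an
    (H, W, 4) input for a downstream detection/imitation network."""
    h, w = image.shape[:2]
    att = attention_feature_map(h, w, focus_xy)
    return np.concatenate([image, att[..., None]], axis=-1)

# Toy usage: a 64x64 RGB frame with a predicted focus at (40, 20).
frame = np.zeros((64, 64, 3), dtype=np.float32)
x = augment_with_attention(frame, focus_xy=(40, 20))
print(x.shape)  # (64, 64, 4)
```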
- Visual Affordance Prediction for Guiding Robot Exploration [56.17795036091848]
We develop an approach for learning visual affordances for guiding robot exploration.
We use a Transformer-based model to learn a conditional distribution in the latent embedding space of a VQ-VAE.
We show how the trained affordance model can be used to guide exploration by acting as a goal-sampling distribution during visual goal-conditioned policy learning in robotic manipulation.
arXiv Detail & Related papers (2023-05-28T17:53:09Z)
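The goal-sampling mechanics of the entry above can be sketched compactly. The PyTorch illustration below is hypothetical, not the paper's architecture: a small Transformer encoder over stand-in context tokens produces logits over VQ-VAE codebook indices, and sampled codes serve as latent goals; all sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class AffordanceGoalSampler(nn.Module):
    """Toy stand-in: a Transformer predicts a distribution over VQ-VAE
    codebook indices; sampled codes serve as exploration goals."""

    def __init__(self, n_codes=512, d_model=64, n_tokens=16):
        super().__init__()
        # In practice the codebook would be the frozen VQ-VAE codes.
        self.codebook = nn.Embedding(n_codes, d_model)
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_codes)

    def forward(self, context_tokens):
        # context_tokens: (B, n_tokens, d_model) embedding of the scene
        h = self.encoder(context_tokens + self.pos)
        return self.head(h.mean(dim=1))  # (B, n_codes) logits

sampler = AffordanceGoalSampler()
ctx = torch.randn(2, 16, 64)  # stand-in embeddings of the current observation
logits = sampler(ctx)
idx = torch.distributions.Categorical(logits=logits).sample()  # goal code indices
goal_latent = sampler.codebook(idx)  # latent goals for a goal-conditioned policy
print(goal_latent.shape)  # torch.Size([2, 64])
```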
- Continually Learned Pavlovian Signalling Without Forgetting for Human-in-the-Loop Robotic Control [0.8258451067861933]
Pavlovian signalling is an approach for better modulating feedback in prostheses.
One challenge is that these prediction learners can forget previously learned predictions when a user begins to successfully act upon the delivered feedback.
This work contributes new insight into the challenges of providing learned predictive feedback from a prosthetic device.
arXiv Detail & Related papers (2023-05-16T15:37:16Z)
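Pavlovian signalling is typically built on temporally extended predictions learned by temporal-difference methods. The sketch below shows only that ingredient, under assumptions not in the abstract: a TD(0) learner with one-hot phase features estimates the discounted proximity of a repeating stimulus, the kind of prediction a prosthesis could convey as feedback.

```python
import numpy as np

def td0_pavlovian_prediction(signal, period=4, gamma=0.9, alpha=0.1):
    """Minimal TD(0) learner in the spirit of Pavlovian signalling:
    estimate the discounted future sum of a stimulus from a one-hot
    encoding of the phase within a repeating cycle. A hypothetical
    sketch, not the paper's learner."""
    w = np.zeros(period)                 # one weight per phase
    preds = np.zeros(len(signal) - 1)
    for t in range(len(signal) - 1):
        x, x_next = t % period, (t + 1) % period
        # TD error: cumulant + discounted next prediction - current one
        delta = signal[t + 1] + gamma * w[x_next] - w[x]
        w[x] += alpha * delta
        preds[t] = w[x]
    return preds, w

# Toy stimulus: contact occurs at the last step of every 4-step cycle.
signal = np.tile([0.0, 0.0, 0.0, 1.0], 100)
preds, w = td0_pavlovian_prediction(signal)
print(np.round(w, 2))  # prediction rises toward the phase just before contact
```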
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by exploiting temporal cues in videos and the inherent correlations across modalities for gesture recognition.
Results show that our approach recovers performance with substantial gains, up to 12.91% in accuracy and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online multi-modal graph network (MRG-Net) that dynamically integrates visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z)
- Predictive Modeling of Periodic Behavior for Human-Robot Symbiotic Walking [13.68799310875662]
We extend Interaction Primitives to periodic movement regimes, i.e., walking.
We show that this model is particularly well-suited for learning data-driven, customized models of human walking.
We also demonstrate how the same framework can be used to learn controllers for a robotic prosthesis.
arXiv Detail & Related papers (2020-05-27T03:30:48Z)
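Interaction Primitives represent demonstrated trajectories as weight distributions over phase-dependent basis functions, so a periodic regime calls for a periodic basis. The sketch below illustrates just the basis-regression ingredient, with a hypothetical Fourier basis and synthetic knee-angle data; the full method maintains a joint distribution over human and robot weights and conditions it on observations.

```python
import numpy as np

def fourier_basis(phase, n_harmonics=3):
    """Periodic basis features for a gait phase in [0, 2*pi)."""
    feats = [np.ones_like(phase)]
    for k in range(1, n_harmonics + 1):
        feats += [np.sin(k * phase), np.cos(k * phase)]
    return np.stack(feats, axis=-1)

# Toy "human knee angle" over two gait cycles, observed with noise.
rng = np.random.default_rng(0)
phase = np.linspace(0, 4 * np.pi, 200) % (2 * np.pi)
angle = 30 * np.sin(phase) + 10 * np.sin(2 * phase) + rng.normal(0, 1, 200)

# Fit basis weights by least squares; one such weight vector per
# demonstration would parameterize the primitive's weight distribution.
Phi = fourier_basis(phase)
w, *_ = np.linalg.lstsq(Phi, angle, rcond=None)

# Predict the angle at a queried phase, e.g., for a prosthesis controller.
query = np.array([np.pi / 2])
print(fourier_basis(query) @ w)  # approximately 30.0 at phase pi/2
```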
- Learning a generative model for robot control using visual feedback [7.171234436165255]
We introduce a novel formulation for incorporating visual feedback in controlling robots.
Inference in the model allows us to infer the robot state corresponding to target locations of the features.
We demonstrate the effectiveness of our method by executing grasping and tight-fit insertions on robots with inaccurate controllers.
arXiv Detail & Related papers (2020-03-10T00:34:01Z)
- On the interaction between supervision and self-play in emergent communication [82.290338507106]
We investigate the relationship between two categories of learning signals with the ultimate goal of improving sample efficiency.
We find that first training agents via supervised learning on human data followed by self-play outperforms the converse.
arXiv Detail & Related papers (2020-02-04T02:35:19Z)
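The ordering result above lends itself to a toy illustration. The following numpy sketch is hypothetical, not the paper's setup: a tabular speaker and listener in a Lewis-style signaling game are first trained with supervised updates on "human" object-message pairs, then fine-tuned by self-play with REINFORCE on communication success.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                      # objects == messages
S = np.zeros((N, N))       # speaker logits: object -> message
L = np.zeros((N, N))       # listener logits: message -> object

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Phase 1: supervised learning on "human" data (message = object identity).
for _ in range(2000):
    obj = rng.integers(N)
    S[obj] += 0.1 * (np.eye(N)[obj] - softmax(S[obj]))  # cross-entropy step
    L[obj] += 0.1 * (np.eye(N)[obj] - softmax(L[obj]))  # listener sees msg == obj

# Phase 2: self-play fine-tuning with REINFORCE on communication success.
for _ in range(2000):
    obj = rng.integers(N)
    msg = rng.choice(N, p=softmax(S[obj]))
    guess = rng.choice(N, p=softmax(L[msg]))
    r = 1.0 if guess == obj else 0.0
    S[obj] += 0.1 * r * (np.eye(N)[msg] - softmax(S[obj]))
    L[msg] += 0.1 * r * (np.eye(N)[guess] - softmax(L[msg]))

acc = np.mean([softmax(L[np.argmax(softmax(S[o]))]).argmax() == o for o in range(N)])
print(f"communication accuracy: {acc:.2f}")
```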
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)