I am Robot: Neuromuscular Reinforcement Learning to Actuate Human Limbs
through Functional Electrical Stimulation
- URL: http://arxiv.org/abs/2103.05349v1
- Date: Tue, 9 Mar 2021 10:58:51 GMT
- Title: I am Robot: Neuromuscular Reinforcement Learning to Actuate Human Limbs
through Functional Electrical Stimulation
- Authors: Nat Wannawas, Ali Shafti, A. Aldo Faisal
- Abstract summary: Functional Electrical Stimulation (FES) is an established technique for inducing muscle contraction by stimulating the skin above the muscle.
We present our Deep Reinforcement Learning approach to the control of human muscles with FES, using a recurrent neural network for dynamic state representation.
Our results show that our controller can learn to manipulate human muscles, applying appropriate levels of stimulation to achieve the given tasks while compensating for advancing muscle fatigue which arises throughout the tasks.
- Score: 5.066245628617513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human movement disorders or paralysis lead to the loss of control of muscle
activation and thus motor control. Functional Electrical Stimulation (FES) is
an established and safe technique for contracting muscles by stimulating the
skin above a muscle to induce its contraction. However, how to restore motor
abilities to human limbs through FES remains an open challenge, as it is
unclear how the stimulation should be controlled. We are taking a robotics
perspective on this problem, by developing robot learning algorithms that
control the ultimate humanoid robot, the human body, through electrical muscle
stimulation. Human muscles are not trivial to control as actuators due to their
force production being non-stationary as a result of fatigue and other internal
state changes, in contrast to robot actuators which are well-understood and
stationary over broad operation ranges. We present our Deep Reinforcement
Learning approach to the control of human muscles with FES, using a recurrent
neural network for dynamic state representation, to overcome the unobserved
elements of the behaviour of human muscles under external stimulation. We
demonstrate our technique both in neuromuscular simulations and
experimentally on a human. Our results show that our controller can learn to
manipulate human muscles, applying appropriate levels of stimulation to achieve
the given tasks while compensating for advancing muscle fatigue which arises
throughout the tasks. Additionally, our technique can learn quickly enough to
be implemented in real-world human-in-the-loop settings.
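The core difficulty the abstract describes is that muscle force production is non-stationary: fatigue reduces the force produced by a given stimulation level, so the controller must keep adapting. The toy sketch below is not the authors' RL controller; it uses a hypothetical linear fatigue model and a plain PI feedback loop (a stand-in for the learned policy) purely to illustrate why stimulation must rise over time to hold a target force. All class names, gains, and rates are illustrative assumptions.

```python
# Toy illustration (NOT the paper's model or method): a muscle whose
# force-producing gain decays with use, plus a PI controller that must
# increase stimulation over time to keep tracking a constant target force.

class FatiguingMuscle:
    """Hypothetical muscle: force = gain * stimulation; gain decays with use."""

    def __init__(self, gain=1.0, fatigue_rate=0.002):
        self.gain = gain              # force per unit stimulation
        self.fatigue_rate = fatigue_rate

    def step(self, stimulation):
        force = self.gain * stimulation
        # Fatigue: force-producing capacity decays with accumulated activity.
        self.gain *= (1.0 - self.fatigue_rate * stimulation)
        return force


def run_pi_control(target=0.5, steps=200, kp=0.5, ki=0.1):
    """Track a constant target force despite advancing fatigue."""
    muscle = FatiguingMuscle()
    stim, integral = 0.0, 0.0
    forces, stims = [], []
    for _ in range(steps):
        force = muscle.step(stim)
        error = target - force
        integral += error
        # Clamp stimulation to a normalized [0, 1] range.
        stim = min(1.0, max(0.0, kp * error + ki * integral))
        forces.append(force)
        stims.append(stim)
    return forces, stims


forces, stims = run_pi_control()
```

Running this, the force settles near the target while the stimulation level keeps climbing as the gain decays; in the paper's setting the dynamics are far richer and partially unobserved, which is why the authors use a recurrent network to carry a learned internal state rather than a hand-tuned feedback law.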
Related papers
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Reinforcement Learning-Based Bionic Reflex Control for Anthropomorphic Robotic Grasping exploiting Domain Randomization [0.4999814847776098]
We introduce an innovative bionic reflex control pipeline, leveraging reinforcement learning (RL)
Our proposed bionic reflex controller has been designed and tested on an anthropomorphic hand.
We anticipate that this autonomous, RL-based bionic reflex controller will catalyze the development of dependable and highly efficient robotic and prosthetic hands.
arXiv Detail & Related papers (2023-12-08T13:04:41Z)
- Natural and Robust Walking using Reinforcement Learning without Demonstrations in High-Dimensional Musculoskeletal Models [29.592874007260342]
Humans excel at robust bipedal walking in complex natural environments.
It is still not fully understood how the nervous system resolves the musculoskeletal redundancy to solve the multi-objective control problem.
arXiv Detail & Related papers (2023-09-06T13:20:31Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- MyoSuite -- A contact-rich simulation suite for musculoskeletal motor control [7.856809409051587]
MyoSuite is a suite of physiologically accurate biomechanical models of elbow, wrist, and hand, with physical contact capabilities.
We provide diverse motor-control challenges: from simple postural control to skilled hand-object interactions.
arXiv Detail & Related papers (2022-05-26T20:11:23Z)
- From Movement Kinematics to Object Properties: Online Recognition of Human Carefulness [112.28757246103099]
We show how a robot can infer online, from vision alone, whether or not the human partner is careful when moving an object.
We demonstrated that a humanoid robot could perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera.
The prompt recognition of movement carefulness from observing the partner's action will allow robots to adapt their actions on the object to show the same degree of care as their human partners.
arXiv Detail & Related papers (2021-09-01T16:03:13Z)
- Neuromechanics-based Deep Reinforcement Learning of Neurostimulation Control in FES cycling [1.933681537640272]
Functional Electrical Stimulation (FES) can restore motion to a paralysed person's muscles.
Current neurostimulation engineering still relies on 20th Century control approaches.
Deep Reinforcement Learning (RL) is developed for real-time adaptive neurostimulation of paralysed legs in FES cycling.
arXiv Detail & Related papers (2021-03-04T14:33:18Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Reinforcement Learning of Musculoskeletal Control from Functional Simulations [3.94716580540538]
In this work, a deep reinforcement learning (DRL) based inverse dynamics controller is trained to control muscle activations of a biomechanical model of the human shoulder.
Results are presented for a single-axis motion control of shoulder abduction for the task of following randomly generated angular trajectories.
arXiv Detail & Related papers (2020-07-13T20:20:01Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.