Neuromuscular Reinforcement Learning to Actuate Human Limbs through FES
- URL: http://arxiv.org/abs/2209.07849v1
- Date: Fri, 16 Sep 2022 10:41:01 GMT
- Title: Neuromuscular Reinforcement Learning to Actuate Human Limbs through FES
- Authors: Nat Wannawas, Ali Shafti, A.Aldo Faisal
- Abstract summary: Functional Electrical Stimulation (FES) is a technique to evoke muscle contraction through low-energy electrical signals.
Yet, how to apply FES to achieve desired movements remains an open challenge.
Here, we learn to control FES through Reinforcement Learning (RL) which can automatically customise the stimulation for the patients.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Functional Electrical Stimulation (FES) is a technique to evoke muscle
contraction through low-energy electrical signals. FES can animate paralysed
limbs. Yet, how to apply FES to achieve desired movements remains an open
challenge. This challenge is accentuated by the complexities of human bodies
and the non-stationarities of the muscles' responses. The former causes
difficulties in performing inverse dynamics, and the latter causes control
performance to degrade over extended periods of use. Here, we engage the
challenge via a data-driven approach. Specifically, we learn to control FES
through Reinforcement Learning (RL) which can automatically customise the
stimulation for the patients. However, RL typically assumes Markovian dynamics,
while FES control systems are non-Markovian because of the non-stationarities.
To deal with this problem, we use a recurrent neural network to create
Markovian state representations. We cast FES controls into RL problems and
train RL agents to control FES in different settings in both simulations and
the real world. The results show that our RL controllers maintain control
performance over long periods and have better stimulation characteristics than
PID controllers.
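The pipeline described in the abstract, a recurrent network that folds the observation history into an approximately Markovian state, followed by a policy that outputs a stimulation intensity, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tanh-RNN cell, the linear policy, and all dimensions and observation contents are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class RecurrentStateEncoder:
    """Summarises the observation history into a fixed-size hidden state so the
    downstream policy can treat the non-stationary FES plant as (approximately)
    Markovian. A plain tanh-RNN cell stands in for the paper's recurrent network."""

    def __init__(self, obs_dim, hidden_dim):
        self.W_in = rng.normal(0.0, 0.1, (hidden_dim, obs_dim))
        self.W_h = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.h = np.zeros(hidden_dim)

    def step(self, obs):
        self.h = np.tanh(self.W_in @ obs + self.W_h @ self.h)
        return self.h

class Policy:
    """Maps the encoded state to a stimulation intensity in [0, 1] (e.g. a
    normalised pulse width); a linear stand-in for a trained RL actor."""

    def __init__(self, hidden_dim):
        self.w = rng.normal(0.0, 0.1, hidden_dim)

    def act(self, state):
        return float(1.0 / (1.0 + np.exp(-self.w @ state)))  # squash to (0, 1)

# One (untrained) control episode on a synthetic observation stream;
# each observation is a hypothetical [joint angle, joint velocity, tracking error].
encoder = RecurrentStateEncoder(obs_dim=3, hidden_dim=16)
policy = Policy(hidden_dim=16)
stimulations = []
for t in range(50):
    obs = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 0.0])
    stimulations.append(policy.act(encoder.step(obs)))
```

In an RL training loop, the encoder's hidden state would serve as the agent's state, letting a standard Markovian RL algorithm be applied despite the plant's non-stationarity.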
Related papers
- SuperPADL: Scaling Language-Directed Physics-Based Control with Progressive Supervised Distillation [55.47473138423572]
We introduce SuperPADL, a scalable framework for physics-based text-to-motion.
SuperPADL trains controllers on thousands of diverse motion clips using RL and supervised learning.
Our controller is trained on a dataset containing over 5000 skills and runs in real time on a consumer GPU.
arXiv Detail & Related papers (2024-07-15T07:07:11Z) - Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z) - Towards AI-controlled FES-restoration of arm movements: Controlling for
progressive muscular fatigue with Gaussian state-space models [6.320141734801679]
Reinforcement Learning (RL) emerges as a promising approach to govern customised control rules for different settings.
Yet, one remaining challenge of controlling FES systems for RL is unobservable muscle fatigue.
We present a method that addresses the unobservable muscle fatigue issue, allowing our RL controller to achieve higher control performance.
arXiv Detail & Related papers (2023-01-10T14:51:55Z) - Towards AI-controlled FES-restoration of arm movements:
neuromechanics-based reinforcement learning for 3-D reaching [6.320141734801679]
Functional Electrical Stimulation (FES) can restore lost motor functions.
Neuromechanical models are valuable tools for developing FES control methods.
We present our approach toward FES-based restoration of arm movements.
arXiv Detail & Related papers (2023-01-10T14:50:37Z) - Skip Training for Multi-Agent Reinforcement Learning Controller for
Industrial Wave Energy Converters [94.84709449845352]
Recent Wave Energy Converters (WEC) are equipped with multiple legs and generators to maximize energy generation.
Traditional controllers have shown limitations in capturing complex wave patterns, and controllers must maximize energy capture efficiently.
This paper introduces a Multi-Agent Reinforcement Learning controller (MARL), which outperforms the traditionally used spring damper controller.
arXiv Detail & Related papers (2022-09-13T00:20:31Z) - Machine Learning for Mechanical Ventilation Control (Extended Abstract) [52.65490904484772]
Mechanical ventilation is one of the most widely used therapies in the ICU.
We frame these as a control problem: ventilators must let air in and out of the patient's lungs according to a prescribed trajectory of airway pressure.
Our data-driven approach learns to control an invasive ventilator by training on a simulator itself trained on data collected from the ventilator.
This method outperforms popular reinforcement learning algorithms and even controls the physical ventilator more accurately and robustly than PID.
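The PID baseline that both this ventilator work and the FES paper above benchmark against is the standard discrete-time loop sketched below. The gains and the first-order lag plant are illustrative assumptions, not values from either paper.

```python
class PID:
    """Textbook discrete-time PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (
            0.0 if self.prev_error is None
            else (error - self.prev_error) / self.dt
        )
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a first-order lag plant (a crude stand-in for any SISO plant,
# e.g. airway pressure or a joint angle) toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
y = 0.0
for _ in range(5000):  # 50 s of simulated time
    u = pid.update(1.0, y)
    y += 0.01 * (u - y)  # Euler step of y' = u - y
```

The learned controllers in both papers are compared against exactly this kind of fixed-gain loop, which cannot adapt when the plant drifts (e.g. under muscle fatigue).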
arXiv Detail & Related papers (2021-11-19T20:54:41Z) - Residual Reinforcement Learning from Demonstrations [51.56457466788513]
Residual reinforcement learning (RL) has been proposed as a way to solve challenging robotic tasks by adapting control actions from a conventional feedback controller to maximize a reward signal.
We extend the residual formulation to learn from visual inputs and sparse rewards using demonstrations.
Our experimental evaluation on simulated manipulation tasks on a 6-DoF UR5 arm and a 28-DoF dexterous hand demonstrates that residual RL from demonstrations is able to generalize to unseen environment conditions more flexibly than either behavioral cloning or RL fine-tuning.
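The residual formulation composes a conventional feedback controller with a learned correction; the core idea fits in a few lines. Both stand-in functions below are hypothetical placeholders for a real base controller and a trained residual policy.

```python
def residual_action(base_controller, residual_policy, obs):
    """Residual RL: total command = conventional feedback action
    plus a learned correction."""
    return base_controller(obs) + residual_policy(obs)

# Hypothetical stand-ins: a proportional controller on a scalar error,
# and a fixed function in place of a trained residual policy.
base = lambda err: -1.5 * err
learned = lambda err: 0.1 * err ** 2
u = residual_action(base, learned, 2.0)  # -3.0 + 0.4 = -2.6
```

Because the base controller already produces reasonable actions, the policy only has to learn a small correction, which is what makes learning from sparse rewards tractable.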
arXiv Detail & Related papers (2021-06-15T11:16:49Z) - Continuous Decoding of Daily-Life Hand Movements from Forearm Muscle
Activity for Enhanced Myoelectric Control of Hand Prostheses [78.120734120667]
We introduce a novel method, based on a long short-term memory (LSTM) network, to continuously map forearm EMG activity onto hand kinematics.
Ours is the first reported work on the prediction of hand kinematics that uses this challenging dataset.
Our results suggest that the presented method is suitable for the generation of control signals for the independent and proportional actuation of the multiple DOFs of state-of-the-art hand prostheses.
arXiv Detail & Related papers (2021-04-29T00:11:32Z) - I am Robot: Neuromuscular Reinforcement Learning to Actuate Human Limbs
through Functional Electrical Stimulation [5.066245628617513]
Functional Electrical Stimulation (FES) is an established technique for inducing muscle contraction by stimulating the skin above the muscle.
We present our Deep Reinforcement Learning approach to the control of human muscles with FES, using a recurrent neural network for dynamic state representation.
Our results show that our controller can learn to manipulate human muscles, applying appropriate levels of stimulation to achieve the given tasks while compensating for advancing muscle fatigue which arises throughout the tasks.
arXiv Detail & Related papers (2021-03-09T10:58:51Z) - Neuromechanics-based Deep Reinforcement Learning of Neurostimulation
Control in FES cycling [1.933681537640272]
Functional Electrical Stimulation (FES) can restore motion to a paralysed person's muscles.
Current neurostimulation engineering still relies on 20th Century control approaches.
We develop Deep Reinforcement Learning (RL) for real-time adaptive neurostimulation of paralysed legs in FES cycling.
arXiv Detail & Related papers (2021-03-04T14:33:18Z) - Reinforcement Learning of Musculoskeletal Control from Functional
Simulations [3.94716580540538]
In this work, a deep reinforcement learning (DRL) based inverse dynamics controller is trained to control muscle activations of a biomechanical model of the human shoulder.
Results are presented for a single-axis motion control of shoulder abduction for the task of following randomly generated angular trajectories.
arXiv Detail & Related papers (2020-07-13T20:20:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.