Neuromechanics-based Deep Reinforcement Learning of Neurostimulation
Control in FES cycling
- URL: http://arxiv.org/abs/2103.03057v1
- Date: Thu, 4 Mar 2021 14:33:18 GMT
- Title: Neuromechanics-based Deep Reinforcement Learning of Neurostimulation
Control in FES cycling
- Authors: Nat Wannawas, Mahendran Subramanian, A. Aldo Faisal
- Abstract summary: Functional Electrical Stimulation (FES) can restore motion to a paralysed person's muscles.
Current neurostimulation engineering still relies on 20th Century control approaches.
Deep Reinforcement Learning (RL) is developed for real-time adaptive neurostimulation of paralysed legs for FES cycling.
- Score: 1.933681537640272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Functional Electrical Stimulation (FES) can restore motion to a paralysed
person's muscles. Yet, controlling the stimulation of many muscles to restore the
practical function of entire limbs is an unsolved problem. Current
neurostimulation engineering still relies on 20th-century control approaches
and correspondingly shows only modest results that require daily tinkering to
operate at all. Here, we present our state-of-the-art Deep Reinforcement
Learning (RL) approach, developed for real-time adaptive neurostimulation of
paralysed legs for FES cycling. Core to our approach is the integration of a
personalised neuromechanical component into our reinforcement learning
framework, which allows us to train the model efficiently without demanding
extended training sessions with the patient, so that it works out of the box.
Our neuromechanical component merges musculoskeletal models of muscle and
tendon function with a multistate model of muscle fatigue, rendering the
neurostimulation responsive to a paraplegic cyclist's instantaneous muscle
capacity. Our RL approach
outperforms PID and Fuzzy Logic controllers in accuracy and performance.
Crucially, our system learned to stimulate a cyclist's legs from ramping up
speed at the start to maintaining a high cadence in steady-state racing as the
muscles fatigue. Part of our RL neurostimulation system was successfully
deployed at the Cybathlon 2020 bionic Olympics in the FES cycling discipline,
where our paraplegic cyclist won the silver medal among nine competing teams.
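The abstract describes training an RL controller against a personalised neuromechanical model that merges musculoskeletal dynamics with a multistate muscle-fatigue model. A minimal sketch of that idea, under stated assumptions: a toy one-dimensional cadence dynamic, a three-state fatigue model (resting / activated / fatigued) with made-up rate constants, and tabular Q-learning standing in for the paper's deep RL. The environment, dynamics, and all constants here are illustrative, not the authors' model.

```python
import random

class FESCyclingEnv:
    """Toy stand-in for a personalised neuromechanical model:
    cadence driven by stimulated muscle force, plus a three-state
    fatigue model (resting / activated / fatigued motor-unit fractions)."""

    def __init__(self, target_cadence=50.0):
        self.target = target_cadence
        self.reset()

    def reset(self):
        self.cadence = 0.0
        # fractions of motor units in each fatigue state; they sum to 1
        self.resting, self.activated, self.fatigued = 1.0, 0.0, 0.0
        return self._obs()

    def _obs(self):
        # discretise the cadence-tracking error into a small state index
        err = self.target - self.cadence
        return max(-5, min(5, int(err // 10)))

    def step(self, stim):
        """stim in {0, 1, 2}: discrete stimulation intensity."""
        u = stim / 2.0
        dt, f_rate, r_rate = 0.1, 0.05, 0.02  # assumed rate constants
        rec = u * self.resting        # stimulation recruits resting units
        fat = f_rate * self.activated # activated units fatigue
        res = r_rate * self.fatigued  # fatigued units recover
        # Euler step of the multistate fatigue model (total fraction conserved)
        self.resting += dt * (res - rec)
        self.activated += dt * (rec - fat)
        self.fatigued += dt * (fat - res)
        force = 100.0 * self.activated          # force from active, unfatigued muscle
        self.cadence += dt * (force - 0.5 * self.cadence)  # damped cadence dynamic
        reward = -abs(self.target - self.cadence)
        return self._obs(), reward

def train(episodes=200, steps=200, alpha=0.1, gamma=0.95, eps=0.1):
    """Epsilon-greedy tabular Q-learning over the discretised error state."""
    env = FESCyclingEnv()
    q = {}
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            q.setdefault(s, [0.0, 0.0, 0.0])
            a = random.randrange(3) if random.random() < eps else q[s].index(max(q[s]))
            s2, r = env.step(a)
            q.setdefault(s2, [0.0, 0.0, 0.0])
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q, env
```

Because every transfer between fatigue states appears with opposite signs in two update equations, the motor-unit fractions stay summed to one, which is what lets stimulation remain responsive to the remaining muscle capacity as fatigue accumulates.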
Related papers
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic
Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Natural and Robust Walking using Reinforcement Learning without Demonstrations in High-Dimensional Musculoskeletal Models [29.592874007260342]
Humans excel at robust bipedal walking in complex natural environments.
It is still not fully understood how the nervous system resolves the musculoskeletal redundancy to solve the multi-objective control problem.
arXiv Detail & Related papers (2023-09-06T13:20:31Z)
- Neuromuscular Reinforcement Learning to Actuate Human Limbs through FES [7.4769019455423855]
Functional Electrical Stimulation (FES) is a technique to evoke muscle contraction through low-energy electrical signals.
Yet, an open challenge remains on how to apply FES to achieve desired movements.
Here, we learn to control FES through Reinforcement Learning (RL) which can automatically customise the stimulation for the patients.
arXiv Detail & Related papers (2022-09-16T10:41:01Z)
- Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling [51.316408685035526]
Learning new tasks and skills in succession without losing prior learning is a computational challenge for both artificial and biological neural networks.
Here, we investigate how modeling three distinct components of mammalian sleep together affects continual learning in artificial neural networks.
arXiv Detail & Related papers (2022-09-09T13:45:27Z)
- MyoSuite -- A contact-rich simulation suite for musculoskeletal motor control [7.856809409051587]
MyoSuite is a suite of physiologically accurate biomechanical models of elbow, wrist, and hand, with physical contact capabilities.
We provide diverse motor-control challenges: from simple postural control to skilled hand-object interactions.
arXiv Detail & Related papers (2022-05-26T20:11:23Z)
- An Adiabatic Capacitive Artificial Neuron with RRAM-based Threshold Detection for Energy-Efficient Neuromorphic Computing [62.997667081978825]
We present an artificial neuron featuring adiabatic synapse capacitors to produce membrane potentials for the somas of neurons.
Our initial 4-bit adiabatic capacitive neuron proof-of-concept example shows 90% synaptic energy saving.
arXiv Detail & Related papers (2022-02-02T17:12:22Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- I am Robot: Neuromuscular Reinforcement Learning to Actuate Human Limbs through Functional Electrical Stimulation [5.066245628617513]
Functional Electrical Stimulation (FES) is an established technique for inducing muscle contraction by stimulating the skin above the muscle.
We present our Deep Reinforcement Learning approach to the control of human muscles with FES, using a recurrent neural network for dynamic state representation.
Our results show that our controller can learn to manipulate human muscles, applying appropriate levels of stimulation to achieve the given tasks while compensating for advancing muscle fatigue which arises throughout the tasks.
arXiv Detail & Related papers (2021-03-09T10:58:51Z)
- UniCon: Universal Neural Controller For Physics-based Character Motion [70.45421551688332]
We propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by learning on large-scale motion datasets.
UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills and teleport a person captured on video to a physics-based virtual avatar.
arXiv Detail & Related papers (2020-11-30T18:51:16Z)
- Reinforcement Learning of Musculoskeletal Control from Functional Simulations [3.94716580540538]
In this work, a deep reinforcement learning (DRL) based inverse dynamics controller is trained to control muscle activations of a biomechanical model of the human shoulder.
Results are presented for a single-axis motion control of shoulder abduction for the task of following randomly generated angular trajectories.
arXiv Detail & Related papers (2020-07-13T20:20:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences arising from its use.