Towards AI-controlled FES-restoration of arm movements: Controlling for
progressive muscular fatigue with Gaussian state-space models
- URL: http://arxiv.org/abs/2301.04005v1
- Date: Tue, 10 Jan 2023 14:51:55 GMT
- Authors: Nat Wannawas and A.Aldo Faisal
- Abstract summary: Reinforcement Learning (RL) has emerged as a promising approach for deriving customised control rules for different settings.
Yet, a remaining challenge in controlling FES systems with RL is unobservable muscle fatigue.
We present a method that addresses the unobservable muscle fatigue issue, allowing our RL controller to achieve higher control performance.
- Score: 6.320141734801679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reaching disability limits an individual's ability to perform daily tasks.
Surface Functional Electrical Stimulation (FES) offers a non-invasive solution
to restore lost ability. However, inducing desired movements using FES is still
an open engineering problem. This problem is accentuated by the complexity of
the human arm's neuromechanics and the variation across individuals.
Reinforcement Learning (RL) has emerged as a promising approach for deriving
customised control rules for different settings. Yet, a remaining challenge in
controlling FES systems with RL is unobservable muscle fatigue, which changes
progressively as an unknown function of the stimulation and thereby breaks the
Markovian assumption of RL. In this work, we present a method that addresses
the unobservable muscle fatigue issue, allowing our RL controller to achieve
higher control performance. Our method is based on a Gaussian State-Space
Model (GSSM) that
utilizes recurrent neural networks to learn Markovian state-spaces from partial
observations. The GSSM is used as a filter that converts the observations into
the state-space representation for RL to preserve the Markovian assumption.
Here, we first present a modification of the original GSSM that addresses an
overconfidence issue. We then present the interaction between RL and the
modified GSSM, followed by the setup for FES control learning. We test our
RL-GSSM system on a planar reaching setting in simulation using a detailed
neuromechanical model. The results show that the GSSM helps improve the RL
controller's performance to a level comparable to the ideal case in which the
fatigue is observable.
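The abstract does not include code; the following is a minimal sketch of the filtering idea, assuming a GRU-based recognition network, illustrative dimensions, and a simple variance floor standing in for the (unspecified) overconfidence fix.

```python
import torch
import torch.nn as nn

class GSSMFilter(nn.Module):
    """Recurrent recognition network mapping partial observations to a
    Gaussian belief over a latent, Markovian state (assumed architecture)."""
    def __init__(self, obs_dim, act_dim, latent_dim, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hidden_dim, batch_first=True)
        self.mean = nn.Linear(hidden_dim, latent_dim)
        self.log_std = nn.Linear(hidden_dim, latent_dim)

    def forward(self, obs, act, h=None):
        # obs: (batch, T, obs_dim); act: (batch, T, act_dim)
        out, h = self.rnn(torch.cat([obs, act], dim=-1), h)
        mu = self.mean(out)
        # Flooring the std is one plausible guard against overconfident
        # beliefs (the paper's actual modification is not detailed here).
        std = torch.exp(self.log_std(out)).clamp(min=1e-2)
        return mu, std, h

filt = GSSMFilter(obs_dim=6, act_dim=4, latent_dim=8)
obs = torch.randn(1, 10, 6)   # 10 steps of partial observations
act = torch.randn(1, 10, 4)   # 10 steps of stimulation commands
mu, std, _ = filt(obs, act)
state_for_rl = torch.cat([mu, std], dim=-1)   # shape (1, 10, 16)
```

Feeding the RL agent both the belief mean and its standard deviation means the filter's uncertainty about the unobserved fatigue is itself part of the Markovian state the policy conditions on.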
Related papers
- Symmetric Reinforcement Learning Loss for Robust Learning on Diverse Tasks and Model Scales [13.818149654692863]
Reinforcement learning (RL) training is inherently unstable due to factors such as moving targets and high gradient variance.
In this work, we improve the stability of RL training by adapting reverse cross entropy (RCE), originally used in supervised learning with noisy data, to define a symmetric RL loss.
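A minimal sketch of a symmetric loss in this spirit, combining standard cross entropy with reverse cross entropy as used in noisy-label learning; the weights alpha, beta and the log-zero clamp A are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def symmetric_ce_loss(logits, targets, alpha=1.0, beta=1.0, A=-4.0):
    """Cross entropy plus reverse cross entropy (RCE); alpha, beta, and A
    are illustrative, not values from the paper."""
    ce = F.cross_entropy(logits, targets)
    pred = F.softmax(logits, dim=-1)
    onehot = F.one_hot(targets, logits.size(-1)).float()
    # RCE swaps the roles of prediction and label; log(0) on the one-hot
    # label is clamped to the constant A to keep the term finite.
    log_label = torch.where(onehot > 0, torch.zeros_like(pred),
                            torch.full_like(pred, A))
    rce = -(pred * log_label).sum(dim=-1).mean()
    return alpha * ce + beta * rce

loss = symmetric_ce_loss(torch.randn(8, 5), torch.randint(0, 5, (8,)))
```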
arXiv Detail & Related papers (2024-05-27T19:28:33Z)
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model [119.65409513119963]
We introduce a new parameterization of the reward model in RLHF that enables extraction of the corresponding optimal policy in closed form.
The resulting algorithm, which we call Direct Preference Optimization (DPO), is stable, performant, and computationally lightweight.
Our experiments show that DPO can fine-tune LMs to align with human preferences as well as or better than existing methods.
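The DPO objective itself is compact; a minimal sketch follows, assuming summed per-response token log-probabilities are already computed under the policy and the frozen reference model (beta is illustrative).

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Logistic loss on the difference of policy/reference log-ratios for
    chosen vs. rejected responses."""
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -F.logsigmoid(beta * margin).mean()

# Each argument is a (batch,) tensor of summed token log-probabilities.
loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
```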
arXiv Detail & Related papers (2023-05-29T17:57:46Z)
- Towards AI-controlled FES-restoration of arm movements: neuromechanics-based reinforcement learning for 3-D reaching [6.320141734801679]
Functional Electrical Stimulation (FES) can restore lost motor functions.
Neuromechanical models are valuable tools for developing FES control methods.
We present our approach toward FES-based restoration of arm movements.
arXiv Detail & Related papers (2023-01-10T14:50:37Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large numbers of interactions between the agent and the environment.
We propose a new method to solve it, using unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resilience to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Neuromuscular Reinforcement Learning to Actuate Human Limbs through FES [7.4769019455423855]
Functional Electrical Stimulation (FES) is a technique to evoke muscle contraction through low-energy electrical signals.
Yet, an open challenge remains on how to apply FES to achieve desired movements.
Here, we learn to control FES through Reinforcement Learning (RL), which can automatically customise the stimulation for each patient.
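The summary does not specify the environment or agent; below is a hypothetical sketch of the control setup, with a stand-in simulator and a random policy placeholder where a trained RL agent would act.

```python
import numpy as np

class FESArmEnv:
    """Hypothetical stand-in for a neuromechanical arm simulator."""
    def reset(self):
        return np.zeros(6)                       # e.g., joint angles/velocities
    def step(self, stim):
        next_obs = np.random.randn(6)            # placeholder dynamics
        reward = -np.linalg.norm(next_obs[:2])   # e.g., distance to a target
        return next_obs, reward, False

env = FESArmEnv()
obs = env.reset()
for t in range(100):
    # A trained RL policy would map obs -> per-muscle stimulation in [0, 1];
    # a random placeholder is used here.
    stim = np.clip(np.random.rand(4), 0.0, 1.0)
    obs, reward, done = env.step(stim)
    if done:
        obs = env.reset()
```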
arXiv Detail & Related papers (2022-09-16T10:41:01Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historical difficulty in building agents that adapt is that neural systems struggle to retain previously acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Neural optimal feedback control with local learning rules [67.5926699124528]
A major problem in motor control is understanding how the brain plans and executes proper movements in the face of delayed and noisy stimuli.
We introduce a novel online algorithm that combines adaptive Kalman filtering with a model-free control approach.
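A minimal sketch of that combination, under assumed (illustrative) linear dynamics: a Kalman filter maintains a state estimate from noisy, position-only observations, and a placeholder controller standing in for the model-free component acts on the estimate.

```python
import numpy as np

# Assumed position-velocity dynamics and position-only observation model.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([0.0, 0.1])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(1)

def kalman_step(x_hat, P, u, y):
    # Predict with the assumed model, then correct with the observation.
    x_pred = A @ x_hat + B * u
    P_pred = A @ P @ A.T + Q
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

policy = lambda x: -0.5 * x[0] - 0.2 * x[1]    # placeholder model-free policy
x_hat, P = np.zeros(2), np.eye(2)
x_true = np.array([1.0, 0.0])
for t in range(50):
    u = policy(x_hat)                          # act on the estimate
    x_true = A @ x_true + B * u + np.random.multivariate_normal(np.zeros(2), Q)
    y = C @ x_true + np.random.multivariate_normal(np.zeros(1), R)
    x_hat, P = kalman_step(x_hat, P, u, y)
```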
arXiv Detail & Related papers (2021-11-12T20:02:00Z)
- MotionHint: Self-Supervised Monocular Visual Odometry with Motion Constraints [70.76761166614511]
We present a novel self-supervised algorithm named MotionHint for monocular visual odometry (VO).
Our MotionHint algorithm can be easily applied to existing open-sourced state-of-the-art SSM-VO systems.
arXiv Detail & Related papers (2021-09-14T15:35:08Z)
- Residual Reinforcement Learning from Demonstrations [51.56457466788513]
Residual reinforcement learning (RL) has been proposed as a way to solve challenging robotic tasks by adapting control actions from a conventional feedback controller to maximize a reward signal.
We extend the residual formulation to learn from visual inputs and sparse rewards using demonstrations.
Our experimental evaluation on simulated manipulation tasks on a 6-DoF UR5 arm and a 28-DoF dexterous hand demonstrates that residual RL from demonstrations is able to generalize to unseen environment conditions more flexibly than either behavioral cloning or RL fine-tuning.
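The core composition is simple; here is a minimal sketch with an assumed proportional base controller and a placeholder for the learned residual policy.

```python
import numpy as np

def feedback_controller(obs, target, kp=2.0):
    # Conventional proportional feedback toward the target (assumed form).
    return kp * (target - obs)

def residual_policy(obs):
    # Placeholder for the learned residual network.
    return 0.1 * np.tanh(obs)

obs, target = np.zeros(3), np.ones(3)
# Executed action = base controller action + learned residual.
action = feedback_controller(obs, target) + residual_policy(obs)
```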
arXiv Detail & Related papers (2021-06-15T11:16:49Z)
- Reinforcement Learning of Musculoskeletal Control from Functional Simulations [3.94716580540538]
In this work, a deep reinforcement learning (DRL) based inverse dynamics controller is trained to control muscle activations of a biomechanical model of the human shoulder.
Results are presented for single-axis motion control of shoulder abduction for the task of following randomly generated angular trajectories.
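A plausible tracking reward for this setting, sketched under stated assumptions (the paper's actual reward is not reproduced): penalise deviation from a randomly generated reference angle, plus a small effort term on the muscle activations.

```python
import numpy as np

def tracking_reward(theta, theta_ref, activations, w_effort=0.01):
    # Penalise angular tracking error plus a small muscular-effort term.
    return -abs(theta - theta_ref) - w_effort * float(np.sum(np.square(activations)))

theta_ref = np.cumsum(0.01 * np.random.randn(200))   # random angular trajectory
r = tracking_reward(0.05, theta_ref[0], np.random.rand(6))
```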
arXiv Detail & Related papers (2020-07-13T20:20:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.