Towards AI-controlled FES-restoration of arm movements:
neuromechanics-based reinforcement learning for 3-D reaching
- URL: http://arxiv.org/abs/2301.04004v1
- Date: Tue, 10 Jan 2023 14:50:37 GMT
- Title: Towards AI-controlled FES-restoration of arm movements:
neuromechanics-based reinforcement learning for 3-D reaching
- Authors: Nat Wannawas and A. Aldo Faisal
- Abstract summary: Functional Electrical Stimulation (FES) can restore lost motor functions.
Neuromechanical models are valuable tools for developing FES control methods.
We present our approach toward FES-based restoration of arm movements.
- Score: 6.320141734801679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reaching disabilities affect the quality of life. Functional Electrical
Stimulation (FES) can restore lost motor functions. Yet, there remain
challenges in controlling FES to induce desired movements. Neuromechanical
models are valuable tools for developing FES control methods. However, focusing
on the upper extremity areas, several existing models are either overly
simplified or too computationally demanding for control purposes. Besides the
model-related issues, finding a general method for governing the control rules
for different tasks and subjects remains an engineering challenge. Here, we
present our approach toward FES-based restoration of arm movements to address
those fundamental issues in controlling FES. Firstly, we present our
surface-FES-oriented neuromechanical models of human arms built using
well-accepted, open-source software. The models are designed to capture the
dynamics significant for FES control at minimal computational cost. Our
models are customisable and can be used for testing different control methods.
Secondly, we present the application of reinforcement learning (RL) as a
general method for governing the control rules. In combination, our
customisable models and RL-based control method open the possibility of
delivering customised FES control for different subjects and settings with
minimal engineering intervention. We demonstrate our approach in planar and 3-D
settings.
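To make the setting concrete, here is a minimal, self-contained sketch: a gym-style environment wraps an arm model, the action is a vector of stimulation intensities, and the reward is the negative hand-to-target distance. The toy planar arm and the hill-climbing search below are illustrative stand-ins for the paper's OpenSim-based neuromechanical models and its RL algorithm.

```python
import numpy as np

class ToyFESArm:
    """Planar 2-joint arm; each joint is driven by an agonist/antagonist pair."""
    def __init__(self, target=(0.6, 0.6), dt=0.05):
        self.target, self.dt = np.array(target), dt
        self.reset()

    def reset(self):
        self.q = np.zeros(2)   # joint angles (rad)
        self.dq = np.zeros(2)  # joint velocities (rad/s)
        return np.concatenate([self.q, self.dq])

    def hand(self):
        # Forward kinematics for two 0.5 m links.
        a, b = self.q[0], self.q[0] + self.q[1]
        return 0.5 * np.array([np.cos(a) + np.cos(b), np.sin(a) + np.sin(b)])

    def step(self, stim):
        stim = np.clip(stim, 0.0, 1.0)                 # stimulation in [0, 1]
        torque = np.array([stim[0] - stim[1], stim[2] - stim[3]])
        self.dq += self.dt * (5.0 * torque - self.dq)  # damped toy dynamics
        self.q = np.clip(self.q + self.dt * self.dq, -np.pi, np.pi)
        reward = -np.linalg.norm(self.hand() - self.target)
        return np.concatenate([self.q, self.dq]), reward

def rollout(env, W):
    obs, total = env.reset(), 0.0
    for _ in range(60):
        obs, r = env.step(W @ obs + 0.5)  # linear policy over the state
        total += r
    return total

# Simple hill climbing as a stand-in for the paper's RL training loop.
env, W, rng = ToyFESArm(), np.zeros((4, 4)), np.random.default_rng(0)
best = rollout(env, W)
for _ in range(200):
    W_try = W + 0.1 * rng.standard_normal(W.shape)
    if (score := rollout(env, W_try)) > best:
        W, best = W_try, score
print(f"best return: {best:.2f}")
```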
Related papers
- Programmable Motion Generation for Open-Set Motion Control Tasks [51.73738359209987]
We introduce a new paradigm, programmable motion generation.
In this paradigm, any given motion control task is broken down into a combination of atomic constraints.
These constraints are then programmed into an error function that quantifies the degree to which a motion sequence adheres to them.
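A hedged sketch of that mechanism: each atomic constraint maps a motion sequence to a non-negative error, and a task is "programmed" as a weighted sum of constraints that is then minimised. The constraint choices, weights, and the gradient-free optimiser below are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def keyframe_constraint(motion, t, pose):
    """Atomic constraint: deviation from a target pose at frame t."""
    return np.sum((motion[t] - pose) ** 2)

def smoothness_constraint(motion):
    """Atomic constraint: penalise large frame-to-frame jumps."""
    return np.sum(np.diff(motion, axis=0) ** 2)

def task_error(motion):
    # A control task "programmed" as a weighted sum of atomic constraints:
    # start at the origin, end at (1, 1, 1), move smoothly in between.
    return (keyframe_constraint(motion, 0, np.zeros(3))
            + keyframe_constraint(motion, -1, np.ones(3))
            + 0.1 * smoothness_constraint(motion))

# Minimise the programmed error over the motion (gradient-free for brevity).
rng = np.random.default_rng(0)
motion = rng.standard_normal((30, 3))  # 30 frames of a 3-D trajectory
err = task_error(motion)
for _ in range(3000):
    cand = motion + 0.05 * rng.standard_normal(motion.shape)
    if (cand_err := task_error(cand)) < err:
        motion, err = cand, cand_err
print(f"final error: {err:.3f}")
```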
arXiv Detail & Related papers (2024-05-29T17:14:55Z)
- Learning Exactly Linearizable Deep Dynamics Models [0.07366405857677226]
We propose a learning method for exactly linearizable dynamical models, to which various control-theoretic tools for ensuring stability, reliability, and other desired properties can be readily applied.
The proposed model is employed for the real-time control of an automotive engine, and the results demonstrate good predictive performance and stable control under constraints.
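The core idea can be illustrated briefly: if the learned dynamics are control-affine, x' = f(x) + g(x)u with invertible g, then the feedback u = (v - f(x)) / g(x) renders the closed loop exactly linear (x' = v), so linear control tools apply directly. The scalar plant and hand-written f, g below stand in for the paper's learned networks.

```python
import numpy as np

f = lambda x: -np.sin(x)        # stand-in for a learned drift term
g = lambda x: 1.0 + 0.5 * x**2  # stand-in for a learned, always-positive gain

def linearizing_control(x, v):
    # Exact cancellation: substituting u gives x' = f + g * (v - f) / g = v.
    return (v - f(x)) / g(x)

# Track a set-point with a plain linear law on the linearized system.
x, x_ref, dt = 2.0, 0.5, 0.01
for _ in range(500):
    v = -4.0 * (x - x_ref)          # linear outer-loop controller
    u = linearizing_control(x, v)
    x += dt * (f(x) + g(x) * u)     # plant step (x' = v after cancellation)
print(f"x after 5 s: {x:.4f} (target {x_ref})")
```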
arXiv Detail & Related papers (2023-11-30T05:40:55Z)
- Self-Supervised Reinforcement Learning that Transfers using Random Features [41.00256493388967]
We propose a self-supervised reinforcement learning method that enables the transfer of behaviors across tasks with different rewards.
Our method is self-supervised in that it can be trained on offline datasets without reward labels, but can then be quickly deployed on new tasks.
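One way to read the transfer mechanism, sketched under the assumption that random features act as a reward basis: policy evaluation is linear in the reward, so Q-functions precomputed for random reward features can be recombined for any new reward expressible in that basis. The small random MDP and fixed evaluation policy below are illustrative; the paper itself pairs such ideas with planning rather than a fixed policy.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, K, gamma = 20, 4, 32, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))  # transition model, shape (S, A, S)
phi = rng.standard_normal((S, K))           # K random features of the state
pi = rng.integers(0, A, size=S)             # fixed evaluation policy

def eval_q(r):
    """Policy evaluation: Q(s,a) = r(s) + gamma * E[V(s')] for the fixed pi."""
    q = np.zeros((S, A))
    for _ in range(500):
        v = q[np.arange(S), pi]
        q = r[:, None] + gamma * P @ v
    return q

# Offline phase: one Q-function per random feature, no reward labels needed.
q_basis = np.stack([eval_q(phi[:, k]) for k in range(K)], axis=-1)

# New task: express its reward in the feature basis and recombine linearly.
w = rng.standard_normal(K)
q_transfer = q_basis @ w
q_direct = eval_q(phi @ w)
print("max recombination error:", np.abs(q_direct - q_transfer).max())
```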
arXiv Detail & Related papers (2023-05-26T20:37:06Z)
- Towards AI-controlled FES-restoration of arm movements: Controlling for progressive muscular fatigue with Gaussian state-space models [6.320141734801679]
Reinforcement Learning (RL) emerges as a promising approach to govern customised control rules for different settings.
Yet, one remaining challenge in applying RL to FES systems is unobservable muscle fatigue.
We present a method to address the unobservable muscle fatigue issue, allowing our RL controller to achieve higher control performance.
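A hedged sketch of the fatigue problem: fatigue is a hidden state that scales force output, so the controller must estimate it from the observable stimulation/force history and fold the estimate into its observation. The first-order fatigue model and simple recursive estimator below are illustrative stand-ins for the paper's learned Gaussian state-space model.

```python
import numpy as np

dt, rng = 0.05, np.random.default_rng(0)
fatigue, fatigue_hat = 0.0, 0.0
for t in range(200):
    stim = 0.8                                           # constant stimulation
    # Hidden dynamics: fatigue builds with use and recovers at rest.
    fatigue += dt * (0.5 * stim * (1.0 - fatigue) - 0.1 * fatigue)
    force = stim * (1.0 - fatigue) + 0.02 * rng.standard_normal()
    # Estimator: invert the assumed force model, then smooth recursively.
    fatigue_hat += 0.2 * ((1.0 - force / stim) - fatigue_hat)
obs_for_rl = np.array([force, fatigue_hat])  # fatigue-augmented observation
print(f"true fatigue {fatigue:.3f}, estimate {fatigue_hat:.3f}")
```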
arXiv Detail & Related papers (2023-01-10T14:51:55Z)
- Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models [103.71308117592963]
We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning.
In a small-scale experiment, we show MLAC can largely prevent a BERT-style model from being re-purposed to perform gender identification.
arXiv Detail & Related papers (2022-11-27T21:43:45Z)
- Neuromuscular Reinforcement Learning to Actuate Human Limbs through FES [7.4769019455423855]
Functional Electrical Stimulation (FES) is a technique to evoke muscle contraction through low-energy electrical signals.
Yet, an open challenge remains on how to apply FES to achieve desired movements.
Here, we learn to control FES through Reinforcement Learning (RL) which can automatically customise the stimulation for the patients.
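Part of what makes this hard is that stimulation does not set muscle force directly; it drives activation through lagged, asymmetric dynamics that an RL controller must learn to compensate. A minimal sketch with an illustrative first-order activation model:

```python
dt, tau_rise, tau_fall = 0.01, 0.04, 0.07  # activation/deactivation constants
act, trace = 0.0, []
for t in range(100):
    u = 1.0 if t < 50 else 0.0             # stimulation on for 0.5 s, then off
    tau = tau_rise if u > act else tau_fall
    act += dt * (u - act) / tau            # lagged, asymmetric response
    trace.append(act)
print(f"activation at stim-off: {trace[49]:.2f}, 0.5 s later: {trace[-1]:.2f}")
```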
arXiv Detail & Related papers (2022-09-16T10:41:01Z)
- Learning to Walk Autonomously via Reset-Free Quality-Diversity [73.08073762433376]
Quality-Diversity algorithms can discover large and complex behavioural repertoires consisting of both diverse and high-performing skills.
Existing QD algorithms need large numbers of evaluations as well as episodic resets, which require manual human supervision and interventions.
This paper proposes Reset-Free Quality-Diversity optimization (RF-QD) as a step towards autonomous learning for robotics in open-ended environments.
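The quality-diversity mechanic underlying this line of work can be sketched compactly: an archive keeps the best solution found in each behaviour niche, so the repertoire stays both diverse and high-performing. The toy fitness, behaviour descriptor, and grid below are illustrative; RF-QD's reset-free, on-robot aspects are not modelled.

```python
import numpy as np

rng = np.random.default_rng(0)
fitness = lambda x: -np.sum(x**2)          # performance of a solution
behaviour = lambda x: np.clip(x, -1, 1)    # behaviour descriptor
archive = {}                               # niche -> (fitness, genome)

for _ in range(5000):
    if archive and rng.random() < 0.9:     # mutate a random archived elite
        parent = archive[list(archive)[rng.integers(len(archive))]][1]
        x = parent + 0.1 * rng.standard_normal(2)
    else:                                  # otherwise sample fresh
        x = rng.uniform(-1, 1, size=2)
    cell = tuple((behaviour(x) * 5).astype(int))  # discretise into niches
    if cell not in archive or fitness(x) > archive[cell][0]:
        archive[cell] = (fitness(x), x)    # keep the best per niche

print(f"{len(archive)} niches filled; best fitness "
      f"{max(f for f, _ in archive.values()):.3f}")
```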
arXiv Detail & Related papers (2022-04-07T14:07:51Z)
- OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation [50.59541802645156]
Operational Space Control (OSC) has been used as an effective task-space controller for manipulation.
We propose OSC for Adaptation and Robustness (OSCAR), a data-driven variant of OSC that compensates for modeling errors.
We evaluate our method on a variety of simulated manipulation problems, and find substantial improvements over an array of controller baselines.
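For context, a hedged sketch of the classical OSC law that OSCAR builds on: desired task-space accelerations are mapped to joint torques through the model-derived task-space inertia, tau = J^T Lambda a + h. The 2-DoF quantities below are illustrative stand-ins; OSCAR's learned compensation of errors in these quantities is not shown.

```python
import numpy as np

# Toy 2-DoF arm quantities at the current configuration (stand-ins).
M = np.array([[2.0, 0.3], [0.3, 1.0]])  # joint-space inertia matrix
J = np.array([[0.8, 0.2], [0.1, 0.9]])  # task Jacobian
h = np.array([0.5, 0.2])                # gravity/Coriolis torques

x, x_des, dx = np.array([0.4, 0.1]), np.array([0.6, 0.3]), np.zeros(2)
kp, kd = 100.0, 20.0

# Task-space inertia Lambda = (J M^-1 J^T)^-1, then tau = J^T Lambda a + h.
Minv = np.linalg.inv(M)
Lam = np.linalg.inv(J @ Minv @ J.T)
a_des = kp * (x_des - x) - kd * dx      # PD law in task space
tau = J.T @ Lam @ a_des + h
print("commanded torques:", tau)
```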
arXiv Detail & Related papers (2021-10-02T01:21:38Z)
- RL-Controller: a reinforcement learning framework for active structural control [0.0]
We present a novel RL-based approach for designing active controllers by introducing RL-Controller, a flexible and scalable simulation environment.
We show that the proposed framework is easily trainable for a five-story benchmark building, achieving 65% average reductions in inter-story drifts.
In a comparative study with the LQG active control method, we demonstrate that the proposed model-free algorithm learns more effective actuator forcing strategies.
arXiv Detail & Related papers (2021-03-13T04:42:13Z)
- Improving Input-Output Linearizing Controllers for Bipedal Robots via Reinforcement Learning [85.13138591433635]
The main drawbacks of input-output linearizing controllers are the need for precise dynamics models and the inability to account for input constraints.
In this paper, we address both challenges for the specific case of bipedal robot control by the use of reinforcement learning techniques.
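A hedged sketch of that combination: the linearizing controller is computed from a nominal model, leaving residual tracking error when the true dynamics differ, and a learned policy adds a correction term. Random search stands in for the paper's RL algorithm, and the scalar plant is illustrative.

```python
import numpy as np

f_true, g_true = lambda x: -1.5 * np.sin(x), lambda x: 0.8  # real plant
f_nom, g_nom = lambda x: -1.0 * np.sin(x), lambda x: 1.0    # nominal model

def track_error(theta, x_ref=1.0, dt=0.01, T=400):
    x, err = 0.0, 0.0
    for _ in range(T):
        v = -3.0 * (x - x_ref)
        u_lin = (v - f_nom(x)) / g_nom(x)          # nominal-model linearization
        u_rl = theta[0] * (x - x_ref) + theta[1]   # learned residual correction
        x += dt * (f_true(x) + g_true(x) * (u_lin + u_rl))
        err += dt * (x - x_ref) ** 2
    return err

rng, theta = np.random.default_rng(0), np.zeros(2)
best = track_error(theta)
for _ in range(300):                    # random search over the residual policy
    cand = theta + 0.1 * rng.standard_normal(2)
    if (c := track_error(cand)) < best:
        theta, best = cand, c
print(f"tracking cost: nominal-only {track_error(np.zeros(2)):.4f}, "
      f"with residual {best:.4f}")
```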
arXiv Detail & Related papers (2020-04-15T18:15:49Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
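The MPC side of that connection is essentially the MPPI-style update, sketched here under illustrative constants: sample perturbed action sequences, weight them by exp(-cost / lambda), and average to refine the plan. The paper's Q-learning extension is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
H, N, lam, dt = 15, 256, 1.0, 0.1  # horizon, samples, temperature, step
u_mean = np.zeros(H)               # nominal action sequence

def cost(u_seq):
    x, v, c = 0.0, 0.0, 0.0
    for u in u_seq:                # point mass pushed toward x = 1
        v += dt * u
        x += dt * v
        c += (x - 1.0) ** 2 + 0.01 * u ** 2
    return c

for it in range(5):                # iteratively refine the plan
    eps = rng.standard_normal((N, H))
    costs = np.array([cost(u_mean + e) for e in eps])
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    u_mean = u_mean + w @ eps      # information-theoretic weighted average
    print(f"iteration {it}: best sampled cost {costs.min():.3f}")
```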
arXiv Detail & Related papers (2019-12-31T00:29:22Z)