AR3n: A Reinforcement Learning-based Assist-As-Needed Controller for
Robotic Rehabilitation
- URL: http://arxiv.org/abs/2303.00085v4
- Date: Mon, 17 Apr 2023 03:11:45 GMT
- Authors: Shrey Pareek, Harris Nisar and Thenkurussi Kesavadas
- Abstract summary: We present AR3n, an assist-as-needed (AAN) controller that utilizes reinforcement learning to supply adaptive assistance during a robot-assisted handwriting rehabilitation task.
We propose the use of a virtual patient model to generalize AR3n across multiple subjects.
The system modulates robotic assistance in real time based on a subject's tracking error, while minimizing the amount of robotic assistance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present AR3n (pronounced as Aaron), an assist-as-needed
(AAN) controller that utilizes reinforcement learning to supply adaptive
assistance during a robot-assisted handwriting rehabilitation task. Unlike
previous AAN controllers, our method does not rely on patient-specific
controller parameters or physical models. We propose the use of a virtual
patient model to generalize AR3n across multiple subjects. The system modulates
robotic assistance in real time based on a subject's tracking error, while
minimizing the amount of robotic assistance. The controller is experimentally
validated through a set of simulations and human subject experiments. Finally,
a comparative study with a traditional rule-based controller is conducted to
analyze differences in assistance mechanisms of the two controllers.
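The mechanism described in the abstract, assistance that is modulated by tracking error while the total amount of assistance is penalized, can be sketched as below. The reward shaping, the virtual-patient rollout, and the gain search are illustrative assumptions for exposition, not the paper's actual formulation:

```python
import random

def assistance_reward(tracking_error, assistance, penalty=0.1):
    """Hypothetical AAN reward: favor accurate tracking, but charge a
    cost proportional to the robotic assistance supplied."""
    return -(tracking_error ** 2) - penalty * assistance

def simulate_episode(gain, patient_skill, steps=50, seed=0):
    """Virtual-patient rollout: higher assistance gain shrinks the
    tracking error of a simulated (not real) patient."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(steps):
        # Error shrinks with both patient skill and robot assistance.
        error = max(0.0, rng.gauss(1.0 - patient_skill, 0.05)) / (1.0 + gain)
        total += assistance_reward(error, gain)
    return total / steps

def best_gain(patient_skill, candidates=(0.0, 0.5, 1.0, 2.0, 4.0)):
    """Crude stand-in for RL policy search: pick the assistance gain
    with the best average reward for a given virtual patient."""
    return max(candidates, key=lambda g: simulate_episode(g, patient_skill))
```

Under this toy model, a low-skill virtual patient receives a higher assistance gain than a high-skill one, which is the qualitative behavior an AAN controller is after.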
Related papers
- Towards Multi-Morphology Controllers with Diversity and Knowledge Distillation
We present a pipeline that distills many single-task/single-morphology teacher controllers into a single multi-morphology controller.
The distilled controller scales well with the number of teachers/morphologies and shows emergent properties.
It generalizes to unseen morphologies in a zero-shot manner, providing robustness to morphological perturbations and instant damage recovery.
arXiv Detail & Related papers (2024-04-22T23:40:03Z)
- Exploring of Discrete and Continuous Input Control for AI-enhanced Assistive Robotic Arms
Collaborative robots require users to manage multiple Degrees-of-Freedom (DoFs) for tasks like grasping and manipulating objects.
This study explores three different input devices by integrating them into an established XR framework for assistive robotics.
arXiv Detail & Related papers (2024-01-13T16:57:40Z)
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
- Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up.
arXiv Detail & Related papers (2022-11-09T10:28:40Z)
- Training Robots without Robots: Deep Imitation Learning for Master-to-Robot Policy Transfer
Deep imitation learning is promising for robot manipulation because it only requires demonstration samples.
Existing demonstration methods have deficiencies; bilateral teleoperation requires a complex control scheme and is expensive.
This research proposes a new master-to-robot (M2R) policy transfer system that does not require robots for teaching force feedback-based manipulation tasks.
arXiv Detail & Related papers (2022-02-19T10:55:10Z)
- Personalized Rehabilitation Robotics based on Online Learning Control
We propose a novel online learning control architecture, which is able to personalize the control force at run time to each individual user.
We evaluate our method in an experimental user study, where the learning controller is shown to provide personalized control, while also obtaining safe interaction forces.
arXiv Detail & Related papers (2021-10-01T15:28:44Z)
- Human operator cognitive availability aware Mixed-Initiative control
This paper presents a Cognitive Availability Aware Mixed-Initiative Controller for remotely operated mobile robots.
The controller enables dynamic switching between different levels of autonomy (LOA), initiated by either the AI or the human operator.
The controller is evaluated in a disaster response experiment, in which human operators have to conduct an exploration task with a remote robot.
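The switching mechanism summarized above, where either the AI or the human may initiate a change in the level of autonomy, can be sketched as follows. The autonomy levels, the availability threshold, and the grant policy are illustrative assumptions, not details from the paper:

```python
from dataclasses import dataclass

# Hypothetical discrete levels of autonomy (LOA).
LOA = ("teleoperation", "shared", "autonomous")

@dataclass
class MixedInitiativeController:
    """Illustrative sketch: human-initiated switches are always granted,
    while AI-initiated switches are granted only when the operator's
    cognitive availability falls below a guessed threshold."""
    level: str = "shared"

    def request_switch(self, new_level, initiator, operator_availability):
        assert new_level in LOA
        if initiator == "human":
            self.level = new_level          # defer to the human operator
        elif initiator == "ai" and operator_availability < 0.3:
            self.level = new_level          # AI steps in when operator is busy
        return self.level
```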
arXiv Detail & Related papers (2021-08-26T16:21:56Z)
- Machine Learning for Mechanical Ventilation Control
We consider the problem of controlling an invasive mechanical ventilator for pressure-controlled ventilation.
A PID controller must let air in and out of a sedated patient's lungs according to a trajectory of airway pressures specified by a clinician.
We show that our controllers are able to track target pressure waveforms significantly better than PID controllers.
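The PID baseline that the learned controllers are compared against can be sketched as follows; the gains and the first-order plant standing in for a lung are illustrative assumptions, not a validated ventilation model:

```python
def pid_track(targets, kp=0.8, ki=0.2, kd=0.05, dt=0.03):
    """Minimal PID loop tracking a clinician-specified pressure
    waveform against a toy first-order 'lung' (illustrative only)."""
    pressure, integral, prev_err = 0.0, 0.0, 0.0
    trace = []
    for target in targets:
        err = target - pressure
        integral += err * dt
        control = kp * err + ki * integral + kd * (err - prev_err) / dt
        prev_err = err
        # Toy plant: pressure relaxes toward the control input.
        pressure += (control - 0.1 * pressure) * dt
        trace.append(pressure)
    return trace
```

For a constant target the integral term removes the steady-state offset; tracking a fast-changing waveform is where such a fixed-gain controller struggles and a learned controller can do better.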
arXiv Detail & Related papers (2021-02-12T21:23:33Z)
- Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller utilizes an established control method to robustly execute the primitives.
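The two-level structure described above can be sketched as follows; the primitive set, the terrain feature, and the selection thresholds are illustrative assumptions rather than the paper's actual design:

```python
# Hypothetical motion primitives; each maps a state to placeholder
# low-level commands (stand-ins for an established controller).
PRIMITIVES = {
    "trot":  lambda state: [0.5 * s for s in state],
    "walk":  lambda state: [0.2 * s for s in state],
    "brace": lambda state: [0.0 for _ in state],
}

def high_level(terrain_roughness):
    """Stand-in for the learned high-level policy: map an environment
    feature to a primitive name (thresholds are guesses)."""
    if terrain_roughness > 0.7:
        return "brace"
    if terrain_roughness > 0.3:
        return "walk"
    return "trot"

def step(state, terrain_roughness):
    """One control tick: the high level chooses a primitive, then the
    low level executes it on the current state."""
    name = high_level(terrain_roughness)
    return name, PRIMITIVES[name](state)
```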
arXiv Detail & Related papers (2020-09-21T16:49:26Z)
- Improving Input-Output Linearizing Controllers for Bipedal Robots via Reinforcement Learning
The main drawbacks of input-output linearizing controllers are the need for precise dynamics models and not being able to account for input constraints.
In this paper, we address both challenges for the specific case of bipedal robot control by the use of reinforcement learning techniques.
arXiv Detail & Related papers (2020-04-15T18:15:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.