A Reinforcement Learning Based Controller to Minimize Forces on the
Crutches of a Lower-Limb Exoskeleton
- URL: http://arxiv.org/abs/2402.00135v1
- Date: Wed, 31 Jan 2024 19:20:56 GMT
- Title: A Reinforcement Learning Based Controller to Minimize Forces on the
Crutches of a Lower-Limb Exoskeleton
- Authors: Aydin Emre Utku, Suzan Ece Ada, Muhammet Hatipoglu, Mustafa Derman,
Emre Ugur and Evren Samur
- Abstract summary: We use deep reinforcement learning to develop a controller that minimizes ground reaction forces (GRF) on crutches.
We formulate a reward function to encourage the forward displacement of a human-exoskeleton system.
We empirically show that our learning model can generate joint torques based on the joint angles, velocities, and the GRFs on the feet and crutch tips.
- Score: 1.4680035572775536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Metabolic energy consumption of a powered lower-limb exoskeleton user mainly
comes from the upper body effort since the lower body is considered to be
passive. However, the upper body effort of the users is largely ignored in the
literature when designing motion controllers. In this work, we use deep
reinforcement learning to develop a locomotion controller that minimizes ground
reaction forces (GRF) on crutches. The rationale for minimizing GRF is to
reduce the upper body effort of the user. Accordingly, we design a model and a
learning framework for a human-exoskeleton system with crutches. We formulate a
reward function to encourage the forward displacement of a human-exoskeleton
system while satisfying the predetermined constraints of a physical robot. We
evaluate our new framework using Proximal Policy Optimization, a
state-of-the-art deep reinforcement learning (RL) method, on the MuJoCo physics
simulator with different hyperparameters and network architectures over
multiple trials. We empirically show that our learning model can generate joint
torques based on the joint angles, velocities, and the GRFs on the feet and
crutch tips. The resulting exoskeleton model can directly generate joint
torques from states in line with the RL framework. Finally, we empirically show
that a policy trained using our method can generate a gait with a 35% reduction
in GRF with respect to the baseline.
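For intuition, a reward of the kind described in the abstract can be sketched as below. The weights and state fields are hypothetical assumptions for illustration, not the authors' actual formulation:

```python
def reward(forward_velocity, crutch_grf, torques,
           w_fwd=1.0, w_grf=0.01, w_ctrl=0.001):
    """Hypothetical reward: encourage forward displacement while
    penalizing ground reaction forces on the crutch tips and large
    joint torques (a proxy for the physical-robot constraints).
    All weights are illustrative, not taken from the paper."""
    r_forward = w_fwd * forward_velocity                 # forward progress term
    r_grf = -w_grf * sum(f * f for f in crutch_grf)      # penalize crutch GRF
    r_ctrl = -w_ctrl * sum(t * t for t in torques)       # effort regularizer
    return r_forward + r_grf + r_ctrl
```

Under a shape like this, a policy trained with PPO trades forward progress against the load it places on the crutches, which is the behavior the abstract reports.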
Related papers
- HOMIE: Humanoid Loco-Manipulation with Isomorphic Exoskeleton Cockpit [52.12750762494588]
Current humanoid teleoperation systems either lack reliable low-level control policies, or struggle to acquire accurate whole-body control commands.
We propose a novel humanoid teleoperation cockpit that integrates a humanoid loco-manipulation policy and a low-cost exoskeleton-based hardware system.
arXiv Detail & Related papers (2025-02-18T16:33:38Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Reaching the Limit in Autonomous Racing: Optimal Control versus Reinforcement Learning [66.10854214036605]
A central question in robotics is how to design a control system for an agile mobile robot.
We show that a neural network controller trained with reinforcement learning (RL) outperformed optimal control (OC) methods in this setting.
Our findings allowed us to push an agile drone to its maximum performance, achieving a peak acceleration greater than 12 times the gravitational acceleration and a peak velocity of 108 kilometers per hour.
arXiv Detail & Related papers (2023-10-17T02:40:27Z)
- Advancements in Upper Body Exoskeleton: Implementing Active Gravity Compensation with a Feedforward Controller [0.0]
We present a feedforward control system designed for active gravity compensation on an upper body exoskeleton.
The system utilizes only positional data from internal motor sensors to calculate torque, employing analytical control equations based on Newton-Euler Inverse Dynamics.
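For intuition, gravity compensation of this kind can be sketched for a planar two-link arm. The link parameters and the closed-form gravity terms below (derived from standard Newton-Euler inverse dynamics with zero velocity and acceleration) are illustrative assumptions, not the paper's implementation:

```python
import math

def gravity_torques(q1, q2, m1=2.0, m2=1.5, l1=0.30,
                    lc1=0.15, lc2=0.15, g=9.81):
    """Hypothetical gravity-compensation torques for a planar 2-link arm.
    q1: shoulder angle, q2: elbow angle (radians, measured from horizontal);
    m*, l1, lc* are illustrative link masses and lengths.
    Returns feedforward torques (tau1, tau2) that cancel gravity."""
    # Elbow torque: gravity load of the distal link about the elbow joint.
    tau2 = m2 * g * lc2 * math.cos(q1 + q2)
    # Shoulder torque: gravity loads of both links about the shoulder joint.
    tau1 = (m1 * lc1 + m2 * l1) * g * math.cos(q1) + tau2
    return tau1, tau2
```

With the arm pointing straight up both torques vanish; held horizontally, the shoulder carries the largest load, matching the physical intuition behind feedforward gravity compensation.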
arXiv Detail & Related papers (2023-09-09T06:39:38Z)
- Low-Rank Modular Reinforcement Learning via Muscle Synergy [25.120547719120765]
Modular Reinforcement Learning (RL) decentralizes the control of multi-joint robots by learning policies for each actuator.
We propose a Synergy-Oriented LeARning (SOLAR) framework that exploits the redundant nature of DoF in robot control.
arXiv Detail & Related papers (2022-10-26T16:01:31Z)
- Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion [25.35885216505385]
An attached arm can significantly increase the applicability of legged robots to mobile manipulation tasks.
The standard hierarchical control pipeline for such legged manipulators decouples the controller into separate manipulation and locomotion controllers.
We learn a unified policy for whole-body control of a legged manipulator using reinforcement learning.
arXiv Detail & Related papers (2022-10-18T17:59:30Z)
- Learning to Estimate External Forces of Human Motion in Video [22.481658922173906]
Ground reaction forces (GRFs) are exerted by the human body during certain movements.
Standard practice uses physical markers paired with force plates in a controlled environment.
We propose GRF inference from video.
arXiv Detail & Related papers (2022-07-12T21:20:47Z)
- Adapting Rapid Motor Adaptation for Bipedal Robots [73.5914982741483]
We leverage recent advances in rapid adaptation for locomotion control, and extend them to work on bipedal robots.
A-RMA adapts the base policy for the imperfect extrinsics estimator by finetuning it using model-free RL.
We demonstrate that A-RMA outperforms a number of RL-based baseline controllers and model-based controllers in simulation.
arXiv Detail & Related papers (2022-05-30T17:59:09Z)
- GLiDE: Generalizable Quadrupedal Locomotion in Diverse Environments with a Centroidal Model [18.66472547798549]
We show how model-free reinforcement learning can be effectively used with a centroidal model to generate robust control policies for quadrupedal locomotion.
We show the potential of the method by demonstrating stepping-stone locomotion, two-legged in-place balance, balance beam locomotion, and sim-to-real transfer without further adaptations.
arXiv Detail & Related papers (2021-04-20T05:55:13Z)
- Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movement.
arXiv Detail & Related papers (2021-01-26T16:01:02Z)
- Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller utilizes an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.