Learning Control Policies for Fall prevention and safety in bipedal locomotion
- URL: http://arxiv.org/abs/2201.01361v1
- Date: Tue, 4 Jan 2022 22:00:21 GMT
- Title: Learning Control Policies for Fall prevention and safety in bipedal locomotion
- Authors: Visak Kumar
- Abstract summary: We develop learning-based algorithms capable of synthesizing push recovery control policies for two different kinds of robots.
Our work branches into two closely related directions: 1) learning safe falling and fall prevention strategies for humanoid robots, and 2) learning fall prevention strategies for humans using robotic assistive devices.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to recover from an unexpected external perturbation is a
fundamental motor skill in bipedal locomotion. An effective response includes
the ability to not just recover balance and maintain stability but also to fall
in a safe manner when balance recovery is physically infeasible. For robots
associated with bipedal locomotion, such as humanoid robots and assistive
robotic devices that aid humans in walking, designing controllers that can
provide this stability and safety can prevent damage to the robot and avoid
injury-related medical costs. This is a challenging task because it involves
generating highly dynamic motion for a high-dimensional, non-linear and
under-actuated system with contacts. Despite prior advancements in using
model-based and optimization methods, challenges such as the need for
extensive domain knowledge, long computation times, and limited
robustness to changes in dynamics keep this an open problem. In this
thesis, to address these issues we develop learning-based algorithms capable of
synthesizing push recovery control policies for two different kinds of robots:
humanoid robots and assistive robotic devices that aid bipedal
locomotion. Our work branches into two closely related directions: 1)
learning safe falling and fall prevention strategies for humanoid robots, and 2)
learning fall prevention strategies for humans using robotic assistive
devices. To achieve this, we introduce a set of Deep Reinforcement Learning
(DRL) algorithms to learn control policies that improve safety while using
these robots.
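As a minimal sketch of the push-recovery setting the thesis studies (all dynamics parameters are hypothetical, and a hand-tuned PD ankle-torque controller stands in for a learned DRL policy):

```python
import numpy as np

def simulate_push_recovery(push=1.5, kp=60.0, kd=12.0, steps=400, dt=0.01):
    """Linearised inverted-pendulum model of a standing biped hit by a push.

    State: theta (lean angle, rad), omega (angular velocity, rad/s).
    The push is modelled as an initial angular velocity; a PD ankle-torque
    controller stands in for the learned recovery policy. Returns the final
    |theta|; a small value means balance was recovered.
    """
    g, l = 9.81, 1.0            # gravity, pendulum length (hypothetical values)
    theta, omega = 0.0, push
    for _ in range(steps):
        torque = -kp * theta - kd * omega          # recovery "policy"
        alpha = (g / l) * np.sin(theta) + torque   # angular acceleration
        omega += alpha * dt
        theta += omega * dt
    return abs(theta)
```

With the controller active the pendulum returns upright; with the gains zeroed (no policy) the same push topples it, which is the gap a learned push-recovery policy has to close.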
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of traversing diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Deception Game: Closing the Safety-Learning Loop in Interactive Robot Autonomy [7.915956857741506]
Existing safety methods often neglect the robot's ability to learn and adapt at runtime, leading to overly conservative behavior.
This paper proposes a new closed-loop paradigm for synthesizing safe control policies that explicitly account for the robot's evolving uncertainty.
arXiv Detail & Related papers (2023-09-03T20:34:01Z)
- Quality-Diversity Optimisation on a Physical Robot Through Dynamics-Aware and Reset-Free Learning [4.260312058817663]
We build upon the Reset-Free QD (RF-QD) algorithm to learn controllers directly on a physical robot.
This method uses a dynamics model, learned from interactions between the robot and the environment, to predict the robot's behaviour.
RF-QD also includes a recovery policy that returns the robot to a safe zone when it has walked outside of it, allowing continuous learning.
arXiv Detail & Related papers (2023-04-24T13:24:00Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Adversarial joint attacks on legged robots [3.480626767752489]
We address adversarial attacks on the actuators at the joints of legged robots trained by deep reinforcement learning.
In this study, we demonstrate that the adversarial perturbations to the torque control signals of the actuators can significantly reduce the rewards and cause walking instability in robots.
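The attack described above can be sketched with a toy stand-in (the pendulum model and all parameters are hypothetical, and simple random search replaces the paper's learned adversary):

```python
import numpy as np

def rollout(bias, kp=60.0, kd=12.0, steps=300, dt=0.01):
    """Toy balancing task: a PD controller holds an inverted pendulum upright
    while a constant adversarial bias is added to its torque command.

    Returns the accumulated |lean angle|, which the attacker maximises.
    """
    g, l = 9.81, 1.0
    theta, omega, cost = 0.05, 0.0, 0.0
    for _ in range(steps):
        torque = -kp * theta - kd * omega + bias   # perturbed actuator signal
        omega += ((g / l) * np.sin(theta) + torque) * dt
        theta += omega * dt
        cost += abs(theta) * dt
    return cost

def adversarial_bias(eps=5.0, n_samples=64, seed=0):
    """Random-search attack: pick the worst constant bias with |bias| <= eps."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-eps, eps, n_samples)
    return max(candidates, key=rollout)
```

Even this crude bounded perturbation measurably degrades tracking, illustrating why torque-level attacks matter for RL-trained legged controllers.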
arXiv Detail & Related papers (2022-05-20T11:30:23Z)
- Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
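The domain-randomization idea can be sketched as follows (the dynamics model, parameter ranges, and `train_episode` are hypothetical stand-ins, not the paper's setup):

```python
import numpy as np

def train_episode(mass, friction, policy_gain):
    """One toy episode: track a target velocity under sampled dynamics.

    Returns the accumulated tracking error for this draw of parameters.
    """
    target_v, v, err, dt = 1.0, 0.0, 0.0, 0.01
    for _ in range(100):
        force = policy_gain * (target_v - v)       # trivial stand-in "policy"
        v += (force - friction * v) / mass * dt
        err += abs(target_v - v) * dt
    return err

def domain_randomized_errors(n_episodes=50, seed=0):
    """Resample the dynamics parameters from broad ranges every episode, so the
    policy is exercised across many plausible variants of the robot."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_episodes):
        mass = rng.uniform(0.5, 2.0)       # randomised link mass
        friction = rng.uniform(0.1, 1.0)   # randomised joint friction
        errors.append(train_episode(mass, friction, policy_gain=20.0))
    return errors
```

Training on the resulting spread of dynamics, rather than on one fixed model, is what pushes the policy toward sim-to-real robustness.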
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
- Learning Locomotion Skills in Evolvable Robots [10.167123492952694]
We introduce a controller architecture and a generic learning method to allow a modular robot with an arbitrary shape to learn to walk towards a target and follow this target if it moves.
Our approach is validated on three robots, a spider, a gecko, and their offspring, in three real-world scenarios.
arXiv Detail & Related papers (2020-10-19T14:01:50Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
- Learning to Walk in the Real World with Minimal Human Effort [80.7342153519654]
We develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort.
Our system can automatically and efficiently learn locomotion skills on a Minitaur robot with little human intervention.
arXiv Detail & Related papers (2020-02-20T03:36:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.