Reinforcement Learning for Robust Parameterized Locomotion Control of
Bipedal Robots
- URL: http://arxiv.org/abs/2103.14295v1
- Date: Fri, 26 Mar 2021 07:14:01 GMT
- Title: Reinforcement Learning for Robust Parameterized Locomotion Control of
Bipedal Robots
- Authors: Zhongyu Li, Xuxin Cheng, Xue Bin Peng, Pieter Abbeel, Sergey Levine,
Glen Berseth, Koushil Sreenath
- Abstract summary: We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
- Score: 121.42930679076574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing robust walking controllers for bipedal robots is a challenging
endeavor. Traditional model-based locomotion controllers require simplifying
assumptions and careful modelling; any small errors can result in unstable
control. To address these challenges for bipedal locomotion, we present a
model-free reinforcement learning framework for training robust locomotion
policies in simulation, which can then be transferred to a real bipedal Cassie
robot. To facilitate sim-to-real transfer, domain randomization is used to
encourage the policies to learn behaviors that are robust across variations in
system dynamics. The learned policies enable Cassie to perform a set of diverse
and dynamic behaviors, while also being more robust than traditional
controllers and prior learning-based methods that use residual control. We
demonstrate this on versatile walking behaviors such as tracking a target
walking velocity, walking height, and turning yaw.
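To make the training recipe concrete, below is a minimal Python sketch of how domain randomization and command-parameterized tracking rewards might be combined in a framework like this. It is an illustrative sketch only: the parameter names, randomization ranges, reward weights, and the commented-out simulator constructor are assumptions for exposition, not values or code from the paper.

```python
# Illustrative sketch: names, ranges, and weights are assumptions, not the paper's values.
import numpy as np

def sample_randomized_dynamics(rng):
    """Sample one set of simulator dynamics parameters (domain randomization)."""
    return {
        "link_mass_scale":     rng.uniform(0.9, 1.1),   # +/-10% link masses
        "joint_damping_scale": rng.uniform(0.8, 1.2),   # joint damping variation
        "ground_friction":     rng.uniform(0.6, 1.2),   # contact friction coefficient
        "actuation_delay_s":   rng.uniform(0.0, 0.02),  # sensor-to-torque latency
    }

def sample_command(rng):
    """Sample a parameterized walking command the policy must track."""
    return {
        "forward_velocity": rng.uniform(-0.5, 1.0),     # m/s
        "walking_height":   rng.uniform(0.65, 0.9),     # m
        "turn_yaw_rate":    rng.uniform(-0.4, 0.4),     # rad/s
    }

def tracking_reward(state, command):
    """Reward matching the commanded velocity, walking height, and yaw rate."""
    vel_err    = state["forward_velocity"] - command["forward_velocity"]
    height_err = state["pelvis_height"]    - command["walking_height"]
    yaw_err    = state["yaw_rate"]         - command["turn_yaw_rate"]
    return (np.exp(-5.0 * vel_err ** 2)
            + np.exp(-20.0 * height_err ** 2)
            + np.exp(-5.0 * yaw_err ** 2))

# Typical loop structure: re-randomize dynamics each episode so one policy
# must succeed across the whole sampled family of simulators.
rng = np.random.default_rng(0)
for episode in range(3):
    dynamics = sample_randomized_dynamics(rng)   # new physics every episode
    command = sample_command(rng)                # new tracking target every episode
    # env = make_cassie_sim(**dynamics)          # hypothetical simulator constructor
    dummy_state = {"forward_velocity": 0.4, "pelvis_height": 0.8, "yaw_rate": 0.0}
    print(episode, round(tracking_reward(dummy_state, command), 3))
```

In a setup like this, the commanded velocity, height, and yaw rate would also be appended to the policy's observation so that a single policy covers the whole command range.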
Related papers
- BiRoDiff: Diffusion policies for bipedal robot locomotion on unseen terrains [0.9480364746270075]
Locomotion on unknown terrains is essential for bipedal robots to handle novel real-world challenges.
We introduce a lightweight framework that learns a single walking controller capable of locomotion on multiple terrains.
arXiv Detail & Related papers (2024-07-07T16:03:33Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Learning to Walk by Steering: Perceptive Quadrupedal Locomotion in Dynamic Environments [25.366480092589022]
A quadrupedal robot must exhibit robust and agile walking behaviors in response to environmental clutter and moving obstacles.
We present a hierarchical learning framework, named PRELUDE, which decomposes the problem of perceptive locomotion into high-level decision-making and low-level gait generation.
We demonstrate the effectiveness of our approach in simulation and with hardware experiments.
arXiv Detail & Related papers (2022-09-19T17:55:07Z)
- An Adaptable Approach to Learn Realistic Legged Locomotion without Examples [38.81854337592694]
This work proposes a generic approach for ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum (SLIP) model as a reference (a brief SLIP sketch follows this list).
We present experimental results showing that even in a model-free setup, the learned policies can generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot.
arXiv Detail & Related papers (2021-10-28T10:14:47Z)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
- Learning Quadrupedal Locomotion over Challenging Terrain [68.51539602703662]
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
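The spring-loaded inverted pendulum (SLIP) model referenced in the "Adaptable Approach to Learn Realistic Legged Locomotion" entry above is a standard reduced-order template for legged locomotion: a point mass bouncing on a massless spring leg. The sketch below shows only its stance-phase dynamics; the mass, stiffness, and rest-length values are illustrative assumptions, and the closing comment about reward shaping is a plausible reading of that paper's summary rather than its actual method.

```python
# Minimal spring-loaded inverted pendulum (SLIP) stance-phase dynamics.
# Mass, stiffness, and rest-length values are illustrative assumptions.
import numpy as np

M, K, L0, G = 32.0, 8000.0, 0.9, 9.81  # kg, N/m, m, m/s^2 (example values)

def slip_stance_dynamics(state):
    """state = [x, y, vx, vy] of the point mass, with the stance foot at the origin."""
    x, y, vx, vy = state
    l = np.hypot(x, y)            # current leg length
    f = K * (L0 - l)              # spring force magnitude along the leg
    ax = f * x / (M * l)
    ay = f * y / (M * l) - G
    return np.array([vx, vy, ax, ay])

def integrate(state, dt=1e-3, steps=300):
    """Explicit Euler roll-out of the stance phase (sufficient for illustration)."""
    for _ in range(steps):
        state = state + dt * slip_stance_dynamics(state)
    return state

# A reference gait from this model could, for example, supply target center-of-mass
# heights or contact timings that a learned policy is rewarded for matching.
print(integrate(np.array([-0.15, 0.85, 0.6, 0.0])))
```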
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.