Learning Agile Robotic Locomotion Skills by Imitating Animals
- URL: http://arxiv.org/abs/2004.00784v3
- Date: Tue, 21 Jul 2020 00:59:24 GMT
- Title: Learning Agile Robotic Locomotion Skills by Imitating Animals
- Authors: Xue Bin Peng, Erwin Coumans, Tingnan Zhang, Tsang-Wei Lee, Jie Tan,
Sergey Levine
- Abstract summary: Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
- Score: 72.36395376558984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reproducing the diverse and agile locomotion skills of animals has been a
longstanding challenge in robotics. While manually-designed controllers have
been able to emulate many complex behaviors, building such controllers involves
a time-consuming and difficult development process, often requiring substantial
expertise of the nuances of each skill. Reinforcement learning provides an
appealing alternative for automating the manual effort involved in the
development of controllers. However, designing learning objectives that elicit
the desired behaviors from an agent can also require a great deal of
skill-specific expertise. In this work, we present an imitation learning system
that enables legged robots to learn agile locomotion skills by imitating
real-world animals. We show that by leveraging reference motion data, a single
learning-based approach is able to automatically synthesize controllers for a
diverse repertoire of behaviors for legged robots. By incorporating sample
efficient domain adaptation techniques into the training process, our system is
able to learn adaptive policies in simulation that can then be quickly adapted
for real-world deployment. To demonstrate the effectiveness of our system, we
train an 18-DoF quadruped robot to perform a variety of agile behaviors ranging
from different locomotion gaits to dynamic hops and turns.
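To make the imitation objective concrete, below is a minimal sketch of the kind of per-timestep tracking reward such a system can use: the policy is rewarded for matching the retargeted reference motion's joint angles, joint velocities, and root position. The function name, term weights, and error scales are illustrative assumptions, not the paper's exact formulation.
```python
import numpy as np

def imitation_reward(joint_pos, joint_vel, root_pos,
                     ref_joint_pos, ref_joint_vel, ref_root_pos):
    # Exponentiated tracking errors between the robot's current state and
    # the retargeted animal reference motion at the same time step.
    pose_err = np.sum((joint_pos - ref_joint_pos) ** 2)
    vel_err = np.sum((joint_vel - ref_joint_vel) ** 2)
    root_err = np.sum((root_pos - ref_root_pos) ** 2)

    r_pose = np.exp(-5.0 * pose_err)    # match reference joint angles
    r_vel = np.exp(-0.1 * vel_err)      # match reference joint velocities
    r_root = np.exp(-20.0 * root_err)   # match reference root position

    # Weighted sum; the weights and scales here are placeholders, not the
    # paper's values. A full system would also track end-effector positions
    # and root orientation/velocity.
    return 0.6 * r_pose + 0.1 * r_vel + 0.3 * r_root
```
A reinforcement learning algorithm such as PPO would then maximize this reward along the reference clip in simulation, with domain randomization and the adaptation step described above used to transfer the resulting policy to the real robot.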
Related papers
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Generalized Animal Imitator: Agile Locomotion with Versatile Motion Prior [14.114972332185044]
This paper introduces the Versatile Motion prior (VIM) - a Reinforcement Learning framework designed to incorporate a range of agile locomotion tasks.
Our framework enables legged robots to learn diverse agile low-level skills by imitating animal motions and manually designed motions.
Our evaluations of the VIM framework span both simulation environments and real-world deployment.
arXiv Detail & Related papers (2023-10-02T17:59:24Z)
- Barkour: Benchmarking Animal-level Agility with Quadruped Robots [70.97471756305463]
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism.
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Adaptation of Quadruped Robot Locomotion with Meta-Learning [64.71260357476602]
We demonstrate that meta-reinforcement learning can be used to successfully train a robot capable of solving a wide range of locomotion tasks.
The performance of the meta-trained robot is similar to that of a robot that is trained on a single task.
arXiv Detail & Related papers (2021-07-08T10:37:18Z)
- Towards General and Autonomous Learning of Core Skills: A Case Study in Locomotion [19.285099263193622]
We develop a learning framework that can learn sophisticated locomotion behavior for a wide spectrum of legged robots.
Our learning framework relies on a data-efficient, off-policy multi-task RL algorithm and a small set of reward functions that are semantically identical across robots.
For nine different types of robots, including a real-world quadruped robot, we demonstrate that the same algorithm can rapidly learn diverse and reusable locomotion skills.
arXiv Detail & Related papers (2020-08-06T08:23:55Z)
- Learning Agile Locomotion via Adversarial Training [59.03007947334165]
In this paper, we present a multi-agent learning system, in which a quadruped robot (protagonist) learns to chase another robot (adversary) while the latter learns to escape.
We find that this adversarial training process not only encourages agile behaviors but also effectively alleviates the laborious environment design effort.
In contrast to prior works that used only one adversary, we find that training an ensemble of adversaries, each of which specializes in a different escaping strategy, is essential for the protagonist to master agility.
arXiv Detail & Related papers (2020-08-03T01:20:37Z)