Barkour: Benchmarking Animal-level Agility with Quadruped Robots
- URL: http://arxiv.org/abs/2305.14654v1
- Date: Wed, 24 May 2023 02:49:43 GMT
- Title: Barkour: Benchmarking Animal-level Agility with Quadruped Robots
- Authors: Ken Caluwaerts, Atil Iscen, J. Chase Kew, Wenhao Yu, Tingnan Zhang,
Daniel Freeman, Kuang-Huei Lee, Lisa Lee, Stefano Saliceti, Vincent Zhuang,
Nathan Batchelor, Steven Bohez, Federico Casarini, Jose Enrique Chen, Omar
Cortes, Erwin Coumans, Adil Dostmohamed, Gabriel Dulac-Arnold, Alejandro
Escontrela, Erik Frey, Roland Hafner, Deepali Jain, Bauyrjan Jyenis, Yuheng
Kuang, Edward Lee, Linda Luu, Ofir Nachum, Ken Oslund, Jason Powell, Diego
Reyes, Francesco Romano, Fereshteh Sadeghi, Ron Sloat, Baruch Tabanpour,
Daniel Zheng, Michael Neunert, Raia Hadsell, Nicolas Heess, Francesco Nori,
Jeff Seto, Carolina Parada, Vikas Sindhwani, Vincent Vanhoucke, and Jie Tan
- Abstract summary: We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism.
We present two methods for tackling the benchmark.
- Score: 70.97471756305463
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Animals have evolved various agile locomotion strategies, such as sprinting,
leaping, and jumping. There is a growing interest in developing legged robots
that move like their biological counterparts and show various agile skills to
navigate complex environments quickly. Despite the interest, the field lacks
systematic benchmarks to measure the performance of control policies and
hardware in agility. We introduce the Barkour benchmark, an obstacle course to
quantify agility for legged robots. Inspired by dog agility competitions, it
consists of diverse obstacles and a time-based scoring mechanism. This
encourages researchers to develop controllers that not only move fast, but do
so in a controllable and versatile way. To set strong baselines, we present two
methods for tackling the benchmark. In the first approach, we train specialist
locomotion skills using on-policy reinforcement learning methods and combine
them with a high-level navigation controller. In the second approach, we
distill the specialist skills into a Transformer-based generalist locomotion
policy, named Locomotion-Transformer, that can handle various terrains and
adjust the robot's gait based on the perceived environment and robot states.
Using a custom-built quadruped robot, we demonstrate that our method can
complete the course at half the speed of a dog. We hope that our work
represents a step towards creating controllers that enable robots to reach
animal-level agility.
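The time-based scoring described in the abstract can be illustrated with a small sketch. The snippet below is a hypothetical reconstruction in Python: the target time, the per-obstacle penalty, and the per-second time penalty are illustrative assumptions, not the official Barkour constants.
```python
# Hypothetical sketch of a time-based agility score in the spirit of the
# Barkour benchmark. The constants below (target time, obstacle penalty,
# time penalty) are illustrative assumptions, not the paper's values.
from dataclasses import dataclass


@dataclass
class RunResult:
    course_time_s: float   # wall-clock time taken to finish the course
    failed_obstacles: int  # obstacles skipped or failed during the run


def agility_score(result: RunResult,
                  target_time_s: float = 10.0,
                  obstacle_penalty: float = 0.1,
                  time_penalty_per_s: float = 0.01) -> float:
    """Start from a perfect score of 1.0, subtract a fixed penalty per failed
    obstacle, and a time penalty for every second over the target time."""
    score = 1.0
    score -= obstacle_penalty * result.failed_obstacles
    score -= time_penalty_per_s * max(0.0, result.course_time_s - target_time_s)
    return max(0.0, min(1.0, score))


if __name__ == "__main__":
    # Example: all obstacles cleared, but the run finished 4 s over target.
    print(agility_score(RunResult(course_time_s=14.0, failed_obstacles=0)))
```
A score of this form rewards controllers that are both fast and reliable: skipping an obstacle or finishing slowly both lower the score, which matches the abstract's emphasis on controllable, versatile speed.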
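The second baseline, distilling specialist skills into a Transformer-based generalist policy, can also be sketched. The code below is a minimal, hypothetical illustration: the model dimensions, observation and action sizes, and the behaviour-cloning loss are assumptions for clarity and do not reproduce the paper's Locomotion-Transformer architecture or training setup.
```python
# Minimal sketch of distilling specialist skills into a Transformer-based
# generalist locomotion policy. Shapes, sizes, and the synthetic batch are
# illustrative assumptions only.
import torch
import torch.nn as nn


class GeneralistLocomotionPolicy(nn.Module):
    def __init__(self, obs_dim: int = 64, act_dim: int = 12,
                 d_model: int = 128, n_layers: int = 4, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, obs_history: torch.Tensor) -> torch.Tensor:
        # obs_history: (batch, time, obs_dim) of perception + robot state.
        tokens = self.encoder(self.embed(obs_history))
        return self.head(tokens[:, -1])  # action for the latest timestep


def distill_step(policy, optimizer, obs_history, specialist_actions):
    """One behaviour-cloning step: regress the generalist's action onto the
    action a specialist policy took on its own rollout."""
    loss = nn.functional.mse_loss(policy(obs_history), specialist_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    policy = GeneralistLocomotionPolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
    obs = torch.randn(8, 16, 64)    # fake specialist rollout observations
    acts = torch.randn(8, 12)       # fake specialist actions (targets)
    print(distill_step(policy, opt, obs, acts))
```
Conditioning on a history of observations is what would let a single policy adjust its gait to the perceived terrain, which is the role the abstract assigns to Locomotion-Transformer.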
Related papers
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Generalized Animal Imitator: Agile Locomotion with Versatile Motion Prior [14.114972332185044]
This paper introduces the Versatile Motion prior (VIM) - a Reinforcement Learning framework designed to incorporate a range of agile locomotion tasks.
Our framework enables legged robots to learn diverse agile low-level skills by imitating animal motions and manually designed motions.
Our evaluations of the VIM framework span both simulation environments and real-world deployment.
arXiv Detail & Related papers (2023-10-02T17:59:24Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion [34.33972863987201]
We train quadruped robots to use the front legs to climb walls, press buttons, and perform object interaction in the real world.
These skills are trained in simulation using curriculum and transferred to the real world using our proposed sim2real variant.
We evaluate our method in both simulation and the real world, showing successful executions of both short- and long-range tasks.
arXiv Detail & Related papers (2023-03-20T17:59:58Z)
- Robust and Versatile Bipedal Jumping Control through Reinforcement Learning [141.56016556936865]
This work aims to push the limits of agility for bipedal robots by enabling a torque-controlled bipedal robot to perform robust and versatile dynamic jumps in the real world.
We present a reinforcement learning framework for training a robot to accomplish a large variety of jumping tasks, such as jumping to different locations and directions.
We develop a new policy structure that encodes the robot's long-term input/output (I/O) history while also providing direct access to a short-term I/O history.
arXiv Detail & Related papers (2023-02-19T01:06:09Z)
- Creating a Dynamic Quadrupedal Robotic Goalkeeper with Reinforcement Learning [18.873152528330063]
We present a reinforcement learning (RL) framework that enables quadrupedal robots to perform soccer goalkeeping tasks in the real world.
Soccer goalkeeping using quadrupeds is a challenging problem that combines highly dynamic locomotion with precise and fast non-prehensile object (ball) manipulation.
We deploy the proposed framework on a Mini Cheetah quadrupedal robot and demonstrate the effectiveness of our framework for various agile interceptions of a fast-moving ball in the real world.
arXiv Detail & Related papers (2022-10-10T04:54:55Z)
- Learning Agile Locomotion via Adversarial Training [59.03007947334165]
In this paper, we present a multi-agent learning system, in which a quadruped robot (protagonist) learns to chase another robot (adversary) while the latter learns to escape.
We find that this adversarial training process not only encourages agile behaviors but also effectively alleviates the laborious environment design effort.
In contrast to prior works that used only one adversary, we find that training an ensemble of adversaries, each of which specializes in a different escaping strategy, is essential for the protagonist to master agility.
arXiv Detail & Related papers (2020-08-03T01:20:37Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)