Robot Parkour Learning
- URL: http://arxiv.org/abs/2309.05665v2
- Date: Tue, 12 Sep 2023 03:01:55 GMT
- Title: Robot Parkour Learning
- Authors: Ziwen Zhuang, Zipeng Fu, Jianren Wang, Christopher Atkeson, Soeren
Schwertfeger, Chelsea Finn, Hang Zhao
- Abstract summary: Parkour is a grand challenge for legged locomotion that requires robots to overcome various obstacles rapidly.
We develop a reinforcement learning method inspired by direct collocation to generate parkour skills.
We distill these skills into a single vision-based parkour policy and transfer it to a quadrupedal robot using its egocentric depth camera.
- Score: 70.56172796132368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parkour is a grand challenge for legged locomotion that requires robots to
overcome various obstacles rapidly in complex environments. Existing methods
can generate either diverse but blind locomotion skills or vision-based but
specialized skills by using reference animal data or complex rewards. However,
autonomous parkour requires robots to learn generalizable skills that are both
vision-based and diverse to perceive and react to various scenarios. In this
work, we propose a system for learning a single end-to-end vision-based parkour
policy of diverse parkour skills using a simple reward without any reference
motion data. We develop a reinforcement learning method inspired by direct
collocation to generate parkour skills, including climbing over high obstacles,
leaping over large gaps, crawling beneath low barriers, squeezing through thin
slits, and running. We distill these skills into a single vision-based parkour
policy and transfer it to a quadrupedal robot using its egocentric depth
camera. We demonstrate that our system can empower two different low-cost
robots to autonomously select and execute appropriate parkour skills to
traverse challenging real-world environments.
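The second stage described in the abstract, distilling several specialist skill policies into one generalist policy, is essentially supervised behavior cloning of the specialists' actions. Below is a minimal, hypothetical sketch of that idea: the "teachers" are hand-written linear functions standing in for trained specialist policies, the "student" is a single linear model with scenario-gated features, and training is plain SGD on squared action error. None of this reflects the paper's actual networks, observations, or distillation details (which use egocentric depth images); it only illustrates the distillation pattern.

```python
import random

def climb_teacher(obs):
    # hypothetical specialist for climbing over high obstacles
    return 2.0 * obs[0] + 0.5 * obs[1]

def crawl_teacher(obs):
    # hypothetical specialist for crawling beneath low barriers
    return -1.5 * obs[0] + 0.2 * obs[1]

TEACHERS = [climb_teacher, crawl_teacher]

def _features(obs, sid):
    # observation plus scenario-gated interaction terms, so one linear
    # student can represent a different linear map per scenario
    return (obs[0], obs[1], sid * obs[0], sid * obs[1])

def distill(steps=20000, lr=0.05, seed=0):
    """Behavior-clone the specialists into one linear student by SGD on
    squared action error over randomly sampled scenarios."""
    rng = random.Random(seed)
    w = [0.0, 0.0, 0.0, 0.0]
    for _ in range(steps):
        sid = rng.randrange(len(TEACHERS))
        obs = (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))
        x = _features(obs, sid)
        # squared-error gradient step toward the active teacher's action
        err = sum(wi * xi for wi, xi in zip(w, x)) - TEACHERS[sid](obs)
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def student_action(w, obs, sid):
    return sum(wi * xi for wi, xi in zip(w, _features(obs, sid)))
```

After training, the single student reproduces each specialist's behavior when given the corresponding scenario input, which is the property the paper's distilled vision-based policy needs in order to select and execute the right skill at deployment.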
Related papers
- SoloParkour: Constrained Reinforcement Learning for Visual Locomotion from Privileged Experience [19.817578964184147]

Parkour poses a significant challenge for legged robots, requiring navigation through complex environments with agility and precision based on limited sensory inputs.
We introduce a novel method for training end-to-end visual policies, from depth pixels to robot control commands, to achieve agile and safe quadruped locomotion.
We demonstrate the effectiveness of our method on a real Solo-12 robot, showcasing its capability to perform a variety of parkour skills such as walking, climbing, leaping, and crawling.
arXiv Detail & Related papers (2024-09-20T17:39:20Z)
- Extreme Parkour with Legged Robots [43.041181063455255]
We show how a single neural net policy operating directly from a camera image can overcome imprecise sensing and actuation.
We show our robot can perform a high jump on obstacles 2x its height, long jump across gaps 2x its length, do a handstand and run across tilted ramps.
arXiv Detail & Related papers (2023-09-25T17:59:55Z)
- Barkour: Benchmarking Animal-level Agility with Quadruped Robots [70.97471756305463]
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism.
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z)
- Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion [34.33972863987201]
We train quadruped robots to use the front legs to climb walls, press buttons, and perform object interaction in the real world.
These skills are trained in simulation using a curriculum and transferred to the real world using our proposed sim2real variant.
We evaluate our method in both simulation and the real world, showing successful execution of both short- and long-range tasks.
arXiv Detail & Related papers (2023-03-20T17:59:58Z)
- Robust and Versatile Bipedal Jumping Control through Reinforcement Learning [141.56016556936865]
This work aims to push the limits of agility for bipedal robots by enabling a torque-controlled bipedal robot to perform robust and versatile dynamic jumps in the real world.
We present a reinforcement learning framework for training a robot to accomplish a large variety of jumping tasks, such as jumping to different locations and directions.
We develop a new policy structure that encodes the robot's long-term input/output (I/O) history while also providing direct access to a short-term I/O history.
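The policy structure summarized above, a compressed long-term I/O history combined with direct access to a short-term raw I/O history, can be sketched minimally. In the sketch below, everything is a hypothetical simplification: the paper's learned long-history encoder is replaced by a running elementwise mean, purely to illustrate how the two history scales are combined into one feature vector.

```python
from collections import deque

class IOHistoryPolicyInput:
    """Toy sketch of a policy input combining a summarized long-term
    I/O history with the last few raw I/O steps (hypothetical
    simplification: the long-term summary is a running mean, not a
    learned encoder)."""

    def __init__(self, long_len=100, short_len=5, io_dim=3):
        self.long = deque(maxlen=long_len)  # bounded long-term buffer
        self.short_len = short_len
        self.io_dim = io_dim

    def record(self, io_step):
        assert len(io_step) == self.io_dim
        self.long.append(list(io_step))

    def features(self):
        # Long-term summary: elementwise mean over the stored window.
        if self.long:
            summary = [sum(s[i] for s in self.long) / len(self.long)
                       for i in range(self.io_dim)]
        else:
            summary = [0.0] * self.io_dim
        # Short-term: last `short_len` raw steps, zero-padded at the front.
        recent = list(self.long)[-self.short_len:]
        pad = [[0.0] * self.io_dim] * (self.short_len - len(recent))
        flat_short = [v for step in pad + recent for v in step]
        return summary + flat_short
```

The design intuition, per the abstract, is that the summary captures slowly varying context while the raw recent steps preserve the fine timing information that dynamic jumps depend on.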
arXiv Detail & Related papers (2023-02-19T01:06:09Z)
- Learning Agile Locomotion via Adversarial Training [59.03007947334165]
In this paper, we present a multi-agent learning system, in which a quadruped robot (protagonist) learns to chase another robot (adversary) while the latter learns to escape.
We find that this adversarial training process not only encourages agile behaviors but also effectively alleviates the laborious environment design effort.
In contrast to prior works that used only one adversary, we find that training an ensemble of adversaries, each of which specializes in a different escaping strategy, is essential for the protagonist to master agility.
arXiv Detail & Related papers (2020-08-03T01:20:37Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.