Extreme Parkour with Legged Robots
- URL: http://arxiv.org/abs/2309.14341v1
- Date: Mon, 25 Sep 2023 17:59:55 GMT
- Title: Extreme Parkour with Legged Robots
- Authors: Xuxin Cheng, Kexin Shi, Ananye Agarwal, Deepak Pathak
- Abstract summary: We show how a single neural net policy operating directly from a camera image can overcome imprecise sensing and actuation.
We show our robot can perform a high jump on obstacles 2x its height, long jump across gaps 2x its length, do a handstand and run across tilted ramps.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans can perform parkour by traversing obstacles in a highly dynamic
fashion requiring precise eye-muscle coordination and movement. Getting robots
to do the same task requires overcoming similar challenges. Classically, this
is done by independently engineering perception, actuation, and control systems
to very low tolerances. This restricts them to tightly controlled settings such
as a predetermined obstacle course in labs. In contrast, humans are able to
learn parkour through practice without significantly changing their underlying
biology. In this paper, we take a similar approach to developing robot parkour
on a small low-cost robot with imprecise actuation and a single front-facing
depth camera for perception which is low-frequency, jittery, and prone to
artifacts. We show how a single neural net policy operating directly from a
camera image, trained in simulation with large-scale RL, can overcome imprecise
sensing and actuation to output highly precise control behavior end-to-end. We
show our robot can perform a high jump on obstacles 2x its height, long jump
across gaps 2x its length, do a handstand and run across tilted ramps, and
generalize to novel obstacle courses with different physical properties.
Parkour videos at https://extreme-parkour.github.io/
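The abstract describes a single network mapping raw depth images to control end-to-end, but does not spell out the architecture. As a minimal sketch only, assuming a small CNN depth encoder fused with proprioception and an MLP head that emits joint-position targets for a low-level PD controller (all layer sizes, input resolution, and dimensions below are illustrative assumptions, not the paper's actual design):

```python
# Illustrative sketch only: the abstract does not specify the architecture,
# so the encoder size, observation layout, and action head are assumptions.
import torch
import torch.nn as nn

class DepthParkourPolicy(nn.Module):
    """Maps a low-resolution depth image plus proprioception to joint targets."""

    def __init__(self, proprio_dim: int = 48, num_joints: int = 12):
        super().__init__()
        # Small CNN encoder for a 64x64 depth image (resolution is assumed).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ELU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2), nn.ELU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 1, 64, 64)).shape[1]
        # MLP actor fuses vision features with joint states / base velocity.
        self.actor = nn.Sequential(
            nn.Linear(feat_dim + proprio_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, num_joints),  # target joint positions for a PD controller
        )

    def forward(self, depth: torch.Tensor, proprio: torch.Tensor) -> torch.Tensor:
        return self.actor(torch.cat([self.encoder(depth), proprio], dim=-1))

policy = DepthParkourPolicy()
action = policy(torch.randn(1, 1, 64, 64), torch.randn(1, 48))
print(action.shape)  # torch.Size([1, 12])
```

A recurrent memory (e.g., a GRU) between encoder and actor is a common extension for coping with the low-frequency, jittery camera stream the abstract mentions.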
Related papers
- SoloParkour: Constrained Reinforcement Learning for Visual Locomotion from Privileged Experience
Parkour poses a significant challenge for legged robots, requiring navigation through complex environments with agility and precision based on limited sensory inputs.
We introduce a novel method for training end-to-end visual policies, from depth pixels to robot control commands, to achieve agile and safe quadruped locomotion.
We demonstrate the effectiveness of our method on a real Solo-12 robot, showcasing its capability to perform a variety of parkour skills such as walking, climbing, leaping, and crawling.
arXiv Detail & Related papers (2024-09-20T17:39:20Z)
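SoloParkour (above) is described as constrained RL, but the summary gives no objective. A common pattern for safety-constrained policy optimization is a Lagrangian relaxation with a learned multiplier; the sketch below shows that generic pattern under an assumed cost budget and learning rate, and is not the paper's actual formulation:

```python
# Hedged sketch of the generic Lagrangian-relaxation pattern behind constrained RL;
# SoloParkour's actual objective, costs, and update rules are not given in the
# summary, so every quantity below is an illustrative placeholder.
import torch

log_lambda = torch.zeros(1, requires_grad=True)   # dual variable (log-space keeps it >= 0)
dual_opt = torch.optim.Adam([log_lambda], lr=1e-3)
cost_limit = 0.1  # assumed per-step safety budget (e.g., torque violations)

def lagrangian_policy_loss(reward_adv, cost_adv, log_prob):
    """Policy-gradient loss trading task reward against the safety cost."""
    lam = log_lambda.exp().detach()
    # Penalized advantage: the standard trick in Lagrangian constrained RL.
    return -(log_prob * (reward_adv - lam * cost_adv)).mean()

def dual_update(mean_episode_cost: float):
    """Dual ascent: raise lambda when the constraint is violated, lower it otherwise."""
    dual_opt.zero_grad()
    (-log_lambda.exp() * (mean_episode_cost - cost_limit)).backward()
    dual_opt.step()
```

Dual ascent raises the multiplier while the average safety cost exceeds the budget, so the policy is pushed toward conservative behavior exactly when it violates the constraint.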
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Robot Parkour Learning
Parkour is a grand challenge for legged locomotion that requires robots to overcome various obstacles rapidly.
We develop a reinforcement learning method inspired by direct collocation to generate parkour skills.
We distill these skills into a single vision-based parkour policy and transfer it to a quadrupedal robot using its egocentric depth camera.
arXiv Detail & Related papers (2023-09-11T17:59:17Z)
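Robot Parkour Learning (above) distills multiple skills into one vision policy. A standard recipe for this is DAgger-style supervised distillation, where privileged specialist teachers label the states the vision student visits; the sketch below is an assumed illustration of that pattern (the teachers, batch layout, and loss are hypothetical, not the paper's exact procedure):

```python
# Minimal sketch of DAgger-style distillation, assuming privileged "specialist"
# teachers (with ground-truth terrain state) supervising one depth-vision student.
import torch
import torch.nn as nn

def distill_step(student: nn.Module, teachers: dict, batch, optimizer):
    """One supervised update: match student actions to the active teacher's."""
    depth, proprio, priv_state, skill = batch  # skill id picks the specialist teacher
    with torch.no_grad():
        target_action = teachers[skill](priv_state)   # teacher sees privileged state
    student_action = student(depth, proprio)          # student sees only onboard sensors
    loss = nn.functional.mse_loss(student_action, target_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```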
- Barkour: Benchmarking Animal-level Agility with Quadruped Robots
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism.
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z)
- Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion
We train quadruped robots to use the front legs to climb walls, press buttons, and perform object interaction in the real world.
These skills are trained in simulation using a curriculum and transferred to the real world with our proposed sim2real variant.
We evaluate our method in both simulation and the real world, demonstrating successful execution of both short- and long-range tasks.
arXiv Detail & Related papers (2023-03-20T17:59:58Z)
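Legs as Manipulator (above) trains its skills in simulation using a curriculum, without further detail in the summary. One widely used scheme adapts task difficulty to the policy's recent success rate; the thresholds and step size below are assumptions for illustration, not the paper's actual schedule:

```python
# Generic success-rate curriculum: the difficulty variable (e.g., wall height or
# button distance) and the thresholds are illustrative assumptions.
def update_curriculum(difficulty: float, success_rate: float,
                      step: float = 0.05, lo: float = 0.5, hi: float = 0.8) -> float:
    """Promote when the policy succeeds often, demote when it struggles."""
    if success_rate > hi:
        difficulty = min(1.0, difficulty + step)   # make tasks harder
    elif success_rate < lo:
        difficulty = max(0.0, difficulty - step)   # ease off
    return difficulty
```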
- Legged Locomotion in Challenging Terrains using Egocentric Vision
We present the first end-to-end locomotion system capable of traversing stairs, curbs, stepping stones, and gaps.
We show this result on a medium-sized quadruped robot using a single front-facing depth camera.
arXiv Detail & Related papers (2022-11-14T18:59:58Z)
- Learning fast and agile quadrupedal locomotion over complex terrain
We propose a robust controller that achieves natural, stable, and fast locomotion on a real blind quadruped robot.
The controller is trained in simulation with model-free reinforcement learning.
It exhibits excellent disturbance rejection and generalizes to locomotion speeds it never encountered during training.
arXiv Detail & Related papers (2022-07-02T11:20:07Z)
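For the blind-quadruped controller above, the summary claims strong disturbance rejection but not how it is obtained. A widespread recipe is domain randomization plus random base pushes during simulated training; everything below, including the `sim` handle, its methods, and the force ranges, is hypothetical:

```python
# Sketch of one common way disturbance robustness is trained, assuming randomized
# dynamics and random base pushes; the paper's actual scheme is not detailed here.
import random

def randomize_episode(sim):
    """Hypothetical sim handle: randomize dynamics so one policy covers many conditions."""
    sim.set_friction(random.uniform(0.4, 1.2))
    sim.set_payload_mass(random.uniform(0.0, 3.0))   # kg, assumed range

def maybe_push(sim, t: int, interval: int = 200):
    """Apply a random horizontal impulse to the robot base every few hundred steps."""
    if t % interval == 0:
        fx, fy = random.uniform(-50, 50), random.uniform(-50, 50)  # N, assumed
        sim.apply_base_force(fx, fy)
```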
- Learning Quadrupedal Locomotion over Challenging Terrain
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z)