Legged Locomotion in Challenging Terrains using Egocentric Vision
- URL: http://arxiv.org/abs/2211.07638v1
- Date: Mon, 14 Nov 2022 18:59:58 GMT
- Title: Legged Locomotion in Challenging Terrains using Egocentric Vision
- Authors: Ananye Agarwal, Ashish Kumar, Jitendra Malik, Deepak Pathak
- Abstract summary: We present the first end-to-end locomotion system capable of traversing stairs, curbs, stepping stones, and gaps.
We show this result on a medium-sized quadruped robot using a single front-facing depth camera.
- Score: 70.37554680771322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Animals are capable of precise and agile locomotion using vision. Replicating
this ability has been a long-standing goal in robotics. The traditional
approach has been to decompose this problem into elevation mapping and foothold
planning phases. The elevation mapping, however, is susceptible to failure and
large noise artifacts, requires specialized hardware, and is biologically
implausible. In this paper, we present the first end-to-end locomotion system
capable of traversing stairs, curbs, stepping stones, and gaps. We show this
result on a medium-sized quadruped robot using a single front-facing depth
camera. The small size of the robot necessitates discovering specialized gait
patterns not seen elsewhere. The egocentric camera requires the policy to
remember past information to estimate the terrain under its hind feet. We train
our policy in simulation. Training has two phases: first, we train a policy
using reinforcement learning with a cheap-to-compute variant of the depth image;
second, we distill it via supervised learning into the final policy that
operates directly on depth. The resulting policy transfers to the real world and is
able to run in real-time on the limited compute of the robot. It can traverse a
large variety of terrain while being robust to perturbations like pushes,
slippery surfaces, and rocky terrain. Videos are at
https://vision-locomotion.github.io
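The two-phase scheme described above (reinforcement learning on a cheap-to-compute depth variant, then supervised distillation into a policy that consumes depth) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: the Policy class, the observation/action dimensions, and the distill_step helper are hypothetical, and the phase-1 RL training itself (e.g., PPO) is omitted.

```python
# Illustrative sketch only: class names, shapes, and the distillation
# loss are assumptions, not the authors' released code.
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Recurrent policy; the GRU supplies the memory the abstract says is
    needed to estimate the terrain under the hind feet from a single
    front-facing egocentric camera."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ELU())
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs, h=None):
        # obs: (batch, time, obs_dim) -> actions: (batch, time, act_dim)
        z, h = self.gru(self.encoder(obs), h)
        return self.head(z), h

# Phase 1 (omitted here): train `teacher` with RL on the cheap-to-compute
# depth variant. The dimensions below are made up for the example.
teacher = Policy(obs_dim=48, act_dim=12)   # sees cheap depth features
student = Policy(obs_dim=230, act_dim=12)  # sees depth-image features
opt = torch.optim.Adam(student.parameters(), lr=3e-4)

def distill_step(cheap_obs: torch.Tensor, depth_obs: torch.Tensor) -> float:
    """Phase 2: regress the student's actions onto the frozen teacher's
    actions for the same simulated rollout."""
    with torch.no_grad():
        target, _ = teacher(cheap_obs)
    pred, _ = student(depth_obs)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example call on random stand-in rollouts (batch=8, horizon=50):
loss = distill_step(torch.randn(8, 50, 48), torch.randn(8, 50, 230))
```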
Related papers
- Learning Humanoid Locomotion over Challenging Terrain [84.35038297708485]
We present a learning-based approach for blind humanoid locomotion capable of traversing challenging natural and man-made terrains.
Our model is first pre-trained on a dataset of flat-ground trajectories with sequence modeling, and then fine-tuned on uneven terrain using reinforcement learning.
We evaluate our model on a real humanoid robot across a variety of terrains, including rough, deformable, and sloped surfaces.
arXiv Detail & Related papers (2024-10-04T17:57:09Z)
- Gaitor: Learning a Unified Representation Across Gaits for Real-World Quadruped Locomotion [61.01039626207952]
We present Gaitor, which learns a disentangled, two-dimensional representation across locomotion gaits.
Gaitor's latent space is readily interpretable and we discover that during gait transitions, novel unseen gaits emerge.
We evaluate Gaitor in both simulation and the real world on the ANYmal C platform.
arXiv Detail & Related papers (2024-05-29T19:02:57Z)
- Barkour: Benchmarking Animal-level Agility with Quadruped Robots [70.97471756305463]
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism.
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z)
- Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion [34.33972863987201]
We train quadruped robots to use their front legs to climb walls, press buttons, and interact with objects in the real world.
These skills are trained in simulation using a curriculum and transferred to the real world using our proposed sim2real variant.
We evaluate our method in both simulation and the real world, showing successful execution of both short- and long-range tasks.
arXiv Detail & Related papers (2023-03-20T17:59:58Z)
- Advanced Skills by Learning Locomotion and Local Navigation End-to-End [10.872193480485596]
In this work, we propose to solve the complete locomotion-and-local-navigation problem by training a single end-to-end policy with deep reinforcement learning.
We demonstrate the successful deployment of policies on a real quadrupedal robot.
arXiv Detail & Related papers (2022-09-26T16:35:00Z)
- Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z)
- Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z)
- Quadruped Locomotion on Non-Rigid Terrain using Reinforcement Learning [10.729374293332281]
We present a novel reinforcement learning framework for learning locomotion on non-rigid dynamic terrains.
A trained robot with a 55 cm base length can walk on terrain that sinks by up to 5 cm.
We show the effectiveness of our method by training the robot with various terrain conditions.
arXiv Detail & Related papers (2021-07-07T00:34:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.