Legged Locomotion in Challenging Terrains using Egocentric Vision
- URL: http://arxiv.org/abs/2211.07638v1
- Date: Mon, 14 Nov 2022 18:59:58 GMT
- Title: Legged Locomotion in Challenging Terrains using Egocentric Vision
- Authors: Ananye Agarwal, Ashish Kumar, Jitendra Malik, Deepak Pathak
- Abstract summary: We present the first end-to-end locomotion system capable of traversing stairs, curbs, stepping stones, and gaps.
We show this result on a medium-sized quadruped robot using a single front-facing depth camera.
- Score: 70.37554680771322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Animals are capable of precise and agile locomotion using vision. Replicating
this ability has been a long-standing goal in robotics. The traditional
approach has been to decompose this problem into elevation mapping and foothold
planning phases. The elevation mapping, however, is susceptible to failure and
large noise artifacts, requires specialized hardware, and is biologically
implausible. In this paper, we present the first end-to-end locomotion system
capable of traversing stairs, curbs, stepping stones, and gaps. We show this
result on a medium-sized quadruped robot using a single front-facing depth
camera. The small size of the robot necessitates discovering specialized gait
patterns not seen elsewhere. The egocentric camera requires the policy to
remember past information to estimate the terrain under its hind feet. We train
our policy in simulation. Training has two phases: first, we train a policy
using reinforcement learning with a cheap-to-compute variant of the depth image;
then, in phase 2, we distill it into the final depth-based policy using
supervised learning. The resulting policy transfers to the real world and is
able to run in real-time on the limited compute of the robot. It can traverse a
large variety of terrain while being robust to perturbations like pushes,
slippery surfaces, and rocky terrain. Videos are at
https://vision-locomotion.github.io
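The two-phase recipe in the abstract (an RL "teacher" acting on a cheap-to-compute terrain representation, then supervised distillation into a "student" that consumes depth images) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the linear policies, and the simulated depth-to-terrain relationship are all made-up stand-ins for the paper's networks and simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper.
CHEAP_DIM, DEPTH_DIM, ACT_DIM = 8, 32, 4

# Phase 1 stand-in: a "teacher" policy mapping the cheap terrain
# representation to actions. In the paper this is trained with RL;
# here it is a fixed random linear map for illustration.
W_teacher = rng.normal(size=(ACT_DIM, CHEAP_DIM))

# We assume the depth image relates to the cheap representation via
# some unknown mapping, simulated here by a fixed linear projection.
P = rng.normal(size=(CHEAP_DIM, DEPTH_DIM)) / np.sqrt(DEPTH_DIM)

# Phase 2: distill the teacher into a depth-based student by
# supervised regression onto the teacher's actions (plain least
# squares here; the paper uses supervised learning on rollouts).
depth_batch = rng.normal(size=(1024, DEPTH_DIM))
cheap_batch = depth_batch @ P.T           # simulated cheap observations
actions = cheap_batch @ W_teacher.T       # teacher action labels
W_student, *_ = np.linalg.lstsq(depth_batch, actions, rcond=None)

def student(depth_obs):
    # Final policy: acts from depth alone, no cheap representation needed.
    return depth_obs @ W_student

# The student now imitates the teacher from depth images alone.
err = np.abs(student(depth_batch) - actions).max()
print(f"max imitation error: {err:.2e}")
```

The point of the structure is that the expensive-to-render depth image is only needed in phase 2, where learning is supervised and therefore far cheaper per sample than RL.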
Related papers
- HumanPlus: Humanoid Shadowing and Imitation from Humans [82.47551890765202]
We introduce a full-stack system for humanoids to learn motion and autonomous skills from human data.
We first train a low-level policy in simulation via reinforcement learning using existing 40-hour human motion datasets.
We then perform supervised behavior cloning to train skill policies using egocentric vision, allowing humanoids to complete different tasks autonomously.
arXiv Detail & Related papers (2024-06-15T00:41:34Z)
- Barkour: Benchmarking Animal-level Agility with Quadruped Robots [70.97471756305463]
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time based scoring mechanism.
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z)
- Neural Volumetric Memory for Visual Locomotion Control [11.871849736648237]
In this work, we consider the difficult problem of locomotion on challenging terrains using a single forward-facing depth camera.
To solve this problem, we follow the paradigm in computer vision that explicitly models the 3D geometry of the scene.
We show that our approach, which explicitly introduces geometric priors during training, outperforms more naïve methods.
arXiv Detail & Related papers (2023-04-03T17:59:56Z)
- Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion [34.33972863987201]
We train quadruped robots to use the front legs to climb walls, press buttons, and perform object interaction in the real world.
These skills are trained in simulation using curriculum and transferred to the real world using our proposed sim2real variant.
We evaluate our method in both simulation and the real world, showing successful executions of both short- and long-range tasks.
arXiv Detail & Related papers (2023-03-20T17:59:58Z)
- Advanced Skills by Learning Locomotion and Local Navigation End-to-End [10.872193480485596]
In this work, we propose to solve the complete problem by training an end-to-end policy with deep reinforcement learning.
We demonstrate the successful deployment of policies on a real quadrupedal robot.
arXiv Detail & Related papers (2022-09-26T16:35:00Z)
- Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z)
- Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z)
- Quadruped Locomotion on Non-Rigid Terrain using Reinforcement Learning [10.729374293332281]
We present a novel reinforcement learning framework for learning locomotion on non-rigid dynamic terrains.
A trained robot with a 55 cm base length can walk on terrain that sinks by up to 5 cm.
We show the effectiveness of our method by training the robot with various terrain conditions.
arXiv Detail & Related papers (2021-07-07T00:34:23Z)
- Learning to Walk in the Real World with Minimal Human Effort [80.7342153519654]
We develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort.
Our system can automatically and efficiently learn locomotion skills on a Minitaur robot with little human intervention.
arXiv Detail & Related papers (2020-02-20T03:36:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.