Learning Semantics-Aware Locomotion Skills from Human Demonstration
- URL: http://arxiv.org/abs/2206.13631v1
- Date: Mon, 27 Jun 2022 21:08:03 GMT
- Title: Learning Semantics-Aware Locomotion Skills from Human Demonstration
- Authors: Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots
- Abstract summary: We present a framework that learns semantics-aware locomotion skills from perception for quadrupedal robots.
Our framework learns to adjust the speed and gait of the robot based on perceived terrain semantics, and enables the robot to walk over 6 km without failure.
- Score: 35.996425893483796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The semantics of the environment, such as the terrain type and property,
reveals important information for legged robots to adjust their behaviors. In
this work, we present a framework that learns semantics-aware locomotion skills
from perception for quadrupedal robots, such that the robot can traverse
complex off-road terrains with appropriate speeds and gaits using
perception information. Due to the lack of high-fidelity outdoor simulation,
our framework needs to be trained directly in the real world, which brings
unique challenges in data efficiency and safety. To ensure sample efficiency,
we pre-train the perception model with an off-road driving dataset. To avoid
the risks of real-world policy exploration, we leverage human demonstration to
train a speed policy that selects a desired forward speed from camera images.
For maximum traversability, we pair the speed policy with a gait selector,
which selects a robust locomotion gait for each forward speed. Using only 40
minutes of human demonstration data, our framework learns to adjust the speed
and gait of the robot based on perceived terrain semantics, and enables the
robot to walk over 6 km without failure at close-to-optimal speed.
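To make the pipeline concrete, here is a minimal sketch of how the components described above could fit together at deployment time: a perception backbone (pre-trained elsewhere, e.g. on an off-road driving dataset) feeds a speed policy that regresses a desired forward speed from the camera image, and a gait selector maps that speed to a locomotion gait. All module names, network sizes, gait labels, and speed thresholds are illustrative assumptions, not the authors' implementation.
```python
# Hypothetical sketch of the speed-policy + gait-selector pipeline described
# in the abstract. Module names, gait table, and thresholds are illustrative
# assumptions, not the authors' code.
import torch
import torch.nn as nn


class SpeedPolicy(nn.Module):
    """Maps a camera image to a desired forward speed (regression head on a
    perception backbone that would be pre-trained on off-road driving data)."""

    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(              # stand-in perception encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feature_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feature_dim, 1)        # desired forward speed (m/s)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(image)).squeeze(-1)


def select_gait(speed_mps: float) -> dict:
    """Toy gait selector: pick a robust gait for the commanded speed.
    Thresholds and gait parameters are made up for illustration."""
    if speed_mps < 0.5:
        return {"gait": "walk", "step_freq_hz": 1.5}
    if speed_mps < 1.2:
        return {"gait": "trot", "step_freq_hz": 2.5}
    return {"gait": "fly_trot", "step_freq_hz": 3.5}


if __name__ == "__main__":
    policy = SpeedPolicy()
    frame = torch.rand(1, 3, 96, 96)                 # placeholder camera frame
    speed = float(policy(frame))
    print(speed, select_gait(speed))
```
In the paper, the speed policy is learned from roughly 40 minutes of human demonstration and paired with a gait selector for maximum traversability; the sketch above shows only the inference path from image to speed and gait command.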
Related papers
- SoloParkour: Constrained Reinforcement Learning for Visual Locomotion from Privileged Experience [19.817578964184147]
Parkour poses a significant challenge for legged robots, requiring navigation through complex environments with agility and precision based on limited sensory inputs.
We introduce a novel method for training end-to-end visual policies, from depth pixels to robot control commands, to achieve agile and safe quadruped locomotion.
We demonstrate the effectiveness of our method on a real Solo-12 robot, showcasing its capability to perform a variety of parkour skills such as walking, climbing, leaping, and crawling.
arXiv Detail & Related papers (2024-09-20T17:39:20Z) - Hybrid Internal Model: Learning Agile Legged Locomotion with Simulated Robot Response [25.52492911765911]
We introduce Hybrid Internal Model to estimate external states according to the response of the robot.
The response, which we refer to as the hybrid internal embedding, contains the robot's explicit velocity and implicit stability representation.
A wealth of real-world experiments demonstrates its agility, even in high-difficulty tasks and cases that never occurred during training.
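As a rough, hedged sketch of estimating external state from the robot's own response: a short history of proprioceptive observations is encoded into an explicit base-velocity estimate plus an implicit latent embedding that would condition the locomotion policy. Dimensions, layer sizes, and the 3-D velocity head are assumptions for illustration, not the paper's architecture.
```python
# Hypothetical internal-model style estimator: proprioceptive history ->
# explicit velocity estimate + implicit latent. Sizes are illustrative only.
import torch
import torch.nn as nn


class HybridInternalEncoder(nn.Module):
    def __init__(self, obs_dim: int = 45, history: int = 6, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim * history, 256), nn.ELU(),
            nn.Linear(256, 3 + latent_dim),       # 3-d base velocity + latent
        )

    def forward(self, obs_history: torch.Tensor):
        out = self.encoder(obs_history.flatten(1))
        return out[:, :3], out[:, 3:]             # explicit velocity, implicit embedding


encoder = HybridInternalEncoder()
vel, embedding = encoder(torch.rand(1, 6, 45))    # one batch of observation history
```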
arXiv Detail & Related papers (2023-12-18T18:59:06Z) - Surfer: Progressive Reasoning with World Models for Robotic Manipulation [51.26109827779267]
We introduce a novel and simple robot manipulation framework, called Surfer.
Surfer is built on a world model: it treats robot manipulation as a state transfer of the visual scene and decouples it into two parts, action and scene.
arXiv Detail & Related papers (2023-06-20T07:06:04Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
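A minimal sketch of the "sequence of sensorimotor tokens" idea, assuming per-timestep visual, proprioceptive, and action features that are each projected into a shared token space and processed by a standard Transformer encoder; a pre-training objective (e.g. reconstructing masked tokens) would sit on top. Feature sizes and layer counts are illustrative, not RPT's.
```python
# Hypothetical Transformer over sensorimotor tokens: visual, proprioceptive,
# and action features become one interleaved token sequence. Sizes are assumed.
import torch
import torch.nn as nn

d_model, T = 64, 8                                   # token width, timesteps
proj_vis = nn.Linear(512, d_model)                   # visual feature -> token
proj_prop = nn.Linear(24, d_model)                   # proprioception -> token
proj_act = nn.Linear(7, d_model)                     # action -> token

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2,
)

vis = torch.rand(1, T, 512)
prop = torch.rand(1, T, 24)
act = torch.rand(1, T, 7)

# Interleave the three token types per timestep -> sequence of length 3*T.
tokens = torch.stack([proj_vis(vis), proj_prop(prop), proj_act(act)], dim=2)
tokens = tokens.reshape(1, 3 * T, d_model)
out = encoder(tokens)                                # contextualized sensorimotor tokens
```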
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Barkour: Benchmarking Animal-level Agility with Quadruped Robots [70.97471756305463]
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism.
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z) - Fast Traversability Estimation for Wild Visual Navigation [17.015268056925745]
We propose Wild Visual Navigation (WVN), an online self-supervised learning system for traversability estimation.
The system is able to continuously adapt from a short human demonstration in the field.
We demonstrate the advantages of our approach with experiments and ablation studies in challenging environments in forests, parks, and grasslands.
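A toy sketch of online self-supervised traversability adaptation, assuming that terrain the robot actually walks over during the demonstration provides positive labels for a small classifier over visual features; the feature dimension and the labeling rule are assumptions, not the WVN implementation.
```python
# Toy online traversability adaptation: patches the robot traversed during a
# short demonstration are positive labels. Labeling rule and sizes are assumed.
import torch
import torch.nn as nn

traversability = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
optim = torch.optim.Adam(traversability.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()


def online_update(features: torch.Tensor, walked_over: torch.Tensor) -> float:
    """One adaptation step on image-patch features; walked_over is 1 if the
    robot traversed that patch during the demonstration, else 0."""
    loss = bce(traversability(features).squeeze(-1), walked_over)
    optim.zero_grad()
    loss.backward()
    optim.step()
    return float(loss)


# e.g. called continuously as new footholds are recorded in the field
loss = online_update(torch.rand(16, 64), torch.randint(0, 2, (16,)).float())
```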
arXiv Detail & Related papers (2023-05-15T10:19:30Z) - Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
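For intuition, here is a classical 2.5-D elevation-map accumulation step (not the paper's learned reconstruction model): depth points are transformed into the world frame with the robot pose and fused into a height grid around the robot. Grid size, resolution, and the max-height fusion rule are illustrative assumptions.
```python
# Minimal 2.5-D elevation-map sketch; a learned model as in the paper would
# instead fill in missing and noisy regions. Parameters are illustrative.
import numpy as np

RES, SIZE = 0.05, 64                           # 5 cm cells, 3.2 m x 3.2 m map


def update_elevation(grid, points_body, pose_xyz, yaw):
    """Fuse one scan of 3-D points (robot frame) into the elevation grid."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pts = points_body @ R.T + pose_xyz              # robot frame -> world frame
    ij = np.floor((pts[:, :2] - pose_xyz[:2]) / RES).astype(int) + SIZE // 2
    ok = (ij >= 0).all(1) & (ij < SIZE).all(1)      # keep points inside the map
    for (i, j), z in zip(ij[ok], pts[ok, 2]):
        grid[i, j] = max(grid[i, j], z)             # keep highest observed point
    return grid


grid = np.full((SIZE, SIZE), -np.inf)               # unobserved cells
grid = update_elevation(grid, np.random.rand(500, 3), np.zeros(3), 0.0)
```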
arXiv Detail & Related papers (2022-06-16T10:45:17Z) - Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z) - Quadruped Locomotion on Non-Rigid Terrain using Reinforcement Learning [10.729374293332281]
We present a novel reinforcement learning framework for learning locomotion on non-rigid dynamic terrains.
A trained robot with a 55 cm base length can walk on terrain that can sink by up to 5 cm.
We show the effectiveness of our method by training the robot with various terrain conditions.
arXiv Detail & Related papers (2021-07-07T00:34:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.