Coupling Vision and Proprioception for Navigation of Legged Robots
- URL: http://arxiv.org/abs/2112.02094v1
- Date: Fri, 3 Dec 2021 18:59:59 GMT
- Title: Coupling Vision and Proprioception for Navigation of Legged Robots
- Authors: Zipeng Fu, Ashish Kumar, Ananye Agarwal, Haozhi Qi, Jitendra Malik,
Deepak Pathak
- Abstract summary: We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
- Score: 65.59559699815512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We exploit the complementary strengths of vision and proprioception to
achieve point goal navigation in a legged robot. Legged systems are capable of
traversing more complex terrain than wheeled robots, but to fully exploit this
capability, we need the high-level path planner in the navigation system to be
aware of the walking capabilities of the low-level locomotion policy on varying
terrains. We achieve this by using proprioceptive feedback to estimate the safe
operating limits of the walking policy, and to sense unexpected obstacles and
terrain properties like smoothness or softness of the ground that may be missed
by vision. The navigation system uses onboard cameras to generate an occupancy
map and a corresponding cost map to reach the goal. The FMM (Fast Marching
Method) planner then generates a target path. The velocity command generator
takes this path as input, together with additional constraints from the safety
advisor on unexpected obstacles and terrain-determined speed limits, and
produces the desired velocity for the locomotion policy. We show superior performance
compared to wheeled robot (LoCoBot) baselines, and other baselines which have
disjoint high-level planning and low-level control. We also show the real-world
deployment of our system on a quadruped robot with onboard sensors and compute.
Videos at https://navigation-locomotion.github.io/camera-ready
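
To make the pipeline concrete, the following is a minimal, self-contained sketch of the planning loop described in the abstract. It is not the authors' implementation: the function names (cost_map, arrival_time, velocity_command), the obstacle-inflation costs, and the safety-advisor inputs (speed_limit, collision_flag) are illustrative assumptions, and a Dijkstra sweep over the grid stands in for the Fast Marching Method (its first-order discrete analogue). In the paper, the speed limit and collision flag would come from proprioceptive estimates of the walking policy's safe operating limits; here they are plain scalars.

```python
# Hypothetical sketch: occupancy map -> cost map -> arrival-time field
# -> target path -> safety-constrained velocity command.
import heapq
import numpy as np

def cost_map(occupancy: np.ndarray, inflate: int = 2) -> np.ndarray:
    """Turn a binary occupancy grid into a traversal-cost grid: cells near
    obstacles are penalized, occupied cells are untraversable."""
    cost = np.ones(occupancy.shape, dtype=float)
    for r, c in np.argwhere(occupancy > 0):
        r0, r1 = max(r - inflate, 0), min(r + inflate + 1, occupancy.shape[0])
        c0, c1 = max(c - inflate, 0), min(c + inflate + 1, occupancy.shape[1])
        cost[r0:r1, c0:c1] = np.maximum(cost[r0:r1, c0:c1], 5.0)
    cost[occupancy > 0] = np.inf
    return cost

def arrival_time(cost: np.ndarray, goal: tuple) -> np.ndarray:
    """Dijkstra stand-in for the FMM planner: minimum arrival time from
    `goal` to every free cell under the per-cell traversal cost."""
    T = np.full(cost.shape, np.inf)
    T[goal] = 0.0
    pq = [(0.0, goal)]
    while pq:
        t, (r, c) = heapq.heappop(pq)
        if t > T[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
                nt = t + cost[nr, nc]
                if nt < T[nr, nc]:
                    T[nr, nc] = nt
                    heapq.heappush(pq, (nt, (nr, nc)))
    return T

def target_path(T: np.ndarray, start: tuple) -> list:
    """Steepest descent on the arrival-time field yields the target path."""
    path, cur = [start], start
    while T[cur] > 0.0:
        r, c = cur
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < T.shape[0] and 0 <= c + dc < T.shape[1]]
        nxt = min(nbrs, key=lambda p: T[p])
        if T[nxt] >= T[cur]:
            break  # goal unreachable from here
        path.append(nxt)
        cur = nxt
    return path

def velocity_command(heading_err: float, dist: float, speed_limit: float,
                     collision_flag: bool, v_max: float = 0.6,
                     w_max: float = 1.2) -> tuple:
    """Velocity command generator: track the path, but clamp linear speed to
    the safety advisor's terrain-dependent limit and stop when proprioception
    flags an unexpected obstacle."""
    if collision_flag:
        return 0.0, 0.0
    v = min(v_max, speed_limit, 0.5 * dist)  # slow down near the goal
    v *= max(0.0, np.cos(heading_err))       # don't command sideways motion
    w = float(np.clip(2.0 * heading_err, -w_max, w_max))
    return v, w

# Toy example on a 20x20 grid (purely illustrative):
occ = np.zeros((20, 20))
occ[8:12, 5:15] = 1                       # a rectangular obstacle
T = arrival_time(cost_map(occ), goal=(18, 18))
path = target_path(T, start=(1, 1))
v, w = velocity_command(heading_err=0.3, dist=1.5,
                        speed_limit=0.4, collision_flag=False)
```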
Related papers
- Hyp2Nav: Hyperbolic Planning and Curiosity for Crowd Navigation [58.574464340559466]
We advocate hyperbolic learning for crowd navigation and introduce Hyp2Nav.
Hyp2Nav leverages the intrinsic properties of hyperbolic geometry to better encode the hierarchical nature of decision-making processes in navigation tasks.
We propose a hyperbolic policy model and a hyperbolic curiosity module that together yield effective social navigation, with the best success rates and returns across multiple simulation settings.
arXiv Detail & Related papers (2024-07-18T14:40:33Z) - Learning Robust Autonomous Navigation and Locomotion for Wheeled-Legged Robots [50.02055068660255]
Navigating urban environments poses unique challenges for robots, necessitating innovative solutions for locomotion and navigation.
This work introduces a fully integrated system comprising adaptive locomotion control, mobility-aware local navigation planning, and large-scale path planning within the city.
Using model-free reinforcement learning (RL) techniques and privileged learning, we develop a versatile locomotion controller.
Our controllers are integrated into a large-scale urban navigation system and validated by autonomous, kilometer-scale navigation missions conducted in Zurich, Switzerland, and Seville, Spain.
arXiv Detail & Related papers (2024-05-03T00:29:20Z) - Legged Locomotion in Challenging Terrains using Egocentric Vision [70.37554680771322]
We present the first end-to-end locomotion system capable of traversing stairs, curbs, stepping stones, and gaps.
We show this result on a medium-sized quadruped robot using a single front-facing depth camera.
arXiv Detail & Related papers (2022-11-14T18:59:58Z) - ViNL: Visual Navigation and Locomotion Over Obstacles [36.46953494419389]
We present Visual Navigation and Locomotion over obstacles (ViNL)
It enables a quadrupedal robot to navigate unseen apartments while stepping over small obstacles that lie in its path.
ViNL consists of: (1) a visual navigation policy that outputs linear and angular velocity commands to guide the robot to a goal coordinate in unfamiliar indoor environments; and (2) a visual locomotion policy that controls the robot's joints to avoid stepping on obstacles while following the commanded velocities.
arXiv Detail & Related papers (2022-10-26T15:38:28Z) - Advanced Skills by Learning Locomotion and Local Navigation End-to-End [10.872193480485596]
In this work, we propose to solve the complete problem by training an end-to-end policy with deep reinforcement learning.
We demonstrate the successful deployment of policies on a real quadrupedal robot.
arXiv Detail & Related papers (2022-09-26T16:35:00Z) - Learning Semantics-Aware Locomotion Skills from Human Demonstration [35.996425893483796]
We present a framework that learns semantics-aware locomotion skills from perception for quadrupedal robots.
Our framework learns to adjust the speed and gait of the robot based on perceived terrain semantics, and enables the robot to walk over 6km without failure.
arXiv Detail & Related papers (2022-06-27T21:08:03Z) - ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints [94.60414567852536]
Long-range navigation requires both planning and reasoning about local traversability.
We propose a learning-based approach that integrates learning and planning.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away.
arXiv Detail & Related papers (2022-02-23T02:14:23Z) - Augmented reality navigation system for visual prosthesis [67.09251544230744]
We propose an augmented reality navigation system for visual prosthesis that incorporates reactive navigation and path-planning software.
It consists of four steps: locating the subject on a map, planning the subject's trajectory, showing it to the subject, and re-planning to avoid obstacles.
Results show that our augmented reality navigation system improves navigation performance by reducing the time and distance needed to reach goals, and significantly reduces the number of obstacle collisions.
arXiv Detail & Related papers (2021-09-30T09:41:40Z) - Autonomous Navigation of Underactuated Bipedal Robots in Height-Constrained Environments [20.246040671823554]
This paper presents an end-to-end autonomous navigation framework for bipedal robots.
A vertically-actuated Spring-Loaded Inverted Pendulum (vSLIP) model is introduced to capture the robot's coupled dynamics of planar walking and vertical walking height.
A variable walking height controller is leveraged to enable the bipedal robot to maintain stable periodic walking gaits while following the planned trajectory.
arXiv Detail & Related papers (2021-09-13T05:36:14Z)