ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints
- URL: http://arxiv.org/abs/2202.11271v1
- Date: Wed, 23 Feb 2022 02:14:23 GMT
- Title: ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints
- Authors: Dhruv Shah, Sergey Levine
- Abstract summary: Long-range navigation requires both planning and reasoning about local traversability.
We propose a learning-based approach that integrates learning and planning.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away.
- Score: 94.60414567852536
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robotic navigation has been approached as a problem of 3D reconstruction and
planning, as well as an end-to-end learning problem. However, long-range
navigation requires both planning and reasoning about local traversability, as
well as being able to utilize information about global geography, in the form
of a roadmap, GPS, or other side information, which provides important
navigational hints but may be low-fidelity or unreliable. In this work, we
propose a learning-based approach that integrates learning and planning, and
can utilize side information such as schematic roadmaps, satellite maps and GPS
coordinates as a planning heuristic, without relying on them being accurate.
Our method, ViKiNG, incorporates a local traversability model, which looks at
the robot's current camera observation and a potential subgoal to infer how
easily that subgoal can be reached, as well as a heuristic model, which looks
at overhead maps and attempts to estimate the distance to the destination for
various subgoals. These models are used by a heuristic planner to decide the
best next subgoal in order to reach the final destination. Our method performs
no explicit geometric reconstruction, utilizing only a topological
representation of the environment. Despite having never seen trajectories
longer than 80 meters in its training dataset, ViKiNG can leverage its
image-based learned controller and goal-directed heuristic to navigate to goals
up to 3 kilometers away in previously unseen environments, and exhibit complex
behaviors such as probing potential paths and doubling back when they are found
to be non-viable. ViKiNG is also robust to unreliable maps and GPS, since the
low-level controller ultimately makes decisions based on egocentric image
observations, using maps only as planning heuristics. For videos of our
experiments, please check out https://sites.google.com/view/viking-release.
Related papers
- Pixel to Elevation: Learning to Predict Elevation Maps at Long Range using Images for Autonomous Offroad Navigation [10.898724668444125]
We present a learning-based approach capable of predicting terrain elevation maps at long-range using only onboard egocentric images in real-time.
We experimentally validate the applicability of our proposed approach for autonomous offroad robotic navigation in complex and unstructured terrain.
arXiv Detail & Related papers (2024-01-30T22:37:24Z) - Object Goal Navigation with Recursive Implicit Maps [92.6347010295396]
We propose an implicit spatial map for object goal navigation.
Our method significantly outperforms the state of the art on the challenging MP3D dataset.
We deploy our model on a real robot and achieve encouraging object goal navigation results in real scenes.
arXiv Detail & Related papers (2023-08-10T14:21:33Z) - Predicting Topological Maps for Visual Navigation in Unexplored Environments [28.30219170556201]
We propose a robotic learning system for autonomous exploration and navigation in unexplored environments.
The core of our method is a process for building, predicting, and using probabilistic layout graphs for assisting goal-based visual navigation.
We test our framework in Matterport3D and show more successful and more efficient navigation in unseen environments.
arXiv Detail & Related papers (2022-11-23T00:53:11Z) - Learning Forward Dynamics Model and Informed Trajectory Sampler for Safe Quadruped Navigation [1.2783783498844021]
A typical SOTA system is composed of four main modules -- mapper, global planner, local planner, and command-tracking controller.
We build a robust and safe local planner which is designed to generate a velocity plan to track a coarsely planned path from the global planner.
Using our framework, a quadruped robot can autonomously navigate in various complex environments without a collision and generate a smoother command plan compared to the baseline method.
arXiv Detail & Related papers (2022-04-19T04:01:44Z) - Lifelong Topological Visual Navigation [16.41858724205884]
We propose a learning-based visual navigation method with graph update strategies that improve lifelong navigation performance over time.
We take inspiration from sampling-based planning algorithms to build image-based topological graphs, resulting in sparser graphs yet with higher navigation performance compared to baseline methods.
Unlike controllers that learn from fixed training environments, we show that our model can be finetuned using a relatively small dataset from the real-world environment where the robot is deployed.
arXiv Detail & Related papers (2021-10-16T06:16:14Z) - Rapid Exploration for Open-World Navigation with Latent Goal Models [78.45339342966196]
We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.
At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory of images.
We use an information bottleneck to regularize the learned policy, giving us (i) a compact visual representation of goals, (ii) improved generalization capabilities, and (iii) a mechanism for sampling feasible goals for exploration.
arXiv Detail & Related papers (2021-04-12T23:14:41Z) - SOON: Scenario Oriented Object Navigation with Graph-based Exploration [102.74649829684617]
The ability to navigate like a human towards a language-guided target from anywhere in a 3D embodied environment is one of the 'holy grail' goals of intelligent robots.
Most visual navigation benchmarks focus on navigating toward a target from a fixed starting point, guided by an elaborate set of step-by-step instructions.
This setup deviates from real-world problems, in which a human only describes what the object and its surroundings look like and asks the robot to start navigating from anywhere.
arXiv Detail & Related papers (2021-03-31T15:01:04Z) - ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z) - BADGR: An Autonomous Self-Supervised Learning-Based Navigation System [158.6392333480079]
BADGR is an end-to-end learning-based mobile robot navigation system.
It can be trained with self-supervised off-policy data gathered in real-world environments.
BADGR can navigate in real-world urban and off-road environments with geometrically distracting obstacles.
arXiv Detail & Related papers (2020-02-13T18:40:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.