Lifelong Topological Visual Navigation
- URL: http://arxiv.org/abs/2110.08488v1
- Date: Sat, 16 Oct 2021 06:16:14 GMT
- Title: Lifelong Topological Visual Navigation
- Authors: Rey Reza Wiyatno, Anqi Xu, and Liam Paull
- Abstract summary: We propose a learning-based visual navigation method with graph update strategies that improve lifelong navigation performance over time.
We take inspiration from sampling-based planning algorithms to build image-based topological graphs, resulting in sparser graphs yet with higher navigation performance compared to baseline methods.
Unlike controllers that learn from fixed training environments, we show that our model can be finetuned using a relatively small dataset from the real-world environment where the robot is deployed.
- Score: 16.41858724205884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability for a robot to navigate with only the use of vision is appealing
due to its simplicity. Traditional vision-based navigation approaches required
a prior map-building step that was arduous and prone to failure, or could only
exactly follow previously executed trajectories. Newer learning-based visual
navigation techniques reduce the reliance on a map and instead directly learn
policies from image inputs for navigation. There are currently two prevalent
paradigms: end-to-end approaches, which forego the explicit map representation
entirely, and topological approaches, which still preserve some loose
connectivity of the space. However, while end-to-end methods tend to struggle
in long-distance navigation tasks, topological map-based solutions are prone to
failure due to spurious edges in the graph. In this work, we propose a
learning-based topological visual navigation method with graph update
strategies that improve lifelong navigation performance over time. We take
inspiration from sampling-based planning algorithms to build image-based
topological graphs, resulting in sparser graphs yet with higher navigation
performance compared to baseline methods. Also, unlike controllers that learn
from fixed training environments, we show that our model can be finetuned using
a relatively small dataset from the real-world environment where the robot is
deployed. We further assess performance of our system in real-world
deployments.
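The abstract's two key mechanisms, sampling-inspired sparse graph construction and lifelong graph updates, can be illustrated with a minimal Python sketch. Everything below (the `reachability` model interface, the thresholds, and the pruning rule) is an assumption made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a sparse image-based topological graph with a
# lifelong edge-pruning update. The learned reachability model, thresholds,
# and update rule are assumptions, not the paper's actual implementation.
import networkx as nx


def reachability(img_a, img_b) -> float:
    """Placeholder for a learned model scoring how likely the robot can
    traverse directly from view img_a to view img_b (0 = no, 1 = yes)."""
    raise NotImplementedError


def build_sparse_graph(images, edge_threshold=0.8, coverage_threshold=0.95):
    """PRM-inspired construction: keep an image as a node only if no existing
    node already 'covers' it; connect nodes whose predicted reachability is
    high, weighting edges so that shortest paths prefer confident edges."""
    g = nx.DiGraph()
    for idx, img in enumerate(images):
        covered = any(
            reachability(g.nodes[n]["img"], img) > coverage_threshold for n in g
        )
        if covered:
            continue  # redundant view: skipping it keeps the graph sparse
        g.add_node(idx, img=img)
        for n in list(g.nodes):
            if n == idx:
                continue
            score = reachability(g.nodes[n]["img"], img)
            if score > edge_threshold:
                g.add_edge(n, idx, weight=1.0 - score, failures=0)
    return g


def update_after_traversal(g, u, v, success, prune_after=2):
    """Lifelong update: reset the failure count on edges that worked and
    prune spurious edges that repeatedly fail during deployment."""
    if not g.has_edge(u, v):
        return
    if success:
        g.edges[u, v]["failures"] = 0
    else:
        g.edges[u, v]["failures"] += 1
        if g.edges[u, v]["failures"] >= prune_after:
            g.remove_edge(u, v)
```

In this sketch, a route to a goal image would be a shortest path over the graph (e.g., `nx.shortest_path(g, start, goal, weight="weight")`), with `update_after_traversal` applied after each attempted edge so that spurious edges are gradually removed across deployments.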
Related papers
- NavTopo: Leveraging Topological Maps For Autonomous Navigation Of a Mobile Robot [1.0550841723235613]
We propose a full navigation pipeline based on topological map and two-level path planning.
The pipeline localizes in the graph by matching neural network descriptors and 2D projections of the input point clouds.
We test our approach in a large indoor photo-realistic simulated environment and compare it to a metric map-based approach built on the popular metric mapping method RTAB-MAP.
arXiv Detail & Related papers (2024-10-15T10:54:49Z)
- PRET: Planning with Directed Fidelity Trajectory for Vision and Language Navigation [30.710806048991923]
Vision and language navigation is a task that requires an agent to navigate according to a natural language instruction.
Recent methods predict sub-goals on constructed topology map at each step to enable long-term action planning.
We propose an alternative method that facilitates navigation planning by considering the alignment between instructions and directed fidelity trajectories.
arXiv Detail & Related papers (2024-07-16T08:22:18Z)
- Interactive Semantic Map Representation for Skill-based Visual Object Navigation [43.71312386938849]
This paper introduces a new representation of a scene semantic map formed during the embodied agent's interaction with the indoor environment.
We have implemented this representation into a full-fledged navigation approach called SkillTron.
The proposed approach makes it possible to form both intermediate goals for robot exploration and the final goal for object navigation.
arXiv Detail & Related papers (2023-11-07T16:30:12Z)
- Object Goal Navigation with Recursive Implicit Maps [92.6347010295396]
We propose an implicit spatial map for object goal navigation.
Our method significantly outperforms the state of the art on the challenging MP3D dataset.
We deploy our model on a real robot and achieve encouraging object goal navigation results in real scenes.
arXiv Detail & Related papers (2023-08-10T14:21:33Z)
- Learning Navigational Visual Representations with Semantic Map Supervision [85.91625020847358]
We propose a navigational-specific visual representation learning method by contrasting the agent's egocentric views and semantic maps.
Ego$^2$-Map learning transfers the compact and rich information from a map, such as objects, structure and transition, to the agent's egocentric representations for navigation.
arXiv Detail & Related papers (2023-07-23T14:01:05Z)
- How To Not Train Your Dragon: Training-free Embodied Object Goal Navigation with Semantic Frontiers [94.46825166907831]
We present a training-free solution to tackle the object goal navigation problem in Embodied AI.
Our method builds a structured scene representation based on the classic visual simultaneous localization and mapping (V-SLAM) framework.
Our method propagates semantics on the scene graphs based on language priors and scene statistics to introduce semantic knowledge to the geometric frontiers.
arXiv Detail & Related papers (2023-05-26T13:38:33Z)
- Find a Way Forward: a Language-Guided Semantic Map Navigator [53.69229615952205]
This paper attacks the problem of language-guided navigation from a new perspective.
We use novel semantic navigation maps, which enable robots to carry out natural language instructions and move to a target position based on the map observations.
The proposed approach has noticeable performance gains, especially in long-distance navigation cases.
arXiv Detail & Related papers (2022-03-07T07:40:33Z)
- ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints [94.60414567852536]
Long-range navigation requires both planning and reasoning about local traversability.
We propose a learning-based approach that integrates learning and planning.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away.
arXiv Detail & Related papers (2022-02-23T02:14:23Z)
- Complex Terrain Navigation via Model Error Prediction [5.937673383513695]
We train with an on-policy approach, resulting in successful navigation policies using as little as 50 minutes of training data split across simulation and real world.
Our learning-based navigation system is a sample efficient short-term planner that we demonstrate on a Clearpath Husky navigating through a variety of terrain.
arXiv Detail & Related papers (2021-11-18T15:55:04Z)
- ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.