PlaceNav: Topological Navigation through Place Recognition
- URL: http://arxiv.org/abs/2309.17260v4
- Date: Thu, 29 Feb 2024 12:09:33 GMT
- Title: PlaceNav: Topological Navigation through Place Recognition
- Authors: Lauri Suomela, Jussi Kalliola, Harry Edelman, Joni-Kristian Kämäräinen
- Abstract summary: We present PlaceNav, which subdivides the robot-independent part of topological navigation into navigation-specific and generic computer vision components.
We utilize visual place recognition for the subgoal selection of the topological navigation pipeline.
Our experimental results verify the design: the new method obtains a 76% higher success rate in indoor navigation tasks and a 23% higher success rate in outdoor tasks, with higher computational efficiency.
- Score: 1.9382079036818822
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent results suggest that splitting topological navigation into
robot-independent and robot-specific components improves navigation performance
by enabling the robot-independent part to be trained with data collected by
robots of different types. However, the performance of these navigation methods is
still limited by the scarcity of suitable training data, and they suffer from
poor computational scaling.
In this work, we present PlaceNav, which subdivides the robot-independent part
into navigation-specific and generic computer vision components. We utilize
visual place recognition for the subgoal selection of the topological
navigation pipeline. This makes subgoal selection more efficient and enables
leveraging large-scale datasets from non-robotics sources, increasing training
data availability. Bayesian filtering, enabled by place recognition, further
improves navigation performance by increasing the temporal consistency of
subgoals. Our experimental results verify the design: the new method obtains a
76% higher success rate in indoor navigation tasks and a 23% higher success
rate in outdoor tasks, with higher computational efficiency.
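To make the pipeline concrete, below is a minimal Python sketch, under stated assumptions, of how place-recognition descriptors and a discrete Bayesian filter over topological map nodes could drive subgoal selection. The names `extract_descriptor`, `SubgoalSelector`, `motion_spread`, and `temperature` are hypothetical, not the authors' implementation.

```python
# Minimal sketch (an assumption, not the authors' code) of place-recognition-
# based subgoal selection with a discrete Bayesian filter over the nodes of
# a topological map.
import numpy as np

def extract_descriptor(image: np.ndarray) -> np.ndarray:
    """Stand-in for a generic place-recognition model (e.g. a network
    trained on large-scale non-robotics imagery). Returns an L2-normalized
    global descriptor; assumes all images have the same size."""
    vec = image.astype(np.float32).ravel()[:128]
    return vec / (np.linalg.norm(vec) + 1e-8)

class SubgoalSelector:
    def __init__(self, node_images, motion_spread=2.0, temperature=0.1):
        # One descriptor per topological map node, computed offline.
        self.descs = np.stack([extract_descriptor(im) for im in node_images])
        self.n = len(node_images)
        self.belief = np.full(self.n, 1.0 / self.n)  # uniform prior
        self.motion_spread = motion_spread
        self.temperature = temperature

    def _predict(self):
        # Motion model: the robot advances along the route, so shift the
        # belief toward the next node with a Gaussian transition kernel.
        idx = np.arange(self.n)
        kernel = np.exp(-0.5 * ((idx[:, None] - idx[None, :] - 1)
                                / self.motion_spread) ** 2)
        kernel /= kernel.sum(axis=0, keepdims=True)  # columns sum to 1
        self.belief = kernel @ self.belief

    def update(self, obs_image: np.ndarray) -> int:
        """One filter step: predict, then reweight by image similarity."""
        self._predict()
        q = extract_descriptor(obs_image)
        sims = self.descs @ q  # cosine similarity (unit-norm descriptors)
        self.belief *= np.exp(sims / self.temperature)
        self.belief /= self.belief.sum()
        return int(np.argmax(self.belief))  # index of the current subgoal
```

The prediction step is what yields the temporal consistency mentioned in the abstract: because probability mass stays concentrated near the previously selected node, a single perceptually aliased match elsewhere on the route cannot abruptly switch the subgoal.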
Related papers
- Hierarchical end-to-end autonomous navigation through few-shot waypoint detection [0.0]
Human navigation is facilitated through the association of actions with landmarks.
Current autonomous navigation schemes rely on accurate positioning devices and algorithms as well as extensive streams of sensory data collected from the environment.
We propose a hierarchical end-to-end meta-learning scheme that enables a mobile robot to navigate in a previously unknown environment.
arXiv Detail & Related papers (2024-09-23T00:03:39Z)
- Two-Stage Depth Enhanced Learning with Obstacle Map For Object Navigation [11.667940255053582]
This paper uses the RGB and depth information of the training scene to pretrain the feature extractor, which improves navigation efficiency.
We evaluated our method on AI2-Thor and RoboTHOR and demonstrated that it significantly outperforms state-of-the-art (SOTA) methods on success rate and navigation efficiency.
arXiv Detail & Related papers (2024-06-20T08:35:10Z)
- Learning Navigational Visual Representations with Semantic Map Supervision [85.91625020847358]
We propose a navigational-specific visual representation learning method by contrasting the agent's egocentric views and semantic maps.
Ego²-Map learning transfers the compact and rich information from a map, such as objects, structure and transition, to the agent's egocentric representations for navigation.
arXiv Detail & Related papers (2023-07-23T14:01:05Z)
- Real-time Vision-based Navigation for a Robot in an Indoor Environment [0.0]
The system utilizes vision-based techniques and advanced path-planning algorithms to enable the robot to navigate toward the destination while avoiding obstacles.
The findings contribute to the advancement of indoor robot navigation, showcasing the potential of vision-based techniques for real-time, autonomous navigation.
arXiv Detail & Related papers (2023-07-02T21:01:56Z)
- Enhancing Navigation Benchmarking and Perception Data Generation for Row-based Crops in Simulation [0.3518016233072556]
This paper presents a synthetic dataset to train semantic segmentation networks and a collection of virtual scenarios for a fast evaluation of navigation algorithms.
An automatic parametric approach is developed to explore different field geometries and features.
The simulation framework and the dataset have been evaluated by training a deep segmentation network on different crops and benchmarking the resulting navigation.
arXiv Detail & Related papers (2023-06-27T14:46:09Z)
- Offline Reinforcement Learning for Visual Navigation [66.88830049694457]
ReViND is the first offline RL system for robotic navigation that can leverage previously collected data to optimize user-specified reward functions in the real world.
We show that ReViND can navigate to distant goals using only offline training from this dataset, and exhibit behaviors that qualitatively differ based on the user-specified reward function.
arXiv Detail & Related papers (2022-12-16T02:23:50Z)
- GNM: A General Navigation Model to Drive Any Robot [67.40225397212717]
A general goal-conditioned model for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots.
We analyze the necessary design decisions for effective data sharing across robots.
We deploy the trained GNM on a range of new robots, including an underactuated quadrotor.
arXiv Detail & Related papers (2022-10-07T07:26:41Z)
- Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps, our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
- APPLD: Adaptive Planner Parameter Learning from Demonstration [48.63930323392909]
We introduce APPLD, Adaptive Planner Parameter Learning from Demonstration, which allows existing navigation systems to be successfully applied to new complex environments.
APPLD is verified on two robots running different navigation systems in different environments.
Experimental results show that APPLD can outperform navigation systems with the default and expert-tuned parameters, and even the human demonstrator themselves.
arXiv Detail & Related papers (2020-03-31T21:15:16Z)
- Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data for accurate and reliable pedestrian navigation is a key component of emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.