VoroNav: Voronoi-based Zero-shot Object Navigation with Large Language
Model
- URL: http://arxiv.org/abs/2401.02695v2
- Date: Tue, 6 Feb 2024 05:15:20 GMT
- Title: VoroNav: Voronoi-based Zero-shot Object Navigation with Large Language
Model
- Authors: Pengying Wu, Yao Mu, Bingxian Wu, Yi Hou, Ji Ma, Shanghang Zhang,
Chang Liu
- Abstract summary: VoroNav is a semantic exploration framework that extracts exploratory paths and planning nodes from a semantic map constructed in real time.
By harnessing topological and semantic information, VoroNav designs text-based descriptions of paths and images that are readily interpretable by a large language model.
- Score: 28.79971953667143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the realm of household robotics, the Zero-Shot Object Navigation (ZSON)
task empowers agents to adeptly traverse unfamiliar environments and locate
objects from novel categories without prior explicit training. This paper
introduces VoroNav, a novel semantic exploration framework that proposes the
Reduced Voronoi Graph to extract exploratory paths and planning nodes from a
semantic map constructed in real time. By harnessing topological and semantic
information, VoroNav designs text-based descriptions of paths and images that
are readily interpretable by a large language model (LLM). In particular, our
approach presents a synergy of path and farsight descriptions to represent the
environmental context, enabling the LLM to apply commonsense reasoning to ascertain
waypoints for navigation. Extensive evaluation on HM3D and HSSD validates that
VoroNav surpasses existing baselines in both success rate and exploration
efficiency (absolute improvement: +2.8% Success and +3.7% SPL on HM3D, +2.6%
Success and +3.8% SPL on HSSD). Newly introduced metrics that evaluate
obstacle-avoidance proficiency and perceptual efficiency further corroborate
the improvements achieved by our method in ZSON planning. Project page:
https://voro-nav.github.io
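The abstract describes two technical steps: extracting a Reduced Voronoi Graph (exploratory paths and planning nodes) from a real-time semantic map, and converting candidate paths into text descriptions that an LLM ranks to choose the next waypoint. The sketch below is a minimal, hypothetical illustration of both ideas, not the authors' implementation: it approximates the Voronoi graph of a boolean free-space grid by skeletonization (a standard approximation of the generalized Voronoi diagram of free space) and builds a toy waypoint-selection prompt; the node-selection rule, prompt wording, and path descriptions are assumptions made for illustration.

```python
# Minimal sketch (not the VoroNav implementation) of two ideas from the abstract:
# (1) a Voronoi-like topological graph of free space, approximated by skeletonization,
# (2) a text prompt asking an LLM to pick the most promising exploratory path.
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize


def reduced_voronoi_nodes(free_space: np.ndarray) -> list[tuple[int, int]]:
    """Approximate planning nodes on a boolean free-space grid.

    The skeleton of free space approximates the generalized Voronoi diagram;
    skeleton pixels with one neighbor (dead ends) or three or more neighbors
    (junctions) are kept as planning nodes, mirroring the idea of reducing
    the graph to a small set of informative vertices.
    """
    skeleton = skeletonize(free_space)
    # Count 8-connected skeleton neighbors of every skeleton pixel.
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    degree = convolve(skeleton.astype(int), kernel, mode="constant")
    node_mask = skeleton & ((degree == 1) | (degree >= 3))
    return [tuple(p) for p in np.argwhere(node_mask)]


def waypoint_prompt(goal: str, path_descriptions: list[str]) -> str:
    """Compose a toy prompt asking an LLM to rank candidate paths."""
    numbered = "\n".join(f"{i}. {d}" for i, d in enumerate(path_descriptions))
    return (
        f"You are guiding a robot that must find a {goal}.\n"
        f"Candidate exploratory paths from the current position:\n{numbered}\n"
        f"Reply with the single number of the path most likely to lead to a "
        f"{goal}, using commonsense knowledge of indoor layouts."
    )


if __name__ == "__main__":
    # Toy map (True = free, False = occupied): a corridor with one branch.
    grid = np.zeros((9, 9), dtype=bool)
    grid[4, 1:8] = True   # horizontal corridor
    grid[1:5, 4] = True   # branch going up
    print("planning nodes:", reduced_voronoi_nodes(grid))
    print(waypoint_prompt("cat-shaped mug",
                          ["a path heading toward a kitchen counter",
                           "a path leading into a dark hallway"]))
```

In VoroNav the path and farsight descriptions would be generated from the semantic map and egocentric images rather than supplied by hand, and the LLM's choice would be grounded back onto the graph as the next navigation waypoint. For the reported metrics, SPL (Success weighted by Path Length) is the standard efficiency measure that averages s_i * l_i / max(p_i, l_i) over episodes, where l_i is the shortest-path length to the goal and p_i the length of the agent's actual path.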
Related papers
- Affordances-Oriented Planning using Foundation Models for Continuous Vision-Language Navigation [62.76017573929462]
LLM-based agents have demonstrated impressive zero-shot performance in the vision-language navigation (VLN) task.
We propose AO-Planner, a novel affordances-oriented planning framework for continuous VLN task.
Our method establishes an effective connection between LLM and 3D world to circumvent the difficulty of directly predicting world coordinates.
arXiv Detail & Related papers (2024-07-08T12:52:46Z)
- GaussNav: Gaussian Splatting for Visual Navigation [92.13664084464514]
Instance ImageGoal Navigation (IIN) requires an agent to locate a specific object depicted in a goal image within an unexplored environment.
Our framework constructs a novel map representation based on 3D Gaussian Splatting (3DGS)
Our framework demonstrates a significant leap in performance, evidenced by an increase in Success weighted by Path Length (SPL) from 0.252 to 0.578 on the challenging Habitat-Matterport 3D (HM3D) dataset.
arXiv Detail & Related papers (2024-03-18T09:56:48Z)
- SayNav: Grounding Large Language Models for Dynamic Planning to Navigation in New Environments [14.179677726976056]
SayNav is a new approach that leverages human knowledge from Large Language Models (LLMs) for efficient generalization to complex navigation tasks.
SayNav achieves state-of-the-art results and even outperforms an oracle based baseline with strong ground-truth assumptions by more than 8% in terms of success rate.
arXiv Detail & Related papers (2023-09-08T02:24:37Z)
- Object Goal Navigation with Recursive Implicit Maps [92.6347010295396]
We propose an implicit spatial map for object goal navigation.
Our method significantly outperforms the state of the art on the challenging MP3D dataset.
We deploy our model on a real robot and achieve encouraging object goal navigation results in real scenes.
arXiv Detail & Related papers (2023-08-10T14:21:33Z)
- Can an Embodied Agent Find Your "Cat-shaped Mug"? LLM-Guided Exploration for Zero-Shot Object Navigation [58.3480730643517]
We present LGX, a novel algorithm for Language-Driven Zero-Shot Object Goal Navigation (L-ZSON)
Our approach makes use of Large Language Models (LLMs) for this task.
We achieve state-of-the-art zero-shot object navigation results on RoboTHOR with a success rate (SR) improvement of over 27% over the current baseline.
arXiv Detail & Related papers (2023-03-06T20:19:19Z)
- ESC: Exploration with Soft Commonsense Constraints for Zero-shot Object Navigation [75.13546386761153]
We present a novel zero-shot object navigation method, Exploration with Soft Commonsense constraints (ESC)
ESC transfers commonsense knowledge in pre-trained models to open-world object navigation without any navigation experience.
Experiments on MP3D, HM3D, and RoboTHOR benchmarks show that our ESC method improves significantly over baselines.
arXiv Detail & Related papers (2023-01-30T18:37:32Z)
- PEANUT: Predicting and Navigating to Unseen Targets [18.87376347895365]
Efficient ObjectGoal navigation (ObjectNav) in novel environments requires an understanding of the spatial and semantic regularities in environment layouts.
We present a method for learning these regularities by predicting the locations of unobserved objects from incomplete semantic maps.
Our prediction model is lightweight and can be trained in a supervised manner using a relatively small amount of passively collected data.
arXiv Detail & Related papers (2022-12-05T18:58:58Z)
- SOON: Scenario Oriented Object Navigation with Graph-based Exploration [102.74649829684617]
The ability to navigate like a human towards a language-guided target from anywhere in a 3D embodied environment is one of the 'holy grail' goals of intelligent robots.
Most visual navigation benchmarks focus on navigating toward a target from a fixed starting point, guided by an elaborate set of step-by-step instructions.
This setting deviates from real-world problems, in which a human only describes what the object and its surroundings look like and asks the robot to start navigating from anywhere.
arXiv Detail & Related papers (2021-03-31T15:01:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.