Right Place, Right Time! Generalizing ObjectNav to Dynamic Environments with Portable Targets
- URL: http://arxiv.org/abs/2403.09905v2
- Date: Sun, 01 Dec 2024 21:42:37 GMT
- Title: Right Place, Right Time! Generalizing ObjectNav to Dynamic Environments with Portable Targets
- Authors: Vishnu Sashank Dorbala, Bhrij Patel, Amrit Singh Bedi, Dinesh Manocha
- Abstract summary: We present a novel formulation to generalize ObjectNav to dynamic environments with non-stationary objects.
We first address several challenging issues with dynamizing existing topological scene graphs.
We then present a benchmark for P-ObjectNav using a combination of heuristic, reinforcement learning, and Large Language Model (LLM)-based navigation approaches.
- Score: 55.581423861790945
- Abstract: ObjectNav is a popular task in Embodied AI, where an agent navigates to a target object in an unseen environment. Prior literature makes the assumption of a static environment with stationary objects, which lacks realism. To address this, we present a novel formulation to generalize ObjectNav to dynamic environments with non-stationary objects, and refer to it as Portable ObjectNav or P-ObjectNav. In our formulation, we first address several challenging issues with dynamizing existing topological scene graphs by developing a novel method that introduces multiple transition behaviors to portable objects in the scene. We use this technique to dynamize Matterport3D, a popular simulator for evaluating embodied tasks. We then present a benchmark for P-ObjectNav using a combination of heuristic, reinforcement learning, and Large Language Model (LLM)-based navigation approaches on the dynamized environment, while introducing novel evaluation metrics tailored for our task. Our work fundamentally challenges the "static-environment" notion of prior ObjectNav work; the code and dataset for P-ObjectNav will be made publicly available to foster research on embodied navigation in dynamic scenes. We provide an anonymized repository for our code and dataset: https://anonymous.4open.science/r/PObjectNav-1C6D.
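The abstract describes dynamizing a topological scene graph by attaching transition behaviors to portable objects. The authors' actual method is in their repository; as a rough illustration only, the sketch below models a scene graph as an adjacency list and moves objects between nodes according to per-object behaviors. All names ("random_walk", "stationary", the room layout) are invented for illustration, not taken from the paper.

```python
import random

# Illustrative sketch (not the paper's code): a topological scene graph as an
# adjacency list, plus portable objects that move between nodes each time step
# according to a simple transition behavior.
scene_graph = {
    "kitchen": ["living_room"],
    "living_room": ["kitchen", "bedroom"],
    "bedroom": ["living_room"],
}

# Hypothetical behaviors: "stationary" objects never move; "random_walk"
# objects hop to a random neighboring node every step.
objects = {
    "mug": {"node": "kitchen", "behavior": "random_walk"},
    "lamp": {"node": "bedroom", "behavior": "stationary"},
}

def step(graph, objs, rng):
    """Advance every portable object by one time step."""
    for obj in objs.values():
        if obj["behavior"] == "random_walk":
            obj["node"] = rng.choice(graph[obj["node"]])

rng = random.Random(0)
for _ in range(5):
    step(scene_graph, objects, rng)
```

In this toy model the target object's location is a function of time, so a navigation policy must reason about *when* to visit a node, not just *where* the object was last seen.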
Related papers
- Personalized Instance-based Navigation Toward User-Specific Objects in Realistic Environments [44.6372390798904]
We propose a new task denominated Personalized Instance-based Navigation (PIN), in which an embodied agent is tasked with locating and reaching a specific personal object.
In each episode, the target object is presented to the agent using two modalities: a set of visual reference images on a neutral background and manually annotated textual descriptions.
arXiv Detail & Related papers (2024-10-23T18:01:09Z) - SayNav: Grounding Large Language Models for Dynamic Planning to Navigation in New Environments [14.179677726976056]
SayNav is a new approach that leverages human knowledge from Large Language Models (LLMs) for efficient generalization to complex navigation tasks.
SayNav achieves state-of-the-art results and even outperforms an oracle based baseline with strong ground-truth assumptions by more than 8% in terms of success rate.
arXiv Detail & Related papers (2023-09-08T02:24:37Z) - Object Goal Navigation with Recursive Implicit Maps [92.6347010295396]
We propose an implicit spatial map for object goal navigation.
Our method significantly outperforms the state of the art on the challenging MP3D dataset.
We deploy our model on a real robot and achieve encouraging object goal navigation results in real scenes.
arXiv Detail & Related papers (2023-08-10T14:21:33Z) - A Contextual Bandit Approach for Learning to Plan in Environments with Probabilistic Goal Configurations [20.15854546504947]
We propose a modular framework for object-nav that is able to efficiently search indoor environments for not just static objects but also movable objects.
Our contextual-bandit agent efficiently explores the environment by showing optimism in the face of uncertainty.
We evaluate our algorithms in two simulated environments and a real-world setting, to demonstrate high sample efficiency and reliability.
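"Optimism in the face of uncertainty" is a standard bandit exploration principle: prefer the arm whose plausible upper bound on reward is highest. The paper's own agent is contextual; as a generic, simplified illustration (not the authors' algorithm), a plain UCB1 bandit over candidate search locations looks like this, with location names and rewards invented:

```python
import math

class UCB1:
    """UCB1 bandit: pick the arm with the highest optimistic value estimate."""

    def __init__(self, arms):
        self.counts = {a: 0 for a in arms}   # pulls per arm
        self.values = {a: 0.0 for a in arms}  # running mean reward per arm
        self.total = 0

    def select(self):
        # Pull each arm once first, then maximize mean + exploration bonus.
        for arm, n in self.counts.items():
            if n == 0:
                return arm
        return max(
            self.counts,
            key=lambda a: self.values[a]
            + math.sqrt(2 * math.log(self.total) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.total += 1
        self.counts[arm] += 1
        # Incremental update of the running mean.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = UCB1(["kitchen", "bedroom", "hallway"])
# Simulated feedback: the object is always found in the kitchen.
for _ in range(100):
    arm = bandit.select()
    bandit.update(arm, 1.0 if arm == "kitchen" else 0.0)
```

The exploration bonus shrinks as an arm is sampled more, so the agent concentrates on the most rewarding location while still occasionally re-checking the others, which matters when objects can move.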
arXiv Detail & Related papers (2022-11-29T15:48:54Z) - Object Memory Transformer for Object Goal Navigation [10.359616364592075]
This paper presents a reinforcement learning method for object goal navigation (Nav).
An agent navigates in 3D indoor environments to reach a target object based on long-term observations of objects and scenes.
To the best of our knowledge, this is the first work that uses a long-term memory of object semantics in a goal-oriented navigation task.
arXiv Detail & Related papers (2022-03-24T09:16:56Z) - SOON: Scenario Oriented Object Navigation with Graph-based Exploration [102.74649829684617]
The ability to navigate like a human towards a language-guided target from anywhere in a 3D embodied environment is one of the 'holy grail' goals of intelligent robots.
Most visual navigation benchmarks focus on navigating toward a target from a fixed starting point, guided by an elaborate set of step-by-step instructions.
This approach deviates from real-world problems, in which a human only describes what the object and its surroundings look like and asks the robot to start navigating from anywhere.
arXiv Detail & Related papers (2021-03-31T15:01:04Z) - ArraMon: A Joint Navigation-Assembly Instruction Interpretation Task in Dynamic Environments [85.81157224163876]
We combine Vision-and-Language Navigation, assembling of collected objects, and object referring expression comprehension, to create a novel joint navigation-and-assembly task, named ArraMon.
During this task, the agent is asked to find and collect different target objects one-by-one by navigating based on natural language instructions in a complex, realistic outdoor environment.
We present results for several baseline models (integrated and biased) and metrics (nDTW, CTC, rPOD, and PTC), and the large model-human performance gap demonstrates that our task is challenging and presents a wide scope for future work.
arXiv Detail & Related papers (2020-11-15T23:30:36Z) - Object Goal Navigation using Goal-Oriented Semantic Exploration [98.14078233526476]
This work studies the problem of object goal navigation which involves navigating to an instance of the given object category in unseen environments.
We propose a modular system called 'Goal-Oriented Semantic Exploration', which builds an episodic semantic map and uses it to explore the environment efficiently.
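An episodic semantic map accumulates, over one episode, which object categories were observed at which locations, so the agent can score where to search next. As a minimal sketch under invented assumptions (a discrete 2D grid, per-cell observation counts; not the paper's learned map), the idea can be expressed as:

```python
from collections import defaultdict

class SemanticMap:
    """Toy episodic semantic map: per-cell counts of observed categories."""

    def __init__(self):
        # (x, y) grid cell -> {category: observation count}
        self.grid = defaultdict(lambda: defaultdict(int))

    def update(self, cell, category):
        """Record one observation of `category` at `cell`."""
        self.grid[cell][category] += 1

    def score(self, cell, target):
        """Search priority for `cell`: how often the target was seen there."""
        return self.grid[cell].get(target, 0)

m = SemanticMap()
m.update((3, 4), "chair")
m.update((3, 4), "chair")
m.update((1, 1), "table")
```

The actual system builds this map from egocentric observations with learned perception; the sketch only conveys the map's role as spatial memory for goal-directed exploration.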
arXiv Detail & Related papers (2020-07-01T17:52:32Z) - ObjectNav Revisited: On Evaluation of Embodied Agents Navigating to Objects [119.46959413000594]
This document summarizes the consensus recommendations of a working group on ObjectNav.
We make recommendations on subtle but important details of evaluation criteria.
We provide a detailed description of the instantiation of these recommendations in challenges organized at the Embodied AI workshop at CVPR 2020.
arXiv Detail & Related papers (2020-06-23T17:18:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.