Uncertainty-driven Planner for Exploration and Navigation
- URL: http://arxiv.org/abs/2202.11907v1
- Date: Thu, 24 Feb 2022 05:25:31 GMT
- Title: Uncertainty-driven Planner for Exploration and Navigation
- Authors: Georgios Georgakis, Bernadette Bucher, Anton Arapin, Karl
Schmeckpeper, Nikolai Matni, Kostas Daniilidis
- Abstract summary: We consider the problems of exploration and point-goal navigation in previously unseen environments.
We argue that learning occupancy priors over indoor maps provides significant advantages towards addressing these problems.
We present a novel planning framework that first learns to generate occupancy maps beyond the field-of-view of the agent.
- Score: 36.933903274373336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problems of exploration and point-goal navigation in
previously unseen environments, where the spatial complexity of indoor scenes
and partial observability make these tasks challenging. We argue that
learning occupancy priors over indoor maps provides significant advantages
towards addressing these problems. To this end, we present a novel planning
framework that first learns to generate occupancy maps beyond the field-of-view
of the agent, and second leverages the model uncertainty over the generated
areas to formulate path selection policies for each task of interest. For
point-goal navigation, the policy selects paths using an upper confidence bound
criterion that favors efficient and traversable paths, while for exploration the policy
maximizes model uncertainty over candidate paths. We perform experiments in the
visually realistic environments of Matterport3D using the Habitat simulator and
demonstrate: 1) Improved results on exploration and map quality metrics over
competitive methods, and 2) The effectiveness of our planning module when
paired with the state-of-the-art DD-PPO method for the point-goal navigation
task.
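To make the two path-selection policies concrete, below is a minimal Python sketch of how a planner might score candidate paths over a predicted occupancy map: an upper-confidence-bound style score for point-goal navigation and pure uncertainty maximization for exploration. The function names, inputs, and exact scoring terms are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def select_path(candidate_paths, occupancy_mean, occupancy_var, task, goal=None, beta=1.0):
    """Pick a path from candidates following the two policies sketched in the abstract.

    candidate_paths: list of paths, each an array of (row, col) map cells.
    occupancy_mean:  HxW predicted occupancy probabilities (0 = free, 1 = occupied).
    occupancy_var:   HxW model uncertainty over the generated map regions.
    All names and the exact scoring are illustrative assumptions, not the paper's code.
    """
    scores = []
    for path in candidate_paths:
        cells = np.asarray(path)
        occ = occupancy_mean[cells[:, 0], cells[:, 1]]
        var = occupancy_var[cells[:, 0], cells[:, 1]]
        if task == "pointgoal":
            # UCB-style score: prefer paths predicted to be free (traversable)
            # and close to the goal (efficient), with an optimism bonus on uncertain cells.
            distance_to_goal = np.linalg.norm(cells[-1] - np.asarray(goal))
            traversability = 1.0 - occ.mean()
            scores.append(traversability + beta * np.sqrt(var.mean()) - 0.01 * distance_to_goal)
        else:  # exploration
            # Pure uncertainty maximization over the generated (unseen) areas.
            scores.append(var.sum())
    return candidate_paths[int(np.argmax(scores))]
```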
Related papers
- Path Planning based on 2D Object Bounding-box [8.082514573754954]
We present a path planning method that utilizes 2D bounding boxes of objects, developed through imitation learning in urban driving scenarios.
This is achieved by integrating high-definition (HD) map data with images captured by surrounding cameras.
We evaluate our model on the nuPlan planning task and observe that it performs competitively compared with existing vision-centric methods.
arXiv Detail & Related papers (2024-02-22T19:34:56Z) - FIT-SLAM -- Fisher Information and Traversability estimation-based
Active SLAM for exploration in 3D environments [1.4474137122906163]
Active visual SLAM finds a wide array of applications for ground robots in GPS-denied subterranean and outdoor environments.
It is imperative to incorporate the perception considerations in the goal selection and path planning towards the goal during an exploration mission.
We propose FIT-SLAM, a new exploration method tailored for unmanned ground vehicles (UGVs) to explore 3D environments.
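As a rough illustration of the kind of goal selection FIT-SLAM describes, the sketch below ranks candidate exploration goals by a weighted combination of expected Fisher information and traversability. The map inputs, weighting, and function names are assumptions made for illustration, not the paper's method.

```python
import numpy as np

def rank_exploration_goals(candidate_goals, fisher_info_map, traversability_map, alpha=0.5):
    """Illustrative goal ranking in the spirit of FIT-SLAM: trade off expected
    localization information (Fisher information) against terrain traversability.
    Maps, weighting, and names are illustrative assumptions only.
    """
    scores = []
    for (r, c) in candidate_goals:
        info_gain = fisher_info_map[r, c]        # expected Fisher information at the goal
        traversable = traversability_map[r, c]   # 1.0 = easily reachable, 0.0 = blocked
        scores.append(alpha * info_gain + (1.0 - alpha) * traversable)
    order = np.argsort(scores)[::-1]
    return [candidate_goals[i] for i in order]
```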
arXiv Detail & Related papers (2024-01-17T16:46:38Z) - ETPNav: Evolving Topological Planning for Vision-Language Navigation in
Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability of obstacle-avoiding control in continuous environments.
ETPNav yields more than 10% and 20% improvements over the prior state-of-the-art on the R2R-CE and RxR-CE datasets, respectively.
arXiv Detail & Related papers (2023-04-06T13:07:17Z) - Long-HOT: A Modular Hierarchical Approach for Long-Horizon Object
Transport [83.06265788137443]
We address key challenges in long-horizon embodied exploration and navigation by proposing a new object transport task and a novel modular framework for temporally extended navigation.
Our first contribution is the design of a novel Long-HOT environment focused on deep exploration and long-horizon planning.
We propose a modular hierarchical transport policy (HTP) that builds a topological graph of the scene to perform exploration with the help of weighted frontiers.
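A minimal sketch of weighted-frontier selection in the spirit of the HTP exploration step is given below; the scoring terms and data structures are illustrative assumptions, not the authors' code.

```python
import numpy as np

def pick_frontier(frontiers, agent_node, graph_distances, coverage_gain, w_dist=0.3):
    """Score each frontier by the unexplored area it is expected to reveal,
    discounted by graph distance from the agent's current node, and return the best.
    Weights and inputs are illustrative assumptions.
    """
    best, best_score = None, -np.inf
    for f in frontiers:
        score = coverage_gain[f] - w_dist * graph_distances[agent_node][f]
        if score > best_score:
            best, best_score = f, score
    return best
```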
arXiv Detail & Related papers (2022-10-28T05:30:49Z) - Landmark Policy Optimization for Object Navigation Task [77.34726150561087]
This work studies the object goal navigation task, which involves navigating to the closest object of the given semantic category in unseen environments.
Recent works have shown significant achievements with both end-to-end reinforcement learning approaches and modular systems, but a substantial step forward is still needed for these methods to become robust and optimal.
We propose a hierarchical method that incorporates standard task formulation and additional area knowledge as landmarks, with a way to extract these landmarks.
arXiv Detail & Related papers (2021-09-17T12:28:46Z) - Deep Reinforcement Learning for Adaptive Exploration of Unknown
Environments [6.90777229452271]
We develop an adaptive exploration approach for UAVs that trades off between exploration and exploitation in a single step.
The proposed approach uses a map segmentation technique to decompose the environment map into smaller, tractable maps.
The results demonstrate that our proposed approach is capable of navigating through randomly generated environments and covering more of the area of interest (AoI) in fewer time steps than the baselines.
arXiv Detail & Related papers (2021-05-04T16:29:44Z) - Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
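For intuition, a toy PyTorch stand-in for an occupancy-anticipation model is sketched below: it maps a partial egocentric top-down map (projected from RGB-D) to occupancy predictions beyond the visible region. The architecture and layer sizes are assumptions for illustration and do not reflect the paper's actual model.

```python
import torch
import torch.nn as nn

class OccupancyAnticipator(nn.Module):
    """Toy stand-in for an occupancy-anticipation model: takes a partial egocentric
    top-down map and predicts occupancy beyond the visible region.
    The design and layer sizes are illustrative assumptions only."""

    def __init__(self, in_channels=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2, 1),  # 2 output channels: occupied / explored
        )

    def forward(self, partial_map):
        # partial_map: (B, 2, H, W) with occupancy and "seen" masks
        return torch.sigmoid(self.net(partial_map))
```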
arXiv Detail & Related papers (2020-08-21T03:16:51Z) - Object Goal Navigation using Goal-Oriented Semantic Exploration [98.14078233526476]
This work studies the problem of object goal navigation which involves navigating to an instance of the given object category in unseen environments.
We propose a modular system called 'Goal-Oriented Semantic Exploration', which builds an episodic semantic map and uses it to explore the environment efficiently.
arXiv Detail & Related papers (2020-07-01T17:52:32Z) - Using Deep Reinforcement Learning Methods for Autonomous Vessels in 2D
Environments [11.657524999491029]
In this work, we use deep reinforcement learning, combining Q-learning with a neural representation to avoid instability.
Our methodology combines deep Q-learning with a rolling-wave planning approach drawn from agile methodology.
Experimental results show that the proposed method enhanced the performance of VVN by 55.31 on average for long-distance missions.
arXiv Detail & Related papers (2020-03-23T12:58:58Z)