Learning-Augmented Model-Based Planning for Visual Exploration
- URL: http://arxiv.org/abs/2211.07898v2
- Date: Wed, 9 Aug 2023 16:50:42 GMT
- Title: Learning-Augmented Model-Based Planning for Visual Exploration
- Authors: Yimeng Li, Arnab Debnath, Gregory Stein, Jana Kosecka
- Abstract summary: We propose a novel exploration approach using learning-augmented model-based planning.
Visual sensing and advances in semantic mapping of indoor scenes are exploited.
Our approach surpasses the greedy strategies by 2.1% and the RL-based exploration methods by 8.4% in terms of coverage.
- Score: 8.870188183999854
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of time-limited robotic exploration in previously
unseen environments where exploration is limited by a predefined amount of
time. We propose a novel exploration approach using learning-augmented
model-based planning. We generate a set of subgoals associated with frontiers
on the current map and derive a Bellman Equation for exploration with these
subgoals. Visual sensing and advances in semantic mapping of indoor scenes are
exploited for training a deep convolutional neural network to estimate
properties associated with each frontier: the expected unobserved area beyond
the frontier and the expected timesteps (discretized actions) required to
explore it. The proposed model-based planner is guaranteed to explore the whole
scene if time permits. We thoroughly evaluate our approach on a large-scale
pseudo-realistic indoor dataset (Matterport3D) with the Habitat simulator. We
compare our approach with classical and more recent RL-based exploration
methods. Our approach surpasses the greedy strategies by 2.1% and the RL-based
exploration methods by 8.4% in terms of coverage.
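The planner described in the abstract can be read as a value computation over frontier subgoals, where the learned network supplies each frontier's expected unobserved area and the timesteps needed to explore it. Below is a minimal, illustrative Python sketch of that idea; the `Frontier` fields, the `travel_time` matrix, and the subset recursion are our own simplifications and assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from functools import lru_cache


@dataclass(frozen=True)
class Frontier:
    area: float          # predicted unobserved area beyond the frontier
    explore_time: int    # predicted timesteps to explore it


def plan_coverage(frontiers, travel_time, budget):
    """Bellman-style recursion over frontier subgoals: maximize the expected
    newly observed area reachable within the remaining time budget.

    frontiers   : list[Frontier] with network-predicted area / explore_time
    travel_time : (n+1) x (n+1) matrix of estimated timesteps between
                  locations; indices 0..n-1 are frontiers, index n is the
                  robot's current pose
    budget      : remaining exploration timesteps
    """
    n = len(frontiers)

    @lru_cache(maxsize=None)
    def value(current, visited_mask, time_left):
        best = 0.0
        for j in range(n):
            if visited_mask & (1 << j):
                continue  # frontier j already planned
            cost = travel_time[current][j] + frontiers[j].explore_time
            if cost > time_left:
                continue  # cannot reach and explore j within the budget
            gain = frontiers[j].area + value(
                j, visited_mask | (1 << j), time_left - cost)
            best = max(best, gain)
        return best

    return value(n, 0, budget)  # start from the robot's current pose
```

In practice such a plan would be recomputed whenever the map and the frontier set are updated, since the per-frontier quantities are network predictions rather than ground truth.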
Related papers
- Map Prediction and Generative Entropy for Multi-Agent Exploration [37.938606877112]
We develop a map predictor that inpaints the unknown space in a multi-agent 2D occupancy map during an exploration mission.
We identify areas that exhibit high uncertainty in the prediction, which we formalize with the concept of generative entropy.
Our results demonstrate that by using our new task ranking method, we can predict a correct scene significantly faster than with a traditional information-guided method.
arXiv Detail & Related papers (2025-01-22T19:40:04Z)
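The map-prediction entry above turns predictive uncertainty into an exploration signal. A generic way to realize "generative entropy" is to sample several completions from the inpainting model and measure per-cell disagreement; the sketch below assumes a hypothetical `sample_completion` function returning per-cell occupancy probabilities, and is not the paper's interface.

```python
import numpy as np


def generative_entropy(partial_map, sample_completion, num_samples=16):
    """Per-cell binary entropy over sampled map completions.

    `sample_completion(partial_map)` is assumed to return one inpainted
    occupancy map (values in [0, 1]) for the unknown region; high-entropy
    cells mark areas worth visiting next.
    """
    samples = np.stack([sample_completion(partial_map)
                        for _ in range(num_samples)])
    p = samples.mean(axis=0)          # per-cell occupancy probability
    eps = 1e-6
    return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
```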
- FrontierNet: Learning Visual Cues to Explore [54.8265603996238]
This work aims at leveraging 2D visual cues for efficient autonomous exploration, addressing the limitations of extracting goal poses from a 3D map.
We propose an image-only frontier-based exploration system, with FrontierNet as a core component developed in this work.
Our approach provides an alternative to existing 3D-dependent exploration systems, achieving a 16% improvement in early-stage exploration efficiency.
arXiv Detail & Related papers (2025-01-08T16:25:32Z)
- How To Not Train Your Dragon: Training-free Embodied Object Goal Navigation with Semantic Frontiers [94.46825166907831]
We present a training-free solution to tackle the object goal navigation problem in Embodied AI.
Our method builds a structured scene representation based on the classic visual simultaneous localization and mapping (V-SLAM) framework.
Our method propagates semantics on the scene graphs based on language priors and scene statistics to introduce semantic knowledge to the geometric frontiers.
arXiv Detail & Related papers (2023-05-26T13:38:33Z)
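The training-free entry above attaches semantic knowledge to geometric frontiers using language priors. One hedged way to picture this is to combine a standard frontier utility with a similarity score between the goal category and the object labels observed near each frontier; the `embed` function, the max-similarity rule, and the weighting below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np


def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def score_frontier(frontier_size, nearby_labels, goal_label, embed, alpha=0.5):
    """Blend geometric utility (frontier size) with a language-prior score.

    `embed(label)` is assumed to return a text embedding from any
    off-the-shelf encoder; `nearby_labels` are object classes detected
    close to the frontier.
    """
    if nearby_labels:
        semantic = max(cosine(embed(goal_label), embed(lbl))
                       for lbl in nearby_labels)
    else:
        semantic = 0.0
    return alpha * frontier_size + (1.0 - alpha) * semantic
```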
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- Focus on Impact: Indoor Exploration with Intrinsic Motivation [45.97756658635314]
In this work, we propose to train a model with a purely intrinsic reward signal to guide exploration.
We include a neural-based density model and replace the traditional count-based regularization with an estimated pseudo-count of previously visited states.
We also show that a robot equipped with the proposed approach seamlessly adapts to point-goal navigation and real-world deployment.
arXiv Detail & Related papers (2021-09-14T18:00:07Z)
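The "Focus on Impact" entry above replaces explicit visit counts with a pseudo-count derived from a learned density model. A generic version of that bonus, following the pseudo-count construction of Bellemare et al. (2016) rather than necessarily the exact form used in the paper, looks like this:

```python
import math


def pseudo_count(density_before, density_after):
    """Pseudo-count implied by a learned density model: evaluate the density
    of a state before and after one training update on that state and solve
    for the effective visit count n_hat."""
    # Solves: density_before = n_hat / N,  density_after = (n_hat + 1) / (N + 1)
    return density_before * (1.0 - density_after) / max(
        density_after - density_before, 1e-8)


def intrinsic_reward(density_before, density_after):
    """Count-style exploration bonus: large for rarely visited states."""
    n_hat = pseudo_count(density_before, density_after)
    return 1.0 / math.sqrt(n_hat + 1.0)
```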
- MADE: Exploration via Maximizing Deviation from Explored Regions [48.49228309729319]
In online reinforcement learning (RL), efficient exploration remains challenging in high-dimensional environments with sparse rewards.
We propose a new exploration approach via maximizing the deviation of the next policy's occupancy from the explored regions.
Our approach significantly improves sample efficiency over state-of-the-art methods.
arXiv Detail & Related papers (2021-06-18T17:57:00Z)
- Deep Reinforcement Learning for Adaptive Exploration of Unknown Environments [6.90777229452271]
We develop an adaptive exploration approach for UAVs that trades off exploration and exploitation in a single step.
The proposed approach uses a map segmentation technique to decompose the environment map into smaller, tractable maps.
The results demonstrate that our proposed approach is capable of navigating through randomly generated environments and covering more of the area of interest (AoI) in fewer time steps than the baselines.
arXiv Detail & Related papers (2021-05-04T16:29:44Z)
- Autonomous UAV Exploration of Dynamic Environments via Incremental Sampling and Probabilistic Roadmap [0.3867363075280543]
We propose a novel dynamic exploration planner (DEP) for exploring unknown environments using incremental sampling and a Probabilistic Roadmap (PRM).
Our method safely explores dynamic environments and outperforms the benchmark planners in terms of exploration time, path length, and computational time.
arXiv Detail & Related papers (2020-10-14T22:52:37Z)
- Latent World Models For Intrinsically Motivated Exploration [140.21871701134626]
We present a self-supervised representation learning method for image-based observations.
We consider episodic and life-long uncertainties to guide the exploration of partially observable environments.
arXiv Detail & Related papers (2020-10-05T19:47:04Z)
- Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
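The occupancy anticipation entry frames map prediction as an image-to-image task: from an egocentric map built from RGB-D observations, predict occupancy for cells that have not been seen yet. The tiny encoder-decoder below is only a schematic PyTorch sketch of that input/output structure, not the winning model; channel counts and layer sizes are arbitrary assumptions.

```python
import torch.nn as nn


class OccupancyAnticipator(nn.Module):
    """Toy encoder-decoder mapping an egocentric occupancy map (2 channels:
    observed free / occupied) to anticipated occupancy logits that also
    cover unseen cells."""

    def __init__(self, in_channels=2, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden * 2, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, 2, 4, stride=2, padding=1),
        )

    def forward(self, ego_map):
        # Logits for (free, occupied) at every cell, including unseen ones.
        return self.decoder(self.encoder(ego_map))
```

In simulation, such a model could be trained with a per-cell cross-entropy loss against the full ground-truth map, observed and unseen cells alike.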
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.