Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning
- URL: http://arxiv.org/abs/2208.08307v1
- Date: Wed, 17 Aug 2022 14:19:33 GMT
- Title: Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning
- Authors: Lukas Schmid, Mansoor Nasir Cheema, Victor Reijgwart, Roland Siegwart,
Federico Tombari, and Cesar Cadena
- Abstract summary: We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
- Score: 60.599223456298915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exploration of unknown environments is a fundamental problem in robotics and
an essential component in numerous applications of autonomous systems. A major
challenge in exploring unknown environments is that the robot has to plan with
the limited information available at each time step. While most current
approaches rely on heuristics and assumptions to plan paths based on these
partial observations, we instead propose a novel way to integrate deep learning
into exploration by leveraging 3D scene completion for informed, safe, and
interpretable exploration mapping and planning. Our approach, SC-Explorer,
combines scene completion using a novel incremental fusion mechanism and a
newly proposed hierarchical multi-layer mapping approach, to guarantee safety
and efficiency of the robot. We further present an informative path planning
method, leveraging the capabilities of our mapping approach and a novel
scene-completion-aware information gain. While our method is generally
applicable, we evaluate it in the use case of a Micro Aerial Vehicle (MAV). We
thoroughly study each component in high-fidelity simulation experiments using
only mobile hardware, and show that our method can speed up coverage of an
environment by 73% compared to the baselines with only minimal reduction in map
accuracy. Even if scene completions are not included in the final map, we show
that they can be used to guide the robot to choose more informative paths,
speeding up the measurement of the scene with the robot's sensors by 35%. We
make our methods available as open-source.
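The abstract does not spell out how the scene-completion-aware information gain is computed. As a rough illustration of the idea only, the sketch below scores a candidate viewpoint by counting the unknown voxels it would observe and down-weighting voxels that are so far only predicted by scene completion. The voxel states, the ray-casting input, and the weighting are illustrative assumptions, not the SC-Explorer implementation.

```python
# Illustrative sketch of a scene-completion-aware information gain;
# states, inputs, and weights are assumptions, not the authors' code.
from enum import Enum
from typing import Iterable, Tuple

Voxel = Tuple[int, int, int]

class VoxelState(Enum):
    UNKNOWN = 0    # never observed and not predicted
    COMPLETED = 1  # predicted by the scene-completion network only
    OBSERVED = 2   # measured by the robot's depth sensor

def information_gain(visible: Iterable[Tuple[Voxel, VoxelState]],
                     completed_weight: float = 0.5) -> float:
    """Score a candidate viewpoint by the new information it is expected to add.

    Unknown voxels count fully; voxels that are only scene-completed still add
    value (verifying a prediction) but are down-weighted; observed voxels add
    nothing. The `visible` set would come from ray casting, omitted here.
    """
    gain = 0.0
    for _voxel, state in visible:
        if state is VoxelState.UNKNOWN:
            gain += 1.0
        elif state is VoxelState.COMPLETED:
            gain += completed_weight
    return gain
```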
Related papers
- Explore until Confident: Efficient Exploration for Embodied Question Answering [32.27111287314288]
We leverage the strong semantic reasoning capabilities of large vision-language models to efficiently explore and answer questions.
We propose a method that first builds a semantic map of the scene based on depth information and via visual prompting of a VLM.
Next, we use conformal prediction to calibrate the VLM's question answering confidence, allowing the robot to know when to stop exploration.
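A minimal sketch of how split conformal prediction can turn answer confidences into a calibrated stopping rule, assuming a held-out calibration set of questions; the nonconformity score, the coverage level alpha, and the singleton stopping criterion are assumptions made for illustration, not the paper's code.

```python
# Hypothetical split conformal prediction for a "stop exploring" rule.
import numpy as np

def conformal_threshold(cal_softmax_true: np.ndarray, alpha: float = 0.1) -> float:
    """Calibrate on held-out questions: nonconformity = 1 - softmax of the true answer."""
    scores = 1.0 - cal_softmax_true
    n = len(scores)
    # Finite-sample corrected quantile used in split conformal prediction.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(scores, level))

def prediction_set(answer_softmax: np.ndarray, qhat: float) -> np.ndarray:
    """Keep every candidate answer whose nonconformity falls below the threshold."""
    return np.flatnonzero(1.0 - answer_softmax <= qhat)

def should_stop(answer_softmax: np.ndarray, qhat: float) -> bool:
    """Stop exploring once the calibrated prediction set contains a single answer."""
    return len(prediction_set(answer_softmax, qhat)) == 1
```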
arXiv Detail & Related papers (2024-03-23T22:04:03Z) - Deep Reinforcement Learning with Dynamic Graphs for Adaptive Informative Path Planning [22.48658555542736]
A key task in robotic data acquisition is planning paths through an initially unknown environment to collect observations.
We propose a novel deep reinforcement learning approach for adaptively replanning robot paths to map targets of interest in unknown 3D environments.
arXiv Detail & Related papers (2024-02-07T14:24:41Z) - Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
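A minimal, non-learned sketch of the underlying fusion step that this summary implies: depth points, already expressed in the world frame via the robot's trajectory, are accumulated into a robot-centred height map with missing cells left empty. The paper's learned reconstruction model is not reproduced here, and the grid size and resolution are assumptions.

```python
# Geometric accumulation sketch only; not the paper's learned model.
import numpy as np

def fuse_height_map(points_world: np.ndarray,
                    robot_xy: np.ndarray,
                    grid_size: int = 64,
                    resolution: float = 0.05) -> np.ndarray:
    """Accumulate 3D points (N, 3) in the world frame into a robot-centred 2.5D grid.

    Each cell keeps the highest observed z value; unseen cells stay NaN,
    mimicking the missing data caused by camera blind spots.
    """
    half = grid_size * resolution / 2.0
    heights = np.full((grid_size, grid_size), np.nan)
    rel = points_world[:, :2] - robot_xy                   # position relative to robot
    idx = np.floor((rel + half) / resolution).astype(int)  # grid cell indices
    in_bounds = np.all((idx >= 0) & (idx < grid_size), axis=1)
    for (i, j), z in zip(idx[in_bounds], points_world[in_bounds, 2]):
        if np.isnan(heights[i, j]) or z > heights[i, j]:
            heights[i, j] = z
    return heights
```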
arXiv Detail & Related papers (2022-06-16T10:45:17Z) - ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints [94.60414567852536]
Long-range navigation requires both planning and reasoning about local traversability.
We propose a learning-based approach that integrates learning and planning.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away.
arXiv Detail & Related papers (2022-02-23T02:14:23Z) - Unsupervised Online Learning for Robotic Interestingness with Visual
Memory [9.189959184116962]
We develop a method that automatically adapts online to the environment to report interesting scenes quickly.
We achieve an average of 20% higher accuracy than the state-of-the-art unsupervised methods in a subterranean tunnel environment.
arXiv Detail & Related papers (2021-11-18T16:51:39Z) - Rapid Exploration for Open-World Navigation with Latent Goal Models [78.45339342966196]
We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.
At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory of images.
We use an information bottleneck to regularize the learned policy, giving us (i) a compact visual representation of goals, (ii) improved generalization capabilities, and (iii) a mechanism for sampling feasible goals for exploration.
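A minimal sketch of a variational information-bottleneck regularizer of the kind described, assuming a diagonal-Gaussian latent over goal images and a weighting factor beta; this is the generic formulation, not the paper's implementation.

```python
# Generic variational-information-bottleneck term, for illustration only:
# total loss = prediction loss + beta * KL(q(z | goal image) || N(0, I)).
import numpy as np

def kl_diag_gaussian_to_standard_normal(mu: np.ndarray, log_var: np.ndarray) -> float:
    """Closed-form KL divergence between N(mu, diag(exp(log_var))) and N(0, I)."""
    return float(0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var))

def bottleneck_loss(prediction_loss: float,
                    mu: np.ndarray,
                    log_var: np.ndarray,
                    beta: float = 1e-3) -> float:
    """Penalize latent codes that carry more information about the goal image
    than is needed to predict distances and actions."""
    return prediction_loss + beta * kl_diag_gaussian_to_standard_normal(mu, log_var)
```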
arXiv Detail & Related papers (2021-04-12T23:14:41Z) - SOON: Scenario Oriented Object Navigation with Graph-based Exploration [102.74649829684617]
The ability to navigate like a human towards a language-guided target from anywhere in a 3D embodied environment is one of the 'holy grail' goals of intelligent robots.
Most visual navigation benchmarks focus on navigating toward a target from a fixed starting point, guided by an elaborate set of instructions that describe the route step by step.
This setting deviates from real-world problems in which a human only describes what the object and its surroundings look like and asks the robot to start navigating from anywhere.
arXiv Detail & Related papers (2021-03-31T15:01:04Z) - ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z) - Autonomous UAV Exploration of Dynamic Environments via Incremental
Sampling and Probabilistic Roadmap [0.3867363075280543]
We propose a novel dynamic exploration planner (DEP) for exploring unknown environments using incremental sampling and a Probabilistic Roadmap (PRM).
Our method safely explores dynamic environments and outperforms the benchmark planners in terms of exploration time, path length, and computational time.
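A minimal sketch of the incremental-PRM idea, in which newly sampled collision-free nodes are added to the existing roadmap each planning cycle instead of rebuilding it. The collision checker is stubbed and all bounds, radii, and sample counts are assumptions; this is not the DEP planner's code.

```python
# Illustrative incremental roadmap update; collision checking is stubbed out.
import math
import random

def incremental_prm_update(nodes, edges, is_free, num_samples=20,
                           bounds=((-10, 10), (-10, 10), (0, 3)),
                           connect_radius=1.5):
    """Add new samples to an existing roadmap instead of rebuilding it.

    nodes: list of (x, y, z) tuples; edges: dict node_index -> set of node indices.
    is_free(p) stands in for any collision checker over the current map.
    """
    for _ in range(num_samples):
        p = tuple(random.uniform(lo, hi) for lo, hi in bounds)
        if not is_free(p):
            continue
        new_idx = len(nodes)
        nodes.append(p)
        edges[new_idx] = set()
        for idx, q in enumerate(nodes[:-1]):
            # Segment check omitted; re-check the node since the dynamic map may have changed.
            if math.dist(p, q) <= connect_radius and is_free(q):
                edges[new_idx].add(idx)
                edges.setdefault(idx, set()).add(new_idx)
    return nodes, edges
```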
arXiv Detail & Related papers (2020-10-14T22:52:37Z)