Map Prediction and Generative Entropy for Multi-Agent Exploration
- URL: http://arxiv.org/abs/2501.13189v1
- Date: Wed, 22 Jan 2025 19:40:04 GMT
- Title: Map Prediction and Generative Entropy for Multi-Agent Exploration
- Authors: Alexander Spinos, Bradley Woosley, Justin Rokisky, Christopher Korpela, John G. Rogers III, Brian A. Bittner
- Abstract summary: We develop a map predictor that inpaints the unknown space in a multi-agent 2D occupancy map during an exploration mission.
We identify areas that exhibit high uncertainty in the prediction, which we formalize with the concept of generative entropy.
Our results demonstrate that by using our new task ranking method, we can predict a correct scene significantly faster than with a traditional information-guided method.
- Abstract: Traditionally, autonomous reconnaissance applications have acted on explicit sets of historical observations. Aided by recent breakthroughs in generative technologies, this work enables robot teams to act beyond what is currently known about the environment by inferring a distribution of reasonable interpretations of the scene. We developed a map predictor that inpaints the unknown space in a multi-agent 2D occupancy map during an exploration mission. From a comparison of several inpainting methods, we found that a fine-tuned latent diffusion inpainting model could provide rich and coherent interpretations of simulated urban environments with relatively little computation time. By iteratively inferring interpretations of the scene throughout an exploration run, we are able to identify areas that exhibit high uncertainty in the prediction, which we formalize with the concept of generative entropy. We prioritize tasks in regions of high generative entropy, hypothesizing that this will expedite convergence on an accurate predicted map of the scene. In our study we juxtapose this new paradigm of task ranking with the state of the art, which ranks regions to explore by those which maximize expected information recovery. We compare both of these methods in a simulated urban environment with three vehicles. Our results demonstrate that by using our new task ranking method, we can predict a correct scene significantly faster than with a traditional information-guided method.
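The generative entropy idea described above can be sketched generically: draw an ensemble of inpainted map samples, compute the per-cell Shannon entropy of the empirical class distribution, and rank candidate task regions by total entropy. This is a minimal illustration under the assumption that the map is a discretized grid of discrete cell classes; the function names and region representation are illustrative, not the authors' implementation.

```python
import numpy as np


def generative_entropy(samples: np.ndarray) -> np.ndarray:
    """Per-cell Shannon entropy over an ensemble of inpainted maps.

    samples: (K, H, W) integer array of K inpainted samples, where each
             entry is a discrete cell class (e.g. 0 = free, 1 = occupied).
    Returns an (H, W) entropy map in nats.
    """
    K = samples.shape[0]
    entropy = np.zeros(samples.shape[1:])
    for c in np.unique(samples):
        # Empirical probability of class c at each cell across the ensemble.
        p = (samples == c).sum(axis=0) / K
        # Accumulate -p * log(p), treating p == 0 cells as contributing zero.
        with np.errstate(divide="ignore", invalid="ignore"):
            entropy += np.where(p > 0, -p * np.log(p), 0.0)
    return entropy


def rank_tasks(entropy_map: np.ndarray, regions: list) -> list:
    """Rank candidate task regions (tuples of slices) by total
    generative entropy, highest first."""
    scores = [entropy_map[r].sum() for r in regions]
    return sorted(range(len(regions)), key=lambda i: -scores[i])
```

Cells where all samples agree contribute zero entropy, so the ranking naturally steers agents toward regions where the generative model produces conflicting interpretations of the unknown space.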
Related papers
- MapEx: Indoor Structure Exploration with Probabilistic Information Gain from Global Map Predictions [6.420382919565209]
We focus on robots exploring structured indoor environments which are often predictable and composed of repeating patterns.
Recent works use deep learning techniques to predict unknown regions of the map, using these predictions for information gain calculation.
We introduce MapEx, a new exploration framework that uses predicted maps to form a probabilistic sensor model for information gain estimation.
arXiv Detail & Related papers (2024-09-23T22:48:04Z)
- Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
- SePaint: Semantic Map Inpainting via Multinomial Diffusion [12.217566404643033]
We propose SePaint, an inpainting model for semantic data based on generative multinomial diffusion.
We propose a novel and efficient condition strategy, Look-Back Condition (LB-Con), which performs one-step look-back operations.
We have conducted extensive experiments on different datasets, showing our proposed model outperforms commonly used methods in various robotic applications.
arXiv Detail & Related papers (2023-03-05T18:04:28Z)
- TempSAL -- Uncovering Temporal Information for Deep Saliency Prediction [64.63645677568384]
We introduce a novel saliency prediction model that learns to output saliency maps in sequential time intervals.
Our approach locally modulates the saliency predictions by combining the learned temporal maps.
Our code will be publicly available on GitHub.
arXiv Detail & Related papers (2023-01-05T22:10:16Z)
- Learning-Augmented Model-Based Planning for Visual Exploration [8.870188183999854]
We propose a novel exploration approach using learning-augmented model-based planning.
Visual sensing and advances in semantic mapping of indoor scenes are exploited.
Our approach surpasses the greedy strategies by 2.1% and the RL-based exploration methods by 8.4% in terms of coverage.
arXiv Detail & Related papers (2022-11-15T04:53:35Z)
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- MUSE-VAE: Multi-Scale VAE for Environment-Aware Long Term Trajectory Prediction [28.438787700968703]
Conditional MUSE offers predictions that are both more diverse and more accurate than the current state of the art.
We demonstrate these assertions through a comprehensive set of experiments on nuScenes and SDD benchmarks as well as PFSD, a new synthetic dataset.
arXiv Detail & Related papers (2022-01-18T18:40:03Z)
- Latent World Models For Intrinsically Motivated Exploration [140.21871701134626]
We present a self-supervised representation learning method for image-based observations.
We consider episodic and life-long uncertainties to guide the exploration of partially observable environments.
arXiv Detail & Related papers (2020-10-05T19:47:04Z)
- Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps, our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.