Federated Multi-Agent Mapping for Planetary Exploration
- URL: http://arxiv.org/abs/2404.02289v2
- Date: Sun, 29 Sep 2024 12:50:46 GMT
- Title: Federated Multi-Agent Mapping for Planetary Exploration
- Authors: Tiberiu-Ioan Szatmari, Abhishek Cauligi
- Abstract summary: We propose an approach to jointly train a centralized map model across agents without the need to share raw data.
Our approach leverages implicit neural mapping to generate parsimonious and adaptable representations.
We demonstrate the efficacy of our proposed federated mapping approach using Martian terrains and glacier datasets.
- Score: 0.4143603294943439
- Abstract: Multi-agent robotic exploration stands to play an important role in space exploration as the next generation of spacecraft robotic systems venture to more extreme and far-flung environments. A key challenge in this new paradigm will be to effectively share and utilize the vast amount of data generated on-board while operating in bandwidth-constrained regimes such as those often found in space missions. Federated learning (FL) is a promising tool for bridging this gap for a host of tasks studied across proposed mission concepts. Drawing inspiration from the upcoming CADRE Lunar rover mission, we study the task of federated multi-agent mapping and propose an approach to jointly train a centralized map model across agents without the need to share raw data. Our approach leverages implicit neural mapping to generate parsimonious and adaptable representations. We further enhance this approach with meta-initialization on Earth datasets, pre-training the network to quickly adapt to extreme and rugged terrain. We demonstrate the efficacy of our proposed federated mapping approach using Martian terrains and glacier datasets and show how it outperforms benchmarks on map reconstruction losses as well as downstream path planning tasks.
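The core federated step described in the abstract can be illustrated with a minimal sketch: each agent fits its own copy of a small implicit map network (here, a toy MLP mapping (x, y) coordinates to elevation) on locally observed terrain, and only the network weights, never the raw observations, are averaged by a central server. All names, sizes, and the terrain function below are invented for illustration; this is plain FedAvg, not the paper's full method (which adds meta-initialization).

```python
# Minimal federated-averaging sketch for an implicit neural map.
# Everything here (network size, terrain function, hyperparameters)
# is a hypothetical stand-in for the approach described above.
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(hidden=16):
    # Tiny 2-layer MLP: (x, y) -> hidden -> elevation.
    return {
        "W1": rng.normal(0, 0.1, (2, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0, 0.1, (hidden, 1)),
        "b2": np.zeros(1),
    }

def predict(params, xy):
    h = np.tanh(xy @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

def local_sgd(params, xy, z, lr=0.05, steps=50):
    # Each agent trains a private copy on its own terrain patch.
    p = {k: v.copy() for k, v in params.items()}
    for _ in range(steps):
        h = np.tanh(xy @ p["W1"] + p["b1"])
        err = (h @ p["W2"] + p["b2"]) - z          # (N, 1) residual
        dh = (err @ p["W2"].T) * (1 - h ** 2)      # backprop through tanh
        p["W2"] -= lr * (h.T @ err) / len(xy)
        p["b2"] -= lr * err.mean(0)
        p["W1"] -= lr * (xy.T @ dh) / len(xy)
        p["b1"] -= lr * dh.mean(0)
    return p

def fed_avg(local_params):
    # Server step: element-wise average of agents' weights -- no raw data shared.
    return {k: np.mean([p[k] for p in local_params], axis=0)
            for k in local_params[0]}

# Three agents, each observing only its own random patch of a synthetic terrain.
terrain = lambda xy: np.sin(3 * xy[:, :1]) * np.cos(2 * xy[:, 1:])
patches = [rng.uniform(-1, 1, (64, 2)) for _ in range(3)]
global_model = init_mlp()
for _ in range(5):  # communication rounds
    locals_ = [local_sgd(global_model, xy, terrain(xy)) for xy in patches]
    global_model = fed_avg(locals_)
```

The privacy/bandwidth point is visible in the loop: the server only ever sees the parameter dictionaries, whose size is fixed by the network, not by how much terrain each agent has observed.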
Related papers
- Few-shot Scooping Under Domain Shift via Simulated Maximal Deployment Gaps [25.102403059931184]
This paper studies the few-shot scooping problem and proposes a vision-based adaptive scooping strategy.
We train a deep kernel model to adapt to large domain shifts by creating simulated deployment gaps from an offline training dataset.
The proposed method also demonstrates zero-shot transfer capability, successfully adapting to the NASA OWLAT platform.
arXiv Detail & Related papers (2024-08-06T04:25:09Z)
- Active Neural Topological Mapping for Multi-Agent Exploration [24.91397816926568]
The multi-agent cooperative exploration problem requires multiple agents to explore an unseen environment via sensory signals within a limited time.
Topological maps are a promising alternative as they consist only of nodes and edges with abstract but essential information.
Deep reinforcement learning has shown great potential for learning (near) optimal policies through fast end-to-end inference.
We propose Multi-Agent Neural Topological Mapping (MANTM) to improve exploration efficiency and generalization for multi-agent exploration tasks.
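The topological-map idea in this entry can be made concrete with a small, self-contained sketch (not the MANTM implementation): the environment is kept as a sparse graph of landmark nodes and traversability edges, exploration targets are "frontier" nodes that still border unexplored space, and planning is graph search rather than metric-map search. Node names and the unit-cost BFS below are illustrative choices.

```python
# Illustrative topological map: nodes + edges with abstract information,
# as opposed to a dense metric grid. Hypothetical sketch, not MANTM itself.
from collections import deque

class TopologicalMap:
    def __init__(self):
        self.edges = {}        # node -> set of neighbouring nodes
        self.explored = set()  # nodes an agent has actually visited

    def add_edge(self, a, b):
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def mark_explored(self, node):
        self.explored.add(node)

    def frontiers(self):
        # Frontier = visited node with at least one unvisited neighbour;
        # these are the natural exploration goals on a topological map.
        return {n for n in self.explored
                if any(m not in self.explored for m in self.edges.get(n, ()))}

    def shortest_path(self, start, goal):
        # BFS over the abstract graph (unit edge cost).
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None
```

Because the graph stores only nodes and edges, both the map itself and the plans over it stay compact regardless of the metric size of the environment, which is the efficiency argument the summary makes.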
arXiv Detail & Related papers (2023-11-01T03:06:14Z)
- Knowledge distillation with Segment Anything (SAM) model for Planetary Geological Mapping [0.7266531288894184]
We show the effectiveness of a prompt-based foundation model for rapid annotation and quick adaptability to a prime use case of mapping planetary skylights.
Key results indicate that the use of knowledge distillation can significantly reduce the effort required by domain experts for manual annotation.
This approach has the potential to accelerate extra-terrestrial discovery by automatically detecting and segmenting Martian landforms.
arXiv Detail & Related papers (2023-05-12T16:30:58Z)
- BEVBert: Multimodal Map Pre-training for Language-guided Navigation [75.23388288113817]
We propose a new spatially aware, map-based pre-training paradigm for vision-and-language navigation (VLN).
We build a local metric map to explicitly aggregate incomplete observations and remove duplicates, while modeling navigation dependency in a global topological map.
Based on the hybrid map, we devise a pre-training framework to learn a multimodal map representation, which enhances spatial-aware cross-modal reasoning thereby facilitating the language-guided navigation goal.
arXiv Detail & Related papers (2022-12-08T16:27:54Z)
- Long-HOT: A Modular Hierarchical Approach for Long-Horizon Object Transport [83.06265788137443]
We address key challenges in long-horizon embodied exploration and navigation by proposing a new object transport task and a novel modular framework for temporally extended navigation.
Our first contribution is the design of a novel Long-HOT environment focused on deep exploration and long-horizon planning.
We propose a modular hierarchical transport policy (HTP) that builds a topological graph of the scene to perform exploration with the help of weighted frontiers.
arXiv Detail & Related papers (2022-10-28T05:30:49Z)
- Mixed-domain Training Improves Multi-Mission Terrain Segmentation [0.9566312408744931]
Current Martian terrain segmentation models require retraining for deployment across different domains.
This research proposes a semi-supervised learning approach that leverages unsupervised contrastive pretraining of a backbone for multi-mission semantic segmentation of Martian surfaces.
arXiv Detail & Related papers (2022-09-27T20:25:24Z)
- Adaptive Path Planning for UAVs for Multi-Resolution Semantic Segmentation [28.104584236205405]
A key challenge is planning missions to maximize the value of acquired data in large environments.
This is, for example, relevant for monitoring agricultural fields.
We propose an online planning algorithm which adapts the UAV paths to obtain high-resolution semantic segmentations.
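One way to picture the adaptation idea in this entry: a coverage flight at high altitude yields a coarse segmentation plus a per-cell uncertainty estimate, and the planner schedules low-altitude revisits over the most uncertain cells. The rule below, including the threshold and function name, is an invented toy stand-in; the paper's planner is an online algorithm, not this fixed heuristic.

```python
# Hypothetical sketch of uncertainty-driven revisit selection for a UAV.
# The threshold and priority ordering are illustrative assumptions.
def plan_revisits(uncertainty, threshold=0.5):
    """Return grid cells to revisit at low altitude, most uncertain first."""
    cells = [(u, (i, j))
             for i, row in enumerate(uncertainty)
             for j, u in enumerate(row)
             if u > threshold]
    return [cell for _, cell in sorted(cells, reverse=True)]
```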
arXiv Detail & Related papers (2022-03-03T11:03:28Z)
- Aerial Images Meet Crowdsourced Trajectories: A New Approach to Robust Road Extraction [110.61383502442598]
We introduce a novel neural network framework termed the Cross-Modal Message Propagation Network (CMMPNet).
CMMPNet is composed of two deep Auto-Encoders for modality-specific representation learning and a tailor-designed Dual Enhancement Module for cross-modal representation refinement.
Experiments on three real-world benchmarks demonstrate the effectiveness of our CMMPNet for robust road extraction.
arXiv Detail & Related papers (2021-11-30T04:30:10Z)
- Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning [54.378444600773875]
We introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments.
SFL drives exploration by estimating state-novelty and enables high-level planning by abstracting the state-space as a non-parametric landmark-based graph.
We show in our experiments on MiniGrid and ViZDoom that SFL enables efficient exploration of large, high-dimensional state spaces.
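The non-parametric landmark graph described in this entry can be sketched as follows: a visited state becomes a new landmark only when its feature embedding is far from every existing landmark, and "novelty" is the distance to the nearest landmark. This is a rough illustration of the data structure only; SFL uses learned successor features where this sketch takes raw feature vectors, and the radius is an invented parameter.

```python
# Toy landmark set illustrating novelty-driven, non-parametric state
# abstraction. A stand-in for the idea, not the SFL implementation.
import numpy as np

class LandmarkGraph:
    def __init__(self, radius=1.0):
        self.radius = radius      # minimum feature distance between landmarks
        self.landmarks = []       # stored feature vectors

    def novelty(self, feat):
        # Distance to the nearest existing landmark (inf if none yet).
        if not self.landmarks:
            return float("inf")
        return min(np.linalg.norm(feat - l) for l in self.landmarks)

    def observe(self, feat):
        # Add a landmark only for sufficiently novel states.
        if self.novelty(feat) > self.radius:
            self.landmarks.append(np.asarray(feat, dtype=float))
            return True
        return False
```

The appeal of the non-parametric form is that the abstraction grows only where the agent actually goes: densely explored regions collapse onto a few landmarks, keeping high-level planning over the landmark set cheap even in large state spaces.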
arXiv Detail & Related papers (2021-11-18T18:36:05Z)
- Batch Exploration with Examples for Scalable Robotic Reinforcement Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state-space guided by a modest number of human provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
arXiv Detail & Related papers (2020-10-22T17:49:25Z)
- Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
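The input/output shape of the anticipation task can be shown with a deliberately simple stand-in: given a partial top-down occupancy grid (0 = free, 1 = obstacle, -1 = unknown), label unknown cells that border observed free space as anticipated-free. The real system learns this mapping from egocentric RGB-D context; the hand-coded neighbour rule below only illustrates what "inferring occupancy beyond the visible regions" means operationally.

```python
# Toy occupancy anticipation: 0 = free, 1 = obstacle, -1 = unknown.
# A hand-coded stand-in for the learned model described above.
import numpy as np

def anticipate(grid):
    out = grid.copy()
    h, w = grid.shape
    for i in range(h):
        for j in range(w):
            if grid[i, j] != -1:
                continue  # already observed
            neighbours = [grid[i + di, j + dj]
                          for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                          if 0 <= i + di < h and 0 <= j + dj < w]
            if 0 in neighbours:   # borders observed free space
                out[i, j] = 0     # anticipate as free
    return out
```

An exploration policy built on such an anticipated map can claim credit for cells beyond the sensor frontier, which is why anticipation reduces the distance an agent must travel to cover an environment.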
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.