Federated Multi-Agent Mapping for Planetary Exploration
- URL: http://arxiv.org/abs/2404.02289v2
- Date: Sun, 29 Sep 2024 12:50:46 GMT
- Title: Federated Multi-Agent Mapping for Planetary Exploration
- Authors: Tiberiu-Ioan Szatmari, Abhishek Cauligi
- Abstract summary: We propose an approach to jointly train a centralized map model across agents without the need to share raw data.
Our approach leverages implicit neural mapping to generate parsimonious and adaptable representations.
We demonstrate the efficacy of our proposed federated mapping approach using Martian terrains and glacier datasets.
- Score: 0.4143603294943439
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-agent robotic exploration stands to play an important role in space exploration as the next generation of spacecraft robotic systems venture to more extreme and far-flung environments. A key challenge in this new paradigm will be to effectively share and utilize the vast amount of data generated on-board while operating in bandwidth-constrained regimes such as those often found in space missions. Federated learning (FL) is a promising tool for bridging this gap for a host of tasks studied across proposed mission concepts. Drawing inspiration from the upcoming CADRE Lunar rover mission, we study the task of federated multi-agent mapping and propose an approach to jointly train a centralized map model across agents without the need to share raw data. Our approach leverages implicit neural mapping to generate parsimonious and adaptable representations. We further enhance this approach with meta-initialization on Earth datasets, pre-training the network to quickly adapt to extreme and rugged terrain. We demonstrate the efficacy of our proposed federated mapping approach using Martian terrains and glacier datasets and show how it outperforms benchmarks on map reconstruction losses as well as downstream path planning tasks.
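The sketch below is a rough, hedged illustration of the training scheme the abstract describes: each agent fits a small coordinate-MLP implicit map to its own observations, and only the resulting weights are averaged FedAvg-style, so no raw data leaves an agent. The network size, number of local steps, optimizer, and synthetic elevation data are assumptions made for illustration, not details taken from the paper; the meta-learned initialization on Earth datasets is only indicated by a comment.

```python
# Minimal sketch: federated training of an implicit neural map (FedAvg-style).
# The MLP architecture, local-step count, and synthetic terrain data are
# illustrative assumptions, not details taken from the paper.
import copy
import torch
import torch.nn as nn

class ImplicitMap(nn.Module):
    """Coordinate MLP: (x, y) -> predicted elevation."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xy):
        return self.net(xy)

def local_update(global_model, coords, elev, local_steps=20, lr=1e-3):
    """Each agent fine-tunes a copy of the global map on its own observations."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(local_steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(coords), elev)
        loss.backward()
        opt.step()
    return model.state_dict()

def federated_round(global_model, agent_data):
    """Average the agents' locally updated weights (no raw data is shared)."""
    updates = [local_update(global_model, c, e) for c, e in agent_data]
    avg = {k: torch.stack([u[k] for u in updates]).mean(dim=0) for k in updates[0]}
    global_model.load_state_dict(avg)
    return global_model

if __name__ == "__main__":
    torch.manual_seed(0)
    # Two agents, each observing a different patch of synthetic terrain.
    agent_data = []
    for offset in (0.0, 1.0):
        coords = torch.rand(256, 2) + offset
        elev = torch.sin(coords.sum(dim=1, keepdim=True))  # toy elevation field
        agent_data.append((coords, elev))

    global_map = ImplicitMap()  # could start from a meta-learned initialization
    for _ in range(5):
        global_map = federated_round(global_map, agent_data)
    print("trained federated map on", len(agent_data), "agents")
```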
Related papers
- Few-shot Scooping Under Domain Shift via Simulated Maximal Deployment Gaps [25.102403059931184]
This paper studies the few-shot scooping problem and proposes a vision-based adaptive scooping strategy.
We train a deep kernel model to adapt to large domain shifts by creating simulated deployment gaps from an offline training dataset.
The proposed method also demonstrates zero-shot transfer capability, successfully adapting to the NASA OWLAT platform.
arXiv Detail & Related papers (2024-08-06T04:25:09Z) - Diffusion-Reinforcement Learning Hierarchical Motion Planning in Adversarial Multi-agent Games [6.532258098619471]
We focus on a motion planning task for an evasive target in partially observable multi-agent adversarial pursuit-evasion games (PEGs).
These pursuit-evasion problems are relevant to various applications, such as search and rescue operations and surveillance robots.
We propose a hierarchical architecture that integrates a high-level diffusion model to plan global paths responsive to environment data.
arXiv Detail & Related papers (2024-03-16T03:53:55Z) - Active Neural Topological Mapping for Multi-Agent Exploration [24.91397816926568]
The multi-agent cooperative exploration problem requires multiple agents to explore an unseen environment via sensory signals in a limited time.
Topological maps are a promising alternative as they consist only of nodes and edges with abstract but essential information.
Deep reinforcement learning has shown great potential for learning (near) optimal policies through fast end-to-end inference.
We propose Multi-Agent Neural Topological Mapping (MANTM) to improve exploration efficiency and generalization for multi-agent exploration tasks.
arXiv Detail & Related papers (2023-11-01T03:06:14Z) - NNPP: A Learning-Based Heuristic Model for Accelerating Optimal Path Planning on Uneven Terrain [5.337162499594818]
We propose the NNPP model for predicting the region likely to contain the optimal path, enabling foundation algorithms like A* to find the optimal path solely within this reduced search space.
The NNPP model learns information about start and goal locations, as well as map representations, from numerous pre-annotated optimal path demonstrations.
It is able to accelerate path planning on novel maps.
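The mechanism summarized above, restricting a classical planner to a learned candidate region, can be sketched as follows. The grid world, the Manhattan heuristic, and the hand-coded mask standing in for the NNPP model's predicted region are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of region-restricted A*: a predicted mask (hand-coded here as a
# stand-in for the NNPP model's output) limits which grid cells A* may expand.
import heapq
import numpy as np

def astar_in_region(grid, mask, start, goal):
    """A* over a 4-connected grid, expanding only cells where mask is True."""
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:          # already expanded via a better route
            continue
        came_from[node] = parent
        if node == goal:               # reconstruct the path back to start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if not (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]):
                continue
            if grid[nxt] or not mask[nxt]:   # blocked, or outside predicted region
                continue
            ng = g + 1
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt), ng, nxt, node))
    return None   # no path found inside the predicted region

if __name__ == "__main__":
    grid = np.zeros((20, 20), dtype=bool)            # False = traversable cell
    grid[10, 2:18] = True                            # wall with gaps near the edges
    mask = np.zeros_like(grid)
    mask[:, :4] = True                               # stand-in "predicted" corridor
    path = astar_in_region(grid, mask, (0, 0), (19, 0))
    print("path length:", len(path) if path else None)
```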
arXiv Detail & Related papers (2023-08-09T08:31:05Z) - Scaling Data Generation in Vision-and-Language Navigation [116.95534559103788]
We propose an effective paradigm for generating large-scale data for learning.
We apply 1200+ photo-realistic environments from the HM3D and Gibson datasets and synthesize 4.9 million instruction-trajectory pairs.
Thanks to our large-scale dataset, simple imitation learning pushes the performance of an existing agent to a new best of 80% single-run success rate on the R2R test split (+11% absolute over the previous SoTA).
arXiv Detail & Related papers (2023-07-28T16:03:28Z) - Knowledge distillation with Segment Anything (SAM) model for Planetary Geological Mapping [0.7266531288894184]
We show the effectiveness of a prompt-based foundation model for rapid annotation and quick adaptability to a prime use case of mapping planetary skylights.
Key results indicate that the use of knowledge distillation can significantly reduce the effort required by domain experts for manual annotation.
This approach has the potential to accelerate extra-terrestrial discovery by automatically detecting and segmenting Martian landforms.
arXiv Detail & Related papers (2023-05-12T16:30:58Z) - BEVBert: Multimodal Map Pre-training for Language-guided Navigation [75.23388288113817]
We propose a new spatial-aware, map-based pre-training paradigm for vision-and-language navigation (VLN).
We build a local metric map to explicitly aggregate incomplete observations and remove duplicates, while modeling navigation dependency in a global topological map.
Based on the hybrid map, we devise a pre-training framework to learn a multimodal map representation, which enhances spatial-aware cross-modal reasoning and thereby facilitates the language-guided navigation goal.
arXiv Detail & Related papers (2022-12-08T16:27:54Z) - Long-HOT: A Modular Hierarchical Approach for Long-Horizon Object Transport [83.06265788137443]
We address key challenges in long-horizon embodied exploration and navigation by proposing a new object transport task and a novel modular framework for temporally extended navigation.
Our first contribution is the design of a novel Long-HOT environment focused on deep exploration and long-horizon planning.
We propose a modular hierarchical transport policy (HTP) that builds a topological graph of the scene to perform exploration with the help of weighted frontiers.
arXiv Detail & Related papers (2022-10-28T05:30:49Z) - Mixed-domain Training Improves Multi-Mission Terrain Segmentation [0.9566312408744931]
Current Martian terrain segmentation models require retraining for deployment across different domains.
This research proposes a semi-supervised learning approach that leverages unsupervised contrastive pretraining of a backbone for multi-mission semantic segmentation of Martian surfaces.
arXiv Detail & Related papers (2022-09-27T20:25:24Z) - PreTraM: Self-Supervised Pre-training via Connecting Trajectory and Map [58.53373202647576]
We propose PreTraM, a self-supervised pre-training scheme for trajectory forecasting.
It consists of two parts: 1) Trajectory-Map Contrastive Learning, where we project trajectories and maps to a shared embedding space with cross-modal contrastive learning, and 2) Map Contrastive Learning, where we enhance map representation with contrastive learning on large quantities of HD-maps.
On top of popular baselines such as AgentFormer and Trajectron++, PreTraM boosts their performance by 5.5% and 6.9% relatively in FDE-10 on the challenging nuScenes dataset.
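A minimal sketch of the first component, cross-modal contrastive learning between trajectories and map patches, is given below. The tiny encoders, temperature, and random tensors are illustrative assumptions; PreTraM's actual encoders, augmentations, and map-only contrastive branch are not reproduced here.

```python
# Minimal sketch of the cross-modal contrastive objective: trajectories and map
# patches are projected into a shared space and matched with an InfoNCE loss.
# The tiny encoders and random tensors are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajEncoder(nn.Module):
    def __init__(self, steps=12, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(steps * 2, 128),
                                 nn.ReLU(), nn.Linear(128, dim))
    def forward(self, traj):          # traj: (B, steps, 2) of x/y waypoints
        return F.normalize(self.net(traj), dim=-1)

class MapEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(8, dim))
    def forward(self, raster):        # raster: (B, 1, H, W) rasterized map patch
        return F.normalize(self.net(raster), dim=-1)

def trajectory_map_contrastive(z_traj, z_map, temperature=0.07):
    """Symmetric InfoNCE: the i-th trajectory should match the i-th map patch."""
    logits = z_traj @ z_map.t() / temperature
    targets = torch.arange(z_traj.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    torch.manual_seed(0)
    traj = torch.randn(16, 12, 2)          # a batch of agent trajectories
    maps = torch.randn(16, 1, 32, 32)      # matching rasterized map crops
    loss = trajectory_map_contrastive(TrajEncoder()(traj), MapEncoder()(maps))
    print("contrastive loss:", float(loss))
```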
arXiv Detail & Related papers (2022-04-21T23:01:21Z) - Adaptive Path Planning for UAVs for Multi-Resolution Semantic Segmentation [28.104584236205405]
A key challenge is planning missions to maximize the value of acquired data in large environments.
This is, for example, relevant for monitoring agricultural fields.
We propose an online planning algorithm which adapts the UAV paths to obtain high-resolution semantic segmentations.
arXiv Detail & Related papers (2022-03-03T11:03:28Z) - AirDet: Few-Shot Detection without Fine-tuning for Autonomous Exploration [16.032316550612336]
We present AirDet, which is free of fine-tuning by learning class relations with support images.
AirDet achieves comparable or even better results than exhaustively fine-tuned methods, reaching up to 40-60% improvement over the baseline.
We present evaluation results on real-world exploration tests from the DARPA Subterranean Challenge.
arXiv Detail & Related papers (2021-12-03T06:41:07Z) - Aerial Images Meet Crowdsourced Trajectories: A New Approach to Robust Road Extraction [110.61383502442598]
We introduce a novel neural network framework termed the Cross-Modal Message Propagation Network (CMMPNet).
CMMPNet is composed of two deep Auto-Encoders for modality-specific representation learning and a tailor-designed Dual Enhancement Module for cross-modal representation refinement.
Experiments on three real-world benchmarks demonstrate the effectiveness of our CMMPNet for robust road extraction.
arXiv Detail & Related papers (2021-11-30T04:30:10Z) - Successor Feature Landmarks for Long-Horizon Goal-Conditioned Reinforcement Learning [54.378444600773875]
We introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments.
SFL drives exploration by estimating state-novelty and enables high-level planning by abstracting the state-space as a non-parametric landmark-based graph.
We show in our experiments on MiniGrid and ViZDoom that SFL enables efficient exploration of large, high-dimensional state spaces.
arXiv Detail & Related papers (2021-11-18T18:36:05Z) - Deep Learning Aided Packet Routing in Aeronautical Ad-Hoc Networks Relying on Real Flight Data: From Single-Objective to Near-Pareto Multi-Objective Optimization [79.96177511319713]
We invoke deep learning (DL) to assist routing in aeronautical ad-hoc networks (AANETs).
A deep neural network (DNN) is conceived for mapping the local geographic information observed by the forwarding node into the information required for determining the optimal next hop.
We extend the DL-aided routing algorithm to a multi-objective scenario, where we aim for simultaneously minimizing the delay, maximizing the path capacity, and maximizing the path lifetime.
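One way to read this, sketched below under stated assumptions, is a learned scorer over candidate next hops combined with a scalarized multi-objective criterion. The feature layout, network size, and objective weights are illustrative assumptions, not the paper's design.

```python
# Minimal sketch of DNN-assisted next-hop selection with a scalarized
# multi-objective score (delay, capacity, lifetime). The feature layout,
# network size, and weights are illustrative assumptions.
import torch
import torch.nn as nn

class NextHopScorer(nn.Module):
    """Maps local geographic features of a candidate neighbour to three
    per-objective estimates: (delay, capacity, lifetime)."""
    def __init__(self, feat_dim=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, 3))
    def forward(self, feats):
        return self.net(feats)

def choose_next_hop(scorer, candidate_feats, weights=(1.0, 1.0, 1.0)):
    """Scalarize the objectives: minimize delay, maximize capacity and lifetime."""
    with torch.no_grad():
        delay, capacity, lifetime = scorer(candidate_feats).unbind(dim=-1)
        w_d, w_c, w_l = weights
        score = -w_d * delay + w_c * capacity + w_l * lifetime
        return int(score.argmax())

if __name__ == "__main__":
    torch.manual_seed(0)
    scorer = NextHopScorer()
    # Features per candidate neighbour, e.g. relative position, speed, heading.
    candidates = torch.randn(5, 6)
    print("selected next hop index:", choose_next_hop(scorer, candidates))
```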
arXiv Detail & Related papers (2021-10-28T14:18:22Z) - Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z) - Batch Exploration with Examples for Scalable Robotic Reinforcement Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state space guided by a modest number of human-provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
arXiv Detail & Related papers (2020-10-22T17:49:25Z) - Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps, our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
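A minimal sketch of the anticipation step, predicting occupancy beyond the visible region from a partial egocentric map, is given below. The toy encoder-decoder, the two-channel input, and the random data are illustrative assumptions; the actual model also conditions on egocentric RGB-D features.

```python
# Minimal sketch of occupancy anticipation: a small encoder-decoder takes a
# partial egocentric occupancy map and predicts occupancy beyond the visible
# region. The toy architecture and random data are illustrative assumptions.
import torch
import torch.nn as nn

class OccupancyAnticipator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),   # (2, 64, 64) -> (16, 32, 32)
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> (32, 16, 16)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # -> (16, 32, 32)
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1),              # -> (2, 64, 64)
        )
    def forward(self, partial_map):
        """partial_map: (B, 2, H, W) with channels [occupied, explored]."""
        return self.decoder(self.encoder(partial_map))  # logits for the full map

if __name__ == "__main__":
    torch.manual_seed(0)
    model = OccupancyAnticipator()
    partial = torch.rand(4, 2, 64, 64)                 # visible-region occupancy estimate
    target = (torch.rand(4, 2, 64, 64) > 0.5).float()  # toy ground-truth map
    loss = nn.functional.binary_cross_entropy_with_logits(model(partial), target)
    print("anticipation loss:", float(loss))
```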
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.