Multi-Agent 3D Map Reconstruction and Change Detection in Microgravity with Free-Flying Robots
- URL: http://arxiv.org/abs/2311.02558v4
- Date: Sat, 14 Sep 2024 14:46:11 GMT
- Title: Multi-Agent 3D Map Reconstruction and Change Detection in Microgravity with Free-Flying Robots
- Authors: Holly Dinkel, Julia Di, Jamie Santos, Keenan Albee, Paulo Borges, Marina Moreira, Oleg Alexandrov, Brian Coltin, Trey Smith
- Abstract summary: This work presents a framework for multi-agent cooperative mapping and change detection to enable robotic maintenance of space outposts.
One agent is used to reconstruct a 3D model of the environment from sequences of images and corresponding depth information.
Another agent is used to periodically scan the environment for inconsistencies against the 3D model.
- Score: 4.42851967323783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Assistive free-flyer robots autonomously caring for future crewed outposts -- such as NASA's Astrobee robots on the International Space Station (ISS) -- must be able to detect day-to-day interior changes to track inventory, detect and diagnose faults, and monitor the outpost status. This work presents a framework for multi-agent cooperative mapping and change detection to enable robotic maintenance of space outposts. One agent is used to reconstruct a 3D model of the environment from sequences of images and corresponding depth information. Another agent is used to periodically scan the environment for inconsistencies against the 3D model. Change detection is validated after completing the surveys using real image and pose data collected by Astrobee robots in a ground testing environment and from microgravity aboard the ISS. This work outlines the objectives, requirements, and algorithmic modules for the multi-agent reconstruction system, including recommendations for its use by assistive free-flyers aboard future microgravity outposts.
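The paper describes the framework at the level of objectives and modules; as a hedged illustration of the first agent's mapping step, the sketch below fuses a short sequence of posed RGB-D frames into a TSDF volume with Open3D. The file names, intrinsics, and poses are placeholders, and this is a generic reconstruction recipe, not the authors' implementation.

```python
import numpy as np
import open3d as o3d

# Placeholder inputs: RGB/depth image paths and 4x4 camera-to-world poses
# (e.g. from the robot's localization). None of these names come from the paper.
color_paths = ["color_000.png", "color_001.png"]
depth_paths = ["depth_000.png", "depth_001.png"]
poses = [np.eye(4), np.eye(4)]

intrinsic = o3d.camera.PinholeCameraIntrinsic(
    width=640, height=480, fx=525.0, fy=525.0, cx=319.5, cy=239.5)

# A TSDF volume fuses many depth observations into one implicit surface.
volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.01,  # 1 cm voxels
    sdf_trunc=0.04,     # truncation distance in meters
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

for color_path, depth_path, pose in zip(color_paths, depth_paths, poses):
    color = o3d.io.read_image(color_path)
    depth = o3d.io.read_image(depth_path)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_scale=1000.0, depth_trunc=4.0,
        convert_rgb_to_intensity=False)
    # Open3D's integrate() expects world-to-camera extrinsics, hence the inverse.
    volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

# The extracted mesh is the reference model the second agent scans against.
mesh = volume.extract_triangle_mesh()
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("reference_model.ply", mesh)
```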
Related papers
- Opening the Black Box of 3D Reconstruction Error Analysis with VECTOR [8.142689309891368]
VECTOR is a visual analysis tool that improves error inspection for stereo reconstruction.
VECTOR was developed in partnership with the Perseverance Mars Rover and Ingenuity Mars Helicopter terrain reconstruction team at the NASA Jet Propulsion Laboratory.
We report on how this tool was used to debug and improve terrain reconstruction for the Mars 2020 mission.
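VECTOR itself is an interactive tool; as a minimal, hedged stand-in for the kind of error inspection it supports, the snippet below summarizes per-pixel disagreement between a reconstructed depth map and a reference one. The function and its statistics are illustrative, not VECTOR's data model.

```python
import numpy as np

def depth_error_stats(reconstructed: np.ndarray, reference: np.ndarray) -> dict:
    """Summarize per-pixel error between two depth maps in meters.

    Zero/NaN pixels are treated as invalid and masked out.
    """
    valid = ((reconstructed > 0) & (reference > 0)
             & np.isfinite(reconstructed) & np.isfinite(reference))
    error = reconstructed[valid] - reference[valid]
    return {
        "mean": float(error.mean()),
        "rmse": float(np.sqrt(np.mean(error ** 2))),
        "p95_abs": float(np.percentile(np.abs(error), 95)),
        "valid_fraction": float(valid.mean()),
    }
```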
arXiv Detail & Related papers (2024-08-07T02:03:32Z)
- SatSplatYOLO: 3D Gaussian Splatting-based Virtual Object Detection Ensembles for Satellite Feature Recognition [0.0]
We present an approach for mapping geometries and high-confidence detection of components of unknown, non-cooperative satellites on orbit.
We implement accelerated 3D Gaussian splatting to learn a 3D representation of the satellite, render virtual views of the target, and ensemble the YOLOv5 object detector over the virtual views.
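The actual pipeline renders splat-based virtual views and runs YOLOv5 over them; the toy sketch below only illustrates the ensembling step, keeping component classes that are detected confidently in enough views. The detections are hard-coded placeholders.

```python
from collections import Counter

# Hypothetical per-view detections as (class_name, confidence) pairs; in the
# paper these would come from YOLOv5 run on Gaussian-splat renders.
detections_per_view = [
    [("solar_panel", 0.91), ("antenna", 0.55)],
    [("solar_panel", 0.88)],
    [("solar_panel", 0.93), ("antenna", 0.61), ("thruster", 0.30)],
]

def ensemble(detections_per_view, conf_thresh=0.5, min_views=2):
    """Keep classes detected above conf_thresh in at least min_views views."""
    votes = Counter()
    for view in detections_per_view:
        for name, conf in view:
            if conf >= conf_thresh:
                votes[name] += 1
    return {name for name, n in votes.items() if n >= min_views}

print(ensemble(detections_per_view))  # -> {'solar_panel', 'antenna'} (set order may vary)
```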
arXiv Detail & Related papers (2024-06-04T17:54:20Z)
- Unsupervised Change Detection for Space Habitats Using 3D Point Clouds [4.642898625014145]
This work presents an algorithm for scene change detection from point clouds to enable autonomous robotic caretaking in future space habitats.
The algorithm is validated quantitatively and qualitatively using a test dataset collected by an Astrobee robot in the NASA Ames Granite Lab.
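The paper's algorithm has more structure than this, but a minimal nearest-neighbor baseline for the same task, assuming the new scan is already registered to the reference map's frame, looks like:

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_changes(reference: np.ndarray, scan: np.ndarray, tol: float = 0.05):
    """Flag scan points farther than `tol` meters from the reference cloud.

    reference, scan: (N, 3) arrays of XYZ points in a common frame.
    Returns the subset of `scan` considered changed. This is a simple
    nearest-neighbor baseline, not the paper's full algorithm.
    """
    tree = cKDTree(reference)
    dists, _ = tree.query(scan, k=1)
    return scan[dists > tol]

# Toy example: a flat reference patch plus a scan containing one new point.
reference = np.array([[x * 0.01, y * 0.01, 0.0] for x in range(50) for y in range(50)])
scan = np.vstack([reference[:100], [[0.2, 0.2, 0.3]]])
print(detect_changes(reference, scan))  # -> [[0.2 0.2 0.3]]
```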
arXiv Detail & Related papers (2023-12-04T23:26:12Z)
- Care3D: An Active 3D Object Detection Dataset of Real Robotic-Care Environments [52.425280825457385]
This paper introduces an annotated dataset of real environments.
The captured environments represent areas already in use in robotic health-care research.
We also provide ground truth data within one room, for assessing SLAM algorithms running directly on a health care robot.
arXiv Detail & Related papers (2023-10-09T10:35:37Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust degrade the performance of any mobile robotic platform that relies on onboard perception systems.
This paper proposes a novel modular filtration pipeline based on intensity and spatial information.
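The proposed pipeline is modular and tuned for real sensors; the sketch below only captures the underlying intuition, under the common assumption that smoke returns have low intensity and low local point density. The thresholds are arbitrary placeholders, not the paper's values.

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_smoke(points: np.ndarray, intensity: np.ndarray,
                 min_intensity: float = 0.15, radius: float = 0.3,
                 min_neighbors: int = 5) -> np.ndarray:
    """Drop LiDAR returns that look like airborne particulates.

    Assumption (not the paper's exact criteria): smoke returns are both
    low-intensity and spatially sparse. points is (N, 3), intensity is (N,).
    A point is kept if it is bright enough or sits in a dense neighborhood.
    """
    tree = cKDTree(points)
    neighbor_counts = np.array(
        [len(tree.query_ball_point(p, radius)) - 1 for p in points])
    keep = (intensity >= min_intensity) | (neighbor_counts >= min_neighbors)
    return points[keep]
```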
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- USTC FLICAR: A Sensors Fusion Dataset of LiDAR-Inertial-Camera for Heavy-duty Autonomous Aerial Work Robots [13.089952067224138]
We present the USTC FLICAR dataset, which is dedicated to the development of simultaneous localization and mapping (SLAM).
The proposed dataset extends the typical autonomous driving sensing suite to aerial scenes.
Based on the Segment Anything Model (SAM), we produce the Semantic FLICAR dataset, which provides fine-grained semantic segmentation annotations.
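A minimal sketch of generating masks with SAM's automatic mask generator, assuming the `segment_anything` package and a downloaded ViT-H checkpoint; the dataset's actual annotation pipeline is certainly more involved than this.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Checkpoint path is a placeholder; the weights must be downloaded separately.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# Image path is a placeholder frame; SAM expects RGB input.
image = cv2.cvtColor(cv2.imread("frame_000.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts with 'segmentation', 'area', ...
print(f"{len(masks)} masks generated")
```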
arXiv Detail & Related papers (2023-04-04T17:45:06Z)
- Aerial Monocular 3D Object Detection [46.26215100532241]
This work proposes a dual-view detection system named DVDET to achieve aerial monocular object detection in both the 2D image space and the 3D physical space.
To address the dataset challenge, we propose a new large-scale simulation dataset named AM3D-Sim, generated by the co-simulation of AirSim and CARLA, and a new real-world aerial dataset named AM3D-Real, collected by DJI Matrice 300 RTK.
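DVDET's geometry module is its own contribution; as a generic illustration of lifting an aerial 2D detection into physical space, the snippet below back-projects a detection center to the ground plane, assuming a nadir-pointing pinhole camera at known altitude over flat ground (a simplification, not the paper's formulation).

```python
import numpy as np

def pixel_to_ground(u: float, v: float, altitude: float,
                    fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a pixel to the ground plane for a nadir-pointing camera.

    The camera looks straight down, so every ground point sits at depth
    Z = altitude along the optical axis; standard pinhole back-projection
    then gives the lateral offsets.
    """
    x = (u - cx) * altitude / fx
    y = (v - cy) * altitude / fy
    return np.array([x, y, altitude])  # point in the camera frame

# Toy usage: detection centered at pixel (400, 300), drone at 50 m altitude.
print(pixel_to_ground(400, 300, altitude=50.0, fx=800.0, fy=800.0, cx=320.0, cy=240.0))
```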
arXiv Detail & Related papers (2022-08-08T08:32:56Z)
- Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems [92.26462290867963]
Kimera-Multi is the first multi-robot system that is robust and capable of identifying and rejecting incorrect inter- and intra-robot loop closures.
We demonstrate Kimera-Multi in photo-realistic simulations, SLAM benchmarking datasets, and challenging outdoor datasets collected using ground robots.
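Kimera-Multi's rejection step builds on pairwise consistency checks over many measurements; the 2D toy below shows only the core test such checks build on: composing odometry around a loop with a proposed closure should land near the identity, and a large residual marks the closure as an outlier.

```python
import numpy as np

def se2(x: float, y: float, theta: float) -> np.ndarray:
    """Homogeneous 2D rigid transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def cycle_residual(odometry: list, closure: np.ndarray) -> float:
    """Translation residual of the odometry chain composed with closure^-1.

    For a correct loop closure this composition is near identity; a large
    residual flags an outlier. A toy 2D stand-in for the consistency tests
    used in systems like Kimera-Multi, not its actual algorithm.
    """
    chain = np.eye(3)
    for T in odometry:
        chain = chain @ T
    err = chain @ np.linalg.inv(closure)
    return float(np.hypot(err[0, 2], err[1, 2]))

# Odometry around a 1 m square; the loop closure claims we are back at the start.
odom = [se2(1, 0, np.pi / 2)] * 4          # four quarter-turns of 1 m each
good_closure = np.eye(3)                   # consistent: residual ~ 0
bad_closure = se2(2.0, 0.0, 0.0)           # inconsistent: residual ~ 2 m
print(cycle_residual(odom, good_closure), cycle_residual(odom, bad_closure))
```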
arXiv Detail & Related papers (2021-06-28T03:56:40Z)
- Domain and Modality Gaps for LiDAR-based Person Detection on Mobile Robots [91.01747068273666]
This paper studies existing LiDAR-based person detectors with a particular focus on mobile robot scenarios.
Experiments revolve around the domain gap between driving and mobile robot scenarios, as well as the modality gap between 3D and 2D LiDAR sensors.
Results provide practical insights into LiDAR-based person detection and facilitate informed decisions for relevant mobile robot designs and applications.
arXiv Detail & Related papers (2021-06-21T16:35:49Z)
- LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE will provide the research community with a means for fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)