SPIN: Spacecraft Imagery for Navigation
- URL: http://arxiv.org/abs/2406.07500v3
- Date: Thu, 05 Dec 2024 10:23:27 GMT
- Title: SPIN: Spacecraft Imagery for Navigation
- Authors: Javier Montalvo, Juan Ignacio Bravo Pérez-Villar, Álvaro García-Martín, Pablo Carballeira, Jesús Bescós
- Abstract summary: The scarcity of data acquired under actual space operational conditions poses a significant challenge for developing learning-based visual navigation algorithms.
We present SPIN, an open-source tool designed to support a wide range of visual navigation scenarios in space.
SPIN provides multiple modalities of ground-truth data and allows researchers to employ custom 3D models of satellites.
- Score: 10.306879210363512
- Abstract: The scarcity of data acquired under actual space operational conditions poses a significant challenge for developing learning-based visual navigation algorithms crucial for autonomous spacecraft navigation. This data shortage is primarily due to the prohibitive costs and inherent complexities of space operations. While existing datasets, predominantly relying on computer-simulated data, have partially addressed this gap, they present notable limitations. Firstly, these datasets often utilize proprietary image generation tools, restricting the evaluation of navigation methods in novel, unseen scenarios. Secondly, they provide limited ground-truth data, typically focusing solely on the spacecraft's translation and rotation relative to the camera. To address these limitations, we present SPIN (SPacecraft Imagery for Navigation), an open-source spacecraft image generation tool designed to support a wide range of visual navigation scenarios in space, with a particular focus on relative navigation tasks. SPIN provides multiple modalities of ground-truth data and allows researchers to employ custom 3D models of satellites, define specific camera-relative poses, and adjust settings such as camera parameters or environmental illumination conditions. We also propose a method for exploiting our tool as a data augmentation module. We validate our tool on the spacecraft pose estimation task by training with a SPIN-generated replica of SPEED+, reaching a 47% average error reduction on SPEED+ testbed data (which simulates realistic space conditions), a figure that improves to a 60% error reduction when SPIN is used as a data augmentation method. Both the SPIN tool (including source code) and our SPIN-generated version of SPEED+ will be publicly released on GitHub upon paper acceptance: https://github.com/vpulab/SPIN
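As a concrete illustration of the workflow the abstract describes, the sketch below samples camera-relative poses of the kind a renderer such as SPIN could consume. Only the pose sampling is generic, runnable code; the `spin` import and `render` call at the end are hypothetical placeholders, not SPIN's documented interface.

```python
# Hypothetical driver for a SPIN-like image generator. The `spin` module and
# its API are assumptions for illustration; only the pose sampling is generic.
import json
import numpy as np

def random_unit_quaternion(rng: np.random.Generator) -> np.ndarray:
    """Uniformly sample a rotation as a unit quaternion (Shoemake's method)."""
    u1, u2, u3 = rng.random(3)
    return np.array([
        np.sqrt(1 - u1) * np.sin(2 * np.pi * u2),
        np.sqrt(1 - u1) * np.cos(2 * np.pi * u2),
        np.sqrt(u1) * np.sin(2 * np.pi * u3),
        np.sqrt(u1) * np.cos(2 * np.pi * u3),
    ])

def sample_poses(n: int, min_range_m: float = 3.0, max_range_m: float = 30.0,
                 seed: int = 0) -> list:
    """Camera-relative poses: random attitude plus range along +Z with lateral jitter."""
    rng = np.random.default_rng(seed)
    poses = []
    for _ in range(n):
        z = rng.uniform(min_range_m, max_range_m)
        xy = rng.uniform(-0.2 * z, 0.2 * z, size=2)  # keep the target near the field of view
        poses.append({
            "q_target2cam": random_unit_quaternion(rng).tolist(),
            "t_cam_m": [float(xy[0]), float(xy[1]), float(z)],
        })
    return poses

if __name__ == "__main__":
    with open("poses.json", "w") as f:
        json.dump(sample_poses(1000), f)
    # Rendering step: hypothetical call, shown only as a sketch.
    # import spin
    # spin.render(model="my_satellite.glb", poses="poses.json",
    #             outputs=["rgb", "depth", "segmentation"], out_dir="renders/")
```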
Related papers
- The Devil is in the Details: Simple Remedies for Image-to-LiDAR Representation Learning [21.088879084249328]
We focus on overlooked design choices along the spatial and temporal axes.
We find that fundamental design elements, e.g., the LiDAR coordinate system and quantization according to the existing input interface, are more critical than developing loss functions; a generic quantization sketch follows this entry.
arXiv Detail & Related papers (2025-01-16T11:44:29Z)
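A minimal, generic sketch of the quantization idea mentioned in the entry above: snapping LiDAR points to a fixed voxel grid before they enter a network. The voxel size and point-cloud range are illustrative values, not the paper's settings.

```python
# Quantize an (N, 3) LiDAR point cloud to integer voxel coordinates.
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.1,
             pc_range=(-50.0, -50.0, -5.0, 50.0, 50.0, 3.0)) -> np.ndarray:
    """Map XYZ points to unique integer voxel coordinates on a fixed grid."""
    lo, hi = np.array(pc_range[:3]), np.array(pc_range[3:])
    mask = np.all((points >= lo) & (points < hi), axis=1)  # drop out-of-range points
    coords = np.floor((points[mask] - lo) / voxel_size).astype(np.int64)
    return np.unique(coords, axis=0)  # one entry per occupied cell

points = np.random.uniform(-60.0, 60.0, size=(2048, 3))
print(voxelize(points).shape)
```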
- Mixing Data-driven and Geometric Models for Satellite Docking Port State Estimation using an RGB or Event Camera [4.9788231201543]
This work focuses on satellite-agnostic operations using the recently released Lockheed Martin Mission Augmentation Port (LM-MAP) as the target.
We present a pipeline for automated satellite docking port detection and state estimation using monocular vision data from a standard RGB sensor or an event camera; a generic detect-then-solve sketch follows this entry.
arXiv Detail & Related papers (2024-09-23T22:28:09Z)
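The entry above mixes a learned detector with geometric models; below is a generic sketch of that detect-then-solve pattern using OpenCV's PnP solver. The port keypoint layout, camera intrinsics, and detector output are placeholder values, not the LM-MAP specification.

```python
# Detect-then-PnP: a network supplies 2D keypoints, geometry recovers the pose.
import cv2
import numpy as np

# Known 3D keypoints on the docking port, in the port frame (metres; illustrative).
PORT_POINTS_3D = np.array([[0.10, 0.10, 0.0], [-0.10, 0.10, 0.0],
                           [-0.10, -0.10, 0.0], [0.10, -0.10, 0.0]])

K = np.array([[800.0, 0.0, 320.0],  # pinhole intrinsics (illustrative)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def estimate_port_pose(keypoints_2d: np.ndarray):
    """Solve PnP from detected pixel keypoints; returns rotation matrix and translation."""
    ok, rvec, tvec = cv2.solvePnP(PORT_POINTS_3D, keypoints_2d, K, None,
                                  flags=cv2.SOLVEPNP_IPPE)  # suited to planar targets
    if not ok:
        raise RuntimeError("PnP failed")
    return cv2.Rodrigues(rvec)[0], tvec

# In practice these pixels would come from the learned detector; here they
# correspond to the port viewed head-on at a range of 1 m.
keypoints_2d = np.array([[400.0, 320.0], [240.0, 320.0],
                         [240.0, 160.0], [400.0, 160.0]])
R, t = estimate_port_pose(keypoints_2d)
print(t.ravel())  # approximately [0, 0, 1]
```

SOLVEPNP_IPPE is chosen here because the placeholder keypoints are coplanar; a non-planar port model would use the default iterative solver instead.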
- Data downlink prioritization using image classification on-board a 6U CubeSat [0.0]
Kyushu Institute of Technology and collaborators have launched a joint venture for a nanosatellite mission, VERTECS.
The primary mission is to elucidate the formation history of stars by observing the optical-wavelength cosmic background radiation.
The VERTECS satellite will be equipped with a small-aperture telescope and a high-precision attitude control system to capture the cosmic data for analysis on the ground.
We propose an on-orbit system that autonomously classifies and then compresses desirable image data to prioritize and optimize the data downlink; a minimal classify-then-prioritize sketch follows this entry.
arXiv Detail & Related papers (2024-08-27T08:38:45Z)
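A minimal sketch of the classify-then-prioritize idea from the VERTECS entry above, assuming a placeholder classifier and lossless compression as a stand-in for the onboard codec:

```python
# Score frames with an (assumed) onboard classifier, downlink best-first.
from dataclasses import dataclass, field
import heapq
import zlib

@dataclass(order=True)
class Frame:
    priority: float  # negated score, so heapq pops the highest-value frame first
    name: str = field(compare=False)
    payload: bytes = field(compare=False)

def classify(payload: bytes) -> float:
    """Placeholder for the onboard classifier: a 'science value' in [0, 1]."""
    return (payload[0] if payload else 0) / 255.0

def build_downlink_queue(frames: dict) -> list:
    heap = [Frame(-classify(data), name, data) for name, data in frames.items()]
    heapq.heapify(heap)
    queue = []
    while heap:
        f = heapq.heappop(heap)
        queue.append((f.name, zlib.compress(f.payload)))  # lossless stand-in codec
    return queue

frames = {"img_001": b"\xf0" * 1024, "img_002": b"\x10" * 1024}
for name, blob in build_downlink_queue(frames):
    print(name, len(blob), "bytes after compression")
```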
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset collected in a harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The paper aims not only to capture temporal and spatial data diversity but also to show the impact of harsh conditions on the captured data.
arXiv Detail & Related papers (2023-04-27T20:21:18Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper we propose pose estimation software built on neural network architectures.
We show how low-power machine learning accelerators could enable the use of artificial intelligence in space; a minimal Edge TPU inference sketch follows this entry.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
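A sketch of quantized inference on a Coral Edge TPU with tflite_runtime, as the entry above discusses; the model file and its output layout are assumptions, not the paper's network.

```python
# Run an integer-quantized pose-regression model through the Edge TPU delegate.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="pose_net_int8_edgetpu.tflite",  # hypothetical compiled model
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Edge TPU models are integer-quantized, so inputs are uint8/int8 tensors.
image = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a camera frame
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()

raw = interpreter.get_tensor(out["index"])
scale, zero_point = out["quantization"]  # dequantize the regression output
pose = (raw.astype(np.float32) - zero_point) * scale
print(pose)  # e.g., a quaternion plus a translation vector
```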
- LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR Point Clouds [58.402752909624716]
Existing motion capture datasets are largely short-range and cannot yet meet the needs of long-range applications.
We propose LiDARHuman26M, a new human motion capture dataset captured by LiDAR at a much longer range to overcome this limitation.
Our dataset also includes the ground truth human motions acquired by the IMU system and the synchronous RGB images.
arXiv Detail & Related papers (2022-03-28T12:52:45Z)
- SPEED+: Next Generation Dataset for Spacecraft Pose Estimation across Domain Gap [0.9449650062296824]
This paper introduces SPEED+: the next generation spacecraft pose estimation dataset with specific emphasis on domain gap.
SPEED+ includes 9,531 simulated images of a spacecraft mockup model captured from the Testbed for Rendezvous and Optical Navigation (TRON) facility.
TRON is a first-of-a-kind robotic testbed capable of capturing an arbitrary number of target images with accurate and maximally diverse pose labels.
arXiv Detail & Related papers (2021-10-06T23:22:24Z)
- Memory-Augmented Reinforcement Learning for Image-Goal Navigation [67.3963444878746]
We present a novel method that leverages a cross-episode memory to learn to navigate.
To avoid overfitting, we propose applying data augmentation to the RGB input during training; a minimal augmentation sketch follows this entry.
We obtain this competitive performance from RGB input only, without access to additional sensors such as position or depth.
arXiv Detail & Related papers (2021-01-13T16:30:20Z)
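A minimal sketch of the RGB-input augmentation mentioned in the entry above; these particular transforms are common choices, not necessarily the paper's exact recipe.

```python
# Photometric and crop augmentation applied to RGB observations during training.
import torch
from torchvision import transforms

train_augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.RandomResizedCrop(size=(128, 128), scale=(0.8, 1.0)),
])

frame = torch.rand(3, 128, 128)   # stand-in for an agent's RGB observation
augmented = train_augment(frame)  # applied per step at training time only
print(augmented.shape)            # torch.Size([3, 128, 128])
```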
This list is automatically generated from the titles and abstracts of the papers on this site.