SPEED+: Next Generation Dataset for Spacecraft Pose Estimation across
Domain Gap
- URL: http://arxiv.org/abs/2110.03101v1
- Date: Wed, 6 Oct 2021 23:22:24 GMT
- Title: SPEED+: Next Generation Dataset for Spacecraft Pose Estimation across
Domain Gap
- Authors: Tae Ha Park, Marcus Märtens, Gurvan Lecuyer, Dario Izzo, Simone
D'Amico
- Abstract summary: This paper introduces SPEED+: the next generation spacecraft pose estimation dataset with specific emphasis on domain gap.
SPEED+ includes 9,531 simulated images of a spacecraft mockup model captured from the Testbed for Rendezvous and Optical Navigation (TRON) facility.
TRON is a first-of-a-kind robotic testbed capable of capturing an arbitrary number of target images with accurate and maximally diverse pose labels.
- Score: 0.9449650062296824
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Autonomous vision-based spaceborne navigation is an enabling technology for
future on-orbit servicing and space logistics missions. While computer vision
in general has benefited from Machine Learning (ML), training and validating
spaceborne ML models are extremely challenging due to the impracticality of
acquiring a large-scale labeled dataset of images of the intended target in the
space environment. Existing datasets, such as Spacecraft PosE Estimation
Dataset (SPEED), have so far mostly relied on synthetic images for both
training and validation, which are easy to mass-produce but fail to resemble
the visual features and illumination variability inherent to the target
spaceborne images. In order to bridge the gap between the current practices and
the intended applications in future space missions, this paper introduces
SPEED+: the next generation spacecraft pose estimation dataset with specific
emphasis on domain gap. In addition to 60,000 synthetic images for training,
SPEED+ includes 9,531 simulated images of a spacecraft mockup model captured
from the Testbed for Rendezvous and Optical Navigation (TRON) facility. TRON is
a first-of-a-kind robotic testbed capable of capturing an arbitrary number of
target images with accurate and maximally diverse pose labels and high-fidelity
spaceborne illumination conditions. SPEED+ will be used in the upcoming
international Satellite Pose Estimation Challenge co-hosted with the Advanced
Concepts Team of the European Space Agency to evaluate and compare the
robustness of spaceborne ML models trained on synthetic images.
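For reference, the SPEED challenge family ranks submissions by a combined pose error: the angular distance between the estimated and ground-truth orientation quaternions plus the translation error normalized by the true target range. The sketch below is a minimal, unofficial implementation of that combined metric, assuming scalar-first unit quaternions; the official SPEED+ scoring may differ in details such as zeroing out errors below the lab calibration precision.

```python
import numpy as np

def rotation_error(q_est, q_true):
    """Angular distance in radians between two unit quaternions."""
    q_est = np.array(q_est, dtype=float)
    q_true = np.array(q_true, dtype=float)
    q_est /= np.linalg.norm(q_est)
    q_true /= np.linalg.norm(q_true)
    # The absolute value handles the quaternion double cover (q and -q
    # represent the same rotation); clip guards against round-off.
    return 2.0 * np.arccos(np.clip(abs(np.dot(q_est, q_true)), 0.0, 1.0))

def translation_error(t_est, t_true):
    """Position error normalized by the true camera-to-target distance."""
    t_est = np.array(t_est, dtype=float)
    t_true = np.array(t_true, dtype=float)
    return np.linalg.norm(t_est - t_true) / np.linalg.norm(t_true)

def pose_score(q_est, t_est, q_true, t_true):
    """Combined pose error; lower is better."""
    return rotation_error(q_est, q_true) + translation_error(t_est, t_true)

# Example: a 1-degree attitude error about the x-axis plus a 5 cm
# position error at 10 m range scores roughly 0.0175 + 0.005 = 0.0225.
q_true = np.array([1.0, 0.0, 0.0, 0.0])
q_est = np.array([np.cos(np.radians(0.5)), np.sin(np.radians(0.5)), 0.0, 0.0])
print(pose_score(q_est, [0.05, 0.0, 10.0], q_true, [0.0, 0.0, 10.0]))
```

Under this convention, a 1-degree attitude error contributes about as much to the score as a 1.75% range-normalized position error.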
Related papers
- Bridging Domain Gap for Flight-Ready Spaceborne Vision [4.14360329494344]
This work presents Spacecraft Pose Network v3 (SPNv3), a Neural Network (NN) for monocular pose estimation of a known, non-cooperative target spacecraft.
SPNv3 is designed and trained to be computationally efficient while providing robustness to spaceborne images that have not been observed during offline training and validation on the ground.
Experiments demonstrate that the final SPNv3 can achieve state-of-the-art pose accuracy on hardware-in-the-loop images from a robotic testbed while having trained exclusively on computer-generated synthetic images.
arXiv Detail & Related papers (2024-09-18T02:56:50Z)
- Vision-Based Detection of Uncooperative Targets and Components on Small Satellites [6.999319023465766]
Space debris and inactive satellites pose a threat to the safety and integrity of operational spacecraft.
Recent advancements in computer vision models can be used to improve upon existing methods for tracking such uncooperative targets.
This paper introduces an autonomous detection model designed to identify and monitor these objects using machine learning and computer vision.
arXiv Detail & Related papers (2024-08-22T02:48:13Z)
- SPIN: Spacecraft Imagery for Navigation [8.155713824482767]
We present SPIN, an open-source realistic spacecraft image generation tool for relative navigation between two spacecraft.
SPIN provides a wide variety of ground-truth data and allows researchers to employ custom 3D models of satellites.
We show a 50% average error reduction on common testbed data that simulates realistic space conditions.
arXiv Detail & Related papers (2024-06-11T17:35:39Z)
- Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amounts of unlabelled data.
In this paper, we revisit transformer pre-training and leverage multi-scale information that is effectively utilized with multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment [59.320414108383055]
We present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation.
We also propose a large-scale human motion dataset, named FreeMotion, collected in various scenarios with diverse human poses.
arXiv Detail & Related papers (2024-02-27T03:08:44Z)
- On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z)
- Synthetic Data for Semantic Image Segmentation of Imagery of Unmanned Spacecraft [0.0]
Images of spacecraft photographed from other spacecraft operating in outer space are difficult to come by.
We propose a method for generating synthetic image data labelled for semantic segmentation, generalizable to other tasks.
We present a strong benchmark result on these synthetic data, suggesting that it is feasible to train well-performing image segmentation models for this task.
arXiv Detail & Related papers (2022-11-22T01:30:40Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper, we propose pose estimation software built on neural network architectures.
We show how low-power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
We propose a fully autonomous aerial robot for high-speed object grasping.
As an additional sub-task, our system can autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which just landed on Mars, marks the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- Vision-based Neural Scene Representations for Spacecraft [1.0323063834827415]
In advanced mission concepts, spacecraft need to internally model the pose and shape of nearby orbiting objects.
Recent works in neural scene representations show promising results for inferring generic three-dimensional scenes from optical images.
We compare and evaluate the potential of NeRF and GRAF to render novel views and extract the 3D shape of two different spacecraft.
arXiv Detail & Related papers (2021-05-11T08:35:05Z)
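Returning to the dataset itself: releases in the SPEED family ship images together with JSON pose labels. The loader sketch below is a plausible starting point, not a documented SPEED+ spec; the directory layout and field names mirror the original SPEED release and are assumptions here.

```python
import json
from pathlib import Path

def load_pose_labels(root, split="train"):
    """Parse a SPEED-style JSON label file into {filename: (quaternion, translation)}.

    Assumes a list of records, each holding the target orientation as a
    quaternion and the camera-to-target translation in meters. Field names
    mirror the original SPEED release and may differ in SPEED+.
    """
    with open(Path(root) / f"{split}.json") as f:
        records = json.load(f)
    labels = {}
    for rec in records:
        # Fall back across the label-key variants seen in SPEED-style data.
        q = rec.get("q_vbs2tango_true", rec.get("q_vbs2tango"))
        t = rec["r_Vo2To_vbs_true"]
        labels[rec["filename"]] = (q, t)
    return labels

# Hypothetical usage: train on synthetic labels, evaluate on the
# hardware-in-the-loop TRON domains (paths are illustrative).
# train_labels = load_pose_labels("speedplus/synthetic", "train")
# test_labels = load_pose_labels("speedplus/lightbox", "test")
```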
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.