SPEED+: Next Generation Dataset for Spacecraft Pose Estimation across
Domain Gap
- URL: http://arxiv.org/abs/2110.03101v1
- Date: Wed, 6 Oct 2021 23:22:24 GMT
- Title: SPEED+: Next Generation Dataset for Spacecraft Pose Estimation across
Domain Gap
- Authors: Tae Ha Park, Marcus Märtens, Gurvan Lecuyer, Dario Izzo, Simone
D'Amico
- Abstract summary: This paper introduces SPEED+: the next generation spacecraft pose estimation dataset with specific emphasis on domain gap.
SPEED+ includes 9,531 simulated images of a spacecraft mockup model captured from the Testbed for Rendezvous and Optical Navigation (TRON) facility.
TRON is a first-of-a-kind robotic testbed capable of capturing an arbitrary number of target images with accurate and maximally diverse pose labels.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Autonomous vision-based spaceborne navigation is an enabling technology for
future on-orbit servicing and space logistics missions. While computer vision
in general has benefited from Machine Learning (ML), training and validating
spaceborne ML models are extremely challenging due to the impracticality of
acquiring a large-scale labeled dataset of images of the intended target in the
space environment. Existing datasets, such as Spacecraft PosE Estimation
Dataset (SPEED), have so far mostly relied on synthetic images for both
training and validation, which are easy to mass-produce but fail to resemble
the visual features and illumination variability inherent to the target
spaceborne images. In order to bridge the gap between the current practices and
the intended applications in future space missions, this paper introduces
SPEED+: the next generation spacecraft pose estimation dataset with specific
emphasis on domain gap. In addition to 60,000 synthetic images for training,
SPEED+ includes 9,531 simulated images of a spacecraft mockup model captured
from the Testbed for Rendezvous and Optical Navigation (TRON) facility. TRON is
a first-of-a-kind robotic testbed capable of capturing an arbitrary number of
target images with accurate and maximally diverse pose labels and high-fidelity
spaceborne illumination conditions. SPEED+ will be used in the upcoming
international Satellite Pose Estimation Challenge co-hosted with the Advanced
Concepts Team of the European Space Agency to evaluate and compare the
robustness of spaceborne ML models trained on synthetic images.
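Challenges in the SPEED line of work typically score a predicted pose against its ground-truth label as a combined rotation-plus-translation error: the angular distance between the estimated and true attitude quaternions, plus the position error normalized by the true range. A minimal sketch of such a metric, assuming unit-quaternion attitude labels (the function name and exact weighting are illustrative, not the official challenge scoring code):

```python
import numpy as np

def pose_error(q_est, q_gt, t_est, t_gt):
    """Combined pose error in the style of the SPEED benchmark (illustrative).

    q_est, q_gt: quaternions (4-vectors) for estimated / ground-truth attitude.
    t_est, t_gt: 3-vectors for estimated / ground-truth relative position.
    """
    q_est = np.asarray(q_est, dtype=float)
    q_gt = np.asarray(q_gt, dtype=float)
    q_est = q_est / np.linalg.norm(q_est)
    q_gt = q_gt / np.linalg.norm(q_gt)

    # Orientation error: rotation angle between the two attitudes, made
    # insensitive to the sign ambiguity q ~ -q of unit quaternions.
    dot = np.clip(abs(np.dot(q_est, q_gt)), 0.0, 1.0)
    err_rot = 2.0 * np.arccos(dot)  # radians

    # Translation error, normalized by the ground-truth range so that
    # near and far targets are weighted comparably.
    err_trans = np.linalg.norm(np.asarray(t_est, dtype=float)
                               - np.asarray(t_gt, dtype=float)) / np.linalg.norm(t_gt)

    return err_rot + err_trans

# Identical poses give zero error.
print(pose_error([1, 0, 0, 0], [1, 0, 0, 0], [0, 0, 10], [0, 0, 10]))  # → 0.0
```

Normalizing the translation term by the target range keeps the metric meaningful across the wide range of separation distances found in rendezvous imagery.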
Related papers
- SPIN: Spacecraft Imagery for Navigation [8.155713824482767]
We present SPIN, an open-source realistic spacecraft image generation tool for relative navigation between two spacecraft.
SPIN provides a wide variety of ground-truth data and allows researchers to employ custom 3D models of satellites.
We show a 50% average error reduction on common testbed data that simulate realistic space conditions.
arXiv Detail & Related papers (2024-06-11T17:35:39Z)
- Getting it Right: Improving Spatial Consistency in Text-to-Image Models [103.52640413616436]
One of the key shortcomings in current text-to-image (T2I) models is their inability to consistently generate images which faithfully follow the spatial relationships specified in the text prompt.
We create SPRIGHT, the first spatially-focused, large scale dataset, by re-captioning 6 million images from 4 widely used vision datasets.
We attain state-of-the-art on T2I-CompBench with a spatial score of 0.2133, by fine-tuning on 500 images.
arXiv Detail & Related papers (2024-04-01T15:55:25Z)
- Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amounts of unlabelled data.
In this paper, we re-visit transformers pre-training and leverage multi-scale information that is effectively utilized with multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment [59.320414108383055]
We present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation.
We propose a huge human motion dataset, named FreeMotion, which is collected in various scenarios with diverse human poses.
arXiv Detail & Related papers (2024-02-27T03:08:44Z)
- On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z)
- SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft Feature Detection [0.0]
Real-time, automated spacecraft feature recognition is needed to pinpoint the locations of collision hazards.
The new SpaceYOLO algorithm fuses the state-of-the-art object detector YOLOv5 with a separate neural network based on human-inspired decision processes.
Performance in autonomous spacecraft detection of SpaceYOLO is compared to ordinary YOLOv5 in hardware-in-the-loop experiments.
arXiv Detail & Related papers (2023-02-02T02:11:39Z)
- Synthetic Data for Semantic Image Segmentation of Imagery of Unmanned Spacecraft [0.0]
Images of spacecraft photographed from other spacecraft operating in outer space are difficult to come by.
We propose a method for generating synthetic image data labelled for semantic segmentation, generalizable to other tasks.
We present a strong benchmark result on these synthetic data, suggesting that it is feasible to train well-performing image segmentation models for this task.
arXiv Detail & Related papers (2022-11-22T01:30:40Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper we propose pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located in poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- Vision-based Neural Scene Representations for Spacecraft [1.0323063834827415]
In advanced mission concepts, spacecraft need to internally model the pose and shape of nearby orbiting objects.
Recent works in neural scene representations show promising results for inferring generic three-dimensional scenes from optical images.
We compare and evaluate the potential of NeRF and GRAF to render novel views and extract the 3D shape of two different spacecraft.
arXiv Detail & Related papers (2021-05-11T08:35:05Z)
- SPARK: SPAcecraft Recognition leveraging Knowledge of Space Environment [10.068428438297563]
This paper proposes the SPARK dataset as a new unique space object multi-modal image dataset.
The SPARK dataset has been generated under a realistic space simulation environment.
It provides about 150k images per modality (RGB and depth) and 11 classes of spacecraft and debris.
arXiv Detail & Related papers (2021-04-13T07:16:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.