SPADES: A Realistic Spacecraft Pose Estimation Dataset using Event
Sensing
- URL: http://arxiv.org/abs/2311.05310v1
- Date: Thu, 9 Nov 2023 12:14:47 GMT
- Authors: Arunkumar Rathinam, Haytam Qadadri and Djamila Aouada
- Abstract summary: Due to limited access to real target datasets, algorithms are often trained using synthetic data and applied in the real domain.
Event sensing has been explored in the past and shown to reduce the domain gap between simulations and real-world scenarios.
We introduce a novel dataset, SPADES, comprising real event data acquired in a controlled laboratory environment and simulated event data using the same camera intrinsics.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, there has been a growing demand for improved autonomy for
in-orbit operations such as rendezvous, docking, and proximity maneuvers,
leading to increased interest in employing Deep Learning-based Spacecraft Pose
Estimation techniques. However, due to limited access to real target datasets,
algorithms are often trained using synthetic data and applied in the real
domain, resulting in a performance drop due to the domain gap. State-of-the-art
approaches employ Domain Adaptation techniques to mitigate this issue. In the
search for viable solutions, event sensing has been explored in the past and
shown to reduce the domain gap between simulations and real-world scenarios.
Event sensors have made significant advancements in hardware and software in
recent years. Moreover, the characteristics of the event sensor offer several
advantages in space applications compared to RGB sensors. To facilitate further
training and evaluation of DL-based models, we introduce a novel dataset,
SPADES, comprising real event data acquired in a controlled laboratory
environment and simulated event data using the same camera intrinsics.
Furthermore, we propose an effective data filtering method to improve the
quality of training data, thus enhancing model performance. Additionally, we
introduce an image-based event representation that outperforms existing
representations. A multifaceted baseline evaluation was conducted using
different event representations, event filtering strategies, and algorithmic
frameworks, and the results are summarized. The dataset will be made available
at http://cvi2.uni.lu/spades.
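The abstract refers to an image-based event representation. As a rough illustration of the general idea only (the paper's specific representation is not reproduced here), an event stream can be accumulated into a two-channel count image, with one channel per polarity:

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate an event stream into a 2-channel count image.

    `events` is an (N, 4) array of (x, y, timestamp, polarity),
    with polarity in {-1, +1}. Channel 0 counts positive events,
    channel 1 counts negative events at each pixel.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    pol = events[:, 3]
    # np.add.at handles repeated indices correctly (unbuffered add)
    np.add.at(frame[0], (y[pol > 0], x[pol > 0]), 1.0)
    np.add.at(frame[1], (y[pol < 0], x[pol < 0]), 1.0)
    return frame

# Example: three synthetic events on a 4x4 sensor
events = np.array([
    [1, 2, 0.001, +1],
    [1, 2, 0.002, +1],
    [3, 0, 0.003, -1],
])
frame = events_to_frame(events, height=4, width=4)
# frame[0, 2, 1] == 2.0 (two positive events at x=1, y=2)
# frame[1, 0, 3] == 1.0 (one negative event at x=3, y=0)
```

A frame like this can be fed directly to standard image-based pose-estimation backbones, which is what makes such representations convenient for transfer from RGB pipelines.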
Related papers
- How Important are Data Augmentations to Close the Domain Gap for Object Detection in Orbit?
We investigate the efficacy of data augmentations to close the domain gap in spaceborne computer vision.
We propose two novel data augmentations specifically developed to emulate the visual effects observed in orbital imagery.
arXiv Detail & Related papers (2024-10-21T08:24:46Z)
- Quanv4EO: Empowering Earth Observation by means of Quanvolutional Neural Networks
This article highlights a significant shift towards leveraging quantum computing techniques in processing large volumes of remote sensing data.
The proposed Quanv4EO model introduces a quanvolution method for preprocessing multi-dimensional EO data.
Key findings suggest that the proposed model not only maintains high precision in image classification but also shows improvements of around 5% in EO use cases.
arXiv Detail & Related papers (2024-07-24T09:11:34Z)
- Synthetic-to-Real Domain Adaptation for Action Recognition: A Dataset and Baseline Performances
We introduce a new dataset called Robot Control Gestures (RoCoG-v2).
The dataset is composed of both real and synthetic videos from seven gesture classes.
We present results using state-of-the-art action recognition and domain adaptation algorithms.
arXiv Detail & Related papers (2023-03-17T23:23:55Z)
- A New Benchmark: On the Utility of Synthetic Data with Blender for Bare Supervised Learning and Downstream Domain Adaptation
Deep learning in computer vision has achieved great success with the price of large-scale labeled training data.
The uncontrollable data collection process produces non-IID training and test data, where undesired duplication may exist.
To circumvent them, an alternative is to generate synthetic data via 3D rendering with domain randomization.
arXiv Detail & Related papers (2023-03-16T09:03:52Z)
- One-Shot Domain Adaptive and Generalizable Semantic Segmentation with Class-Aware Cross-Domain Transformers
Unsupervised sim-to-real domain adaptation (UDA) for semantic segmentation aims to improve the real-world test performance of a model trained on simulated data.
Traditional UDA often assumes that there are abundant unlabeled real-world data samples available during training for the adaptation.
We explore the one-shot unsupervised sim-to-real domain adaptation (OSUDA) and generalization problem, where only one real-world data sample is available.
arXiv Detail & Related papers (2022-12-14T15:54:15Z)
- Towards Bridging the Space Domain Gap for Satellite Pose Estimation using Event Sensing
Event sensing offers a promising solution to generalise from the simulation to the target domain under stark illumination differences.
Our main contribution is an event-based satellite pose estimation technique, trained purely on synthetic data.
Results on the dataset showed that our event-based satellite pose estimation method, trained only on synthetic data without adaptation, could generalise to the target domain effectively.
arXiv Detail & Related papers (2022-09-24T07:22:09Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework at KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- DA4Event: towards bridging the Sim-to-Real Gap for Event Cameras using Domain Adaptation
Event cameras capture pixel-level intensity changes in the form of "events".
The novelty of these sensors results in the lack of a large amount of training data capable of unlocking their potential.
We propose a novel architecture, which better exploits the peculiarities of frame-based event representations.
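Frame-based event representations, which DA4Event builds on, typically discretize the asynchronous event stream into a fixed-size tensor. A common generic variant (shown here as an illustrative sketch, not the paper's exact method) is a temporal voxel grid that distributes each event's polarity between its two nearest time bins:

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Discretize events into a temporal voxel grid (num_bins, H, W).

    `events` is an (N, 4) array of (x, y, timestamp, polarity).
    Each event's polarity is split between the two nearest temporal
    bins by linear interpolation, preserving coarse timing information
    that a single accumulated frame would lose.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 2]
    # Normalize timestamps to the range [0, num_bins - 1]
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    pol = events[:, 3]
    left = np.floor(t_norm).astype(int)
    right = np.clip(left + 1, 0, num_bins - 1)
    w_right = t_norm - left
    # Split each event's contribution between adjacent bins
    np.add.at(grid, (left, y, x), pol * (1.0 - w_right))
    np.add.at(grid, (right, y, x), pol * w_right)
    return grid

# Example: two positive events at the start and end of the window
events = np.array([
    [0, 0, 0.0, 1.0],
    [1, 1, 1.0, 1.0],
])
grid = events_to_voxel_grid(events, num_bins=2, height=2, width=2)
# grid[0, 0, 0] == 1.0 (first event lands fully in bin 0)
# grid[1, 1, 1] == 1.0 (second event lands fully in bin 1)
```

Such tensors can be consumed by ordinary 2D CNNs, which is what allows frame-based domain adaptation techniques to be applied to event data.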
arXiv Detail & Related papers (2021-03-23T18:09:20Z)
- ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework for LiDAR Point Cloud Segmentation
Training deep neural networks (DNNs) on LiDAR data requires large-scale point-wise annotations.
Simulation-to-real domain adaptation (SRDA) trains a DNN using unlimited synthetic data with automatically generated labels.
ePointDA consists of three modules: self-supervised dropout noise rendering, statistics-invariant and spatially-adaptive feature alignment, and transferable segmentation learning.
arXiv Detail & Related papers (2020-09-07T23:46:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.