Synthetic Data for Semantic Image Segmentation of Imagery of Unmanned
Spacecraft
- URL: http://arxiv.org/abs/2211.11941v1
- Date: Tue, 22 Nov 2022 01:30:40 GMT
- Authors: William S. Armstrong, Spencer Drakontaidis, Nicholas Lui
- Abstract summary: Images of spacecraft photographed from other spacecraft operating in outer space are difficult to come by.
We propose a method for generating synthetic image data labelled for semantic segmentation, generalizable to other tasks.
We present a strong benchmark result on these synthetic data, suggesting that it is feasible to train well-performing image segmentation models for this task.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Images of spacecraft photographed from other spacecraft operating in outer
space are difficult to come by, especially at a scale typically required for
deep learning tasks. Semantic image segmentation, object detection and
localization, and pose estimation are well researched areas with powerful
results for many applications, and would be very useful in autonomous
spacecraft operation and rendezvous. However, recent studies show that these
strong results in broad and common domains may generalize poorly even to
specific industrial applications on earth. To address this, we propose a method
for generating synthetic image data that are labelled for semantic
segmentation, generalizable to other tasks, and provide a prototype synthetic
image dataset consisting of 2D monocular images of unmanned spacecraft, in
order to enable further research in the area of autonomous spacecraft
rendezvous. We also present a strong benchmark result (Sørensen-Dice
coefficient 0.8723) on these synthetic data, suggesting that it is feasible to
train well-performing image segmentation models for this task, especially if
the target spacecraft and its configuration are known.
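For reference, the Sørensen-Dice coefficient reported above is the standard overlap metric 2|A∩B| / (|A| + |B|) between a predicted mask and a ground-truth mask. A minimal NumPy sketch for binary segmentation masks (the standard definition, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Sørensen-Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: a predicted segmentation vs. its ground truth
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
print(round(dice_coefficient(pred, gt), 4))  # → 0.9091
```

A score of 1.0 means perfect overlap; the paper's 0.8723 indicates the predicted spacecraft masks closely match the labels on the synthetic data.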
Related papers
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- Rotated Multi-Scale Interaction Network for Referring Remote Sensing Image Segmentation [63.15257949821558]
Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that combines computer vision and natural language processing.
Traditional Referring Image Segmentation (RIS) approaches have been impeded by the complex spatial scales and orientations found in aerial imagery.
We introduce the Rotated Multi-Scale Interaction Network (RMSIN), an innovative approach designed for the unique demands of RRSIS.
arXiv Detail & Related papers (2023-12-19T08:14:14Z)
- LARD - Landing Approach Runway Detection -- Dataset for Vision Based Landing [2.7400353551392853]
We present a dataset of high-quality aerial images for the task of runway detection during approach and landing phases.
Most of the dataset is composed of synthetic images, but we also provide manually labelled images from real landing footage.
This dataset paves the way for further research such as the analysis of dataset quality or the development of models to cope with the detection tasks.
arXiv Detail & Related papers (2023-04-05T08:25:55Z)
- Semantic Segmentation of Vegetation in Remote Sensing Imagery Using Deep Learning [77.34726150561087]
We propose an approach for creating a multi-modal and large-temporal dataset comprised of publicly available Remote Sensing data.
We use Convolutional Neural Network (CNN) models capable of separating different classes of vegetation.
arXiv Detail & Related papers (2022-09-28T18:51:59Z)
- Incorporating Texture Information into Dimensionality Reduction for High-Dimensional Images [65.74185962364211]
We present a method for incorporating neighborhood information into distance-based dimensionality reduction methods.
Based on a classification of different methods for comparing image patches, we explore a number of different approaches.
arXiv Detail & Related papers (2022-02-18T13:17:43Z)
- SPEED+: Next Generation Dataset for Spacecraft Pose Estimation across Domain Gap [0.9449650062296824]
This paper introduces SPEED+: the next generation spacecraft pose estimation dataset with specific emphasis on domain gap.
SPEED+ includes 9,531 simulated images of a spacecraft mockup model captured from the Testbed for Rendezvous and Optical Navigation (TRON) facility.
TRON is a first-of-a-kind robotic testbed capable of capturing an arbitrary number of target images with accurate and maximally diverse pose labels.
arXiv Detail & Related papers (2021-10-06T23:22:24Z)
- A Spacecraft Dataset for Detection, Segmentation and Parts Recognition [42.27081423489484]
In this paper, we release a dataset for spacecraft detection, instance segmentation and part recognition.
The main contribution of this work is the development of the dataset using images of space stations and satellites.
We also provide evaluations with state-of-the-art methods in object detection and instance segmentation as a benchmark for the dataset.
arXiv Detail & Related papers (2021-06-15T14:36:56Z)
- Low-Rank Subspaces in GANs [101.48350547067628]
This work introduces low-rank subspaces that enable more precise control of GAN generation.
LowRankGAN is able to find a low-dimensional representation of the attribute manifold.
Experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.
arXiv Detail & Related papers (2021-06-08T16:16:32Z)
- SPARK: SPAcecraft Recognition leveraging Knowledge of Space Environment [10.068428438297563]
This paper proposes the SPARK dataset, a new multi-modal image dataset of space objects.
The SPARK dataset has been generated under a realistic space simulation environment.
It provides about 150k images per modality (RGB and depth) and 11 classes of spacecraft and debris.
arXiv Detail & Related papers (2021-04-13T07:16:55Z)
- Generating Synthetic Multispectral Satellite Imagery from Sentinel-2 [3.4797121357690153]
We propose a generative model to produce multi-resolution multi-spectral imagery based on Sentinel-2 data.
The resulting synthetic images are indistinguishable from real ones by humans.
arXiv Detail & Related papers (2020-12-05T19:41:33Z)
- Unlimited Resolution Image Generation with R2D2-GANs [69.90258455164513]
We present a novel simulation technique for generating high quality images of any predefined resolution.
This method can be used to synthesize sonar scans of size equivalent to those collected during a full-length mission.
The data produced is continuous, realistic-looking, and can be generated at least two times faster than the real speed of acquisition.
arXiv Detail & Related papers (2020-03-02T17:49:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.