A Spacecraft Dataset for Detection, Segmentation and Parts Recognition
- URL: http://arxiv.org/abs/2106.08186v1
- Date: Tue, 15 Jun 2021 14:36:56 GMT
- Title: A Spacecraft Dataset for Detection, Segmentation and Parts Recognition
- Authors: Dung Anh Hoang and Bo Chen and Tat-Jun Chin
- Abstract summary: In this paper, we release a dataset for spacecraft detection, instance segmentation and part recognition.
The main contribution of this work is the development of the dataset using images of space stations and satellites.
We also provide evaluations with state-of-the-art methods in object detection and instance segmentation as a benchmark for the dataset.
- Score: 42.27081423489484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtually all aspects of modern life depend on space technology.
Thanks to the great advancement of computer vision in general and deep
learning-based techniques in particular, over the decades the world has
witnessed the growing use of deep learning in solving problems for space
applications, such as self-driving robots, tracers, insect-like robots in
space, and health monitoring of spacecraft. These are just some prominent
examples that have advanced the space industry with the help of deep learning.
However, deep learning models require a large amount of training data to
achieve decent performance, while only a very limited amount of space data is
publicly available for training them. Currently, there are no public datasets
for space-based object detection or instance segmentation, partly because
manually annotating object segmentation masks is very time-consuming, as they
require pixel-level labelling, not to mention the challenge of obtaining
images from space. In this paper, we aim to fill this gap by releasing a
dataset for spacecraft detection, instance segmentation and part recognition.
The main contribution of this work is the development of the dataset using
images of space stations and satellites, with rich annotations including
bounding boxes of spacecraft and masks down to the level of object parts,
obtained with a mixture of automatic processes and manual effort. We also
provide evaluations with state-of-the-art methods in object detection and
instance segmentation as a benchmark for the dataset. The proposed dataset
can be downloaded from https://github.com/Yurushia1998/SatelliteDataset.
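The abstract does not specify the annotation format, but instance segmentation datasets commonly use COCO-style JSON (images, annotations with bounding boxes and polygon masks, and categories). As a minimal sketch under that assumption (the field names and the `solar_panel` category here are hypothetical, not from the paper; check the repository's documentation for the actual schema):

```python
# Hypothetical COCO-style annotations; in practice these would be
# loaded from a file with json.load(). The actual schema of the
# released SatelliteDataset may differ.
annotations = {
    "images": [
        {"id": 1, "file_name": "station_0001.png", "width": 640, "height": 480},
    ],
    "annotations": [
        {"image_id": 1, "category_id": 2,
         "bbox": [100, 50, 200, 150],                               # x, y, w, h
         "segmentation": [[100, 50, 300, 50, 300, 200, 100, 200]]}, # polygon
    ],
    "categories": [{"id": 2, "name": "solar_panel"}],
}

def boxes_for_image(coco, image_id):
    """Collect (category name, bbox) pairs for one image id."""
    names = {c["id"]: c["name"] for c in coco["categories"]}
    return [(names[a["category_id"]], a["bbox"])
            for a in coco["annotations"] if a["image_id"] == image_id]

print(boxes_for_image(annotations, 1))  # [('solar_panel', [100, 50, 200, 150])]
```

A part-level mask would typically be one more annotation entry whose category names the part rather than the whole spacecraft.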
Related papers
- SupeRGB-D: Zero-shot Instance Segmentation in Cluttered Indoor
Environments [67.34330257205525]
In this work, we explore zero-shot instance segmentation (ZSIS) from RGB-D data to identify unseen objects in a semantic category-agnostic manner.
We present a method that uses annotated objects to learn the "objectness" of pixels and generalize to unseen object categories in cluttered indoor environments.
arXiv Detail & Related papers (2022-12-22T17:59:48Z)
- AstroVision: Towards Autonomous Feature Detection and Description for
Missions to Small Bodies Using Deep Learning [14.35670544436183]
This paper introduces AstroVision, a large-scale dataset comprised of 115,970 densely annotated, real images of 16 different small bodies captured during past and ongoing missions.
We leverage AstroVision to develop a set of standardized benchmarks and conduct an exhaustive evaluation of both handcrafted and data-driven feature detection and description methods.
Next, we employ AstroVision for end-to-end training of a state-of-the-art, deep feature detection and description network and demonstrate improved performance on multiple benchmarks.
arXiv Detail & Related papers (2022-08-03T13:18:44Z)
- Satellite Image Time Series Analysis for Big Earth Observation Data [50.591267188664666]
This paper describes sits, an open-source R package for satellite image time series analysis using machine learning.
We show that this approach produces high accuracy for land use and land cover maps through a case study in the Cerrado biome.
arXiv Detail & Related papers (2022-04-24T15:23:25Z)
- REGRAD: A Large-Scale Relational Grasp Dataset for Safe and
Object-Specific Robotic Grasping in Clutter [52.117388513480435]
We present a new dataset named REGRAD to support the modeling of relationships among objects and grasps.
Our dataset is collected in both forms of 2D images and 3D point clouds.
Users are free to import their own object models to generate as much data as they want.
arXiv Detail & Related papers (2021-04-29T05:31:21Z)
- DeepSatData: Building large scale datasets of satellite images for
training machine learning models [77.17638664503215]
This report presents design considerations for automatically generating satellite imagery datasets for training machine learning models.
We discuss issues faced from the point of view of deep neural network training and evaluation.
arXiv Detail & Related papers (2021-04-28T15:13:12Z)
- SPARK: SPAcecraft Recognition leveraging Knowledge of Space Environment [10.068428438297563]
This paper proposes the SPARK dataset as a new unique space object multi-modal image dataset.
The SPARK dataset has been generated under a realistic space simulation environment.
It provides about 150k images per modality (RGB and depth) and 11 classes of spacecraft and debris.
arXiv Detail & Related papers (2021-04-13T07:16:55Z)
- Batch Exploration with Examples for Scalable Robotic Reinforcement
Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state-space guided by a modest number of human provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
arXiv Detail & Related papers (2020-10-22T17:49:25Z)
- Auxiliary-task learning for geographic data with autoregressive
embeddings [1.4823143667165382]
We propose SXL, a method for embedding information on the autoregressive nature of spatial data directly into the learning process.
We utilize the local Moran's I, a popular measure of local spatial autocorrelation, to "nudge" the model to learn the direction and magnitude of local spatial effects.
We highlight how our method consistently improves the training of neural networks in unsupervised and supervised learning tasks.
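As a concrete illustration of the measure mentioned above (not code from the SXL paper), the local Moran's I statistic for a value vector x with spatial weights w is I_i = (z_i / m2) * sum_j w_ij * z_j, where z = x - mean(x) and m2 = sum(z^2) / n. A minimal NumPy sketch, with a hypothetical four-cell line of neighbors as the weight matrix:

```python
import numpy as np

def local_morans_i(x, w):
    """Local Moran's I for values x (shape n,) and weights w (shape n, n).

    I_i = (z_i / m2) * sum_j w_ij * z_j, with z = x - mean(x) and
    m2 = sum(z**2) / n. Rows of w are typically row-standardized so each
    cell's neighbors sum to 1.
    """
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    m2 = (z ** 2).sum() / len(x)
    return (z / m2) * (w @ z)

# Four cells on a line; each cell weights its immediate neighbors equally.
w = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
vals = local_morans_i([1.0, 2.0, 3.0, 4.0], w)
# All values are positive here: each cell resembles its neighbors,
# i.e. the monotone trend shows positive local spatial autocorrelation.
```

Positive I_i flags a cell similar to its neighbors and negative I_i a local outlier, which is the direction-and-magnitude signal the "nudge" described above exploits.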
arXiv Detail & Related papers (2020-06-18T12:16:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.