A New Dataset and Performance Benchmark for Real-time Spacecraft Segmentation in Onboard Flight Computers
- URL: http://arxiv.org/abs/2507.10775v1
- Date: Mon, 14 Jul 2025 20:02:40 GMT
- Title: A New Dataset and Performance Benchmark for Real-time Spacecraft Segmentation in Onboard Flight Computers
- Authors: Jeffrey Joan Sam, Janhavi Sathe, Nikhil Chigali, Naman Gupta, Radhey Ruparel, Yicheng Jiang, Janmajay Singh, James W. Berck, Arko Barman
- Abstract summary: We present a new dataset of nearly 64k annotated spacecraft images that was created using real spacecraft models. To mimic camera distortions and noise in real-world image acquisition, we also added different types of noise and distortion to the images. The resulting models, when tested under well-defined hardware and inference time constraints, achieved a Dice score of 0.92, Hausdorff distance of 0.69, and an inference time of about 0.5 seconds.
- Score: 2.8519379677270997
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spacecraft deployed in outer space are routinely subjected to various forms of damage due to exposure to hazardous environments. In addition, there are significant risks to the subsequent process of in-space repairs through human extravehicular activity or robotic manipulation, incurring substantial operational costs. Recent developments in image segmentation could enable the development of reliable and cost-effective autonomous inspection systems. While these models often require large amounts of training data to achieve satisfactory results, publicly available annotated spacecraft segmentation data are very scarce. Here, we present a new dataset of nearly 64k annotated spacecraft images that was created using real spacecraft models, superimposed on a mixture of real and synthetic backgrounds generated using NASA's TTALOS pipeline. To mimic camera distortions and noise in real-world image acquisition, we also added different types of noise and distortion to the images. Finally, we finetuned YOLOv8 and YOLOv11 segmentation models to generate performance benchmarks for the dataset under well-defined hardware and inference time constraints to mimic real-world image segmentation challenges for real-time onboard applications in space on NASA's inspector spacecraft. The resulting models, when tested under these constraints, achieved a Dice score of 0.92, Hausdorff distance of 0.69, and an inference time of about 0.5 seconds. The dataset and benchmark models are available at https://github.com/RiceD2KLab/SWiM.
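The benchmark above is reported in terms of Dice score and Hausdorff distance. As a minimal sketch (not the authors' released evaluation code from the SWiM repository), both metrics can be computed from a pair of binary segmentation masks with NumPy:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / total

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance (in pixels) between two binary masks."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    # Pairwise Euclidean distances between all foreground pixel coordinates.
    d = np.sqrt(((p[:, None, :] - g[None, :, :]) ** 2).sum(axis=-1))
    # Farthest nearest-neighbour distance, taken in both directions.
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

The brute-force pairwise distance matrix is fine for small masks; for full-resolution images, a distance-transform-based implementation (e.g. `scipy.ndimage.distance_transform_edt`) scales much better.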
Related papers
- High Performance Space Debris Tracking in Complex Skylight Backgrounds with a Large-Scale Dataset [48.32788509877459]
We propose a deep learning-based Space Debris Tracking Network (SDT-Net) to achieve highly accurate debris tracking. SDT-Net effectively represents the feature of debris, enhancing the efficiency and stability of end-to-end model learning. Our dataset and code will be released soon.
arXiv Detail & Related papers (2025-06-03T08:30:25Z) - XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a synthetic dataset for novel driving view synthesis evaluation. It includes testing images captured by deviating from the training trajectory by $1-4$ meters. We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z) - SPIN: Spacecraft Imagery for Navigation [10.306879210363512]
The scarcity of data acquired under actual space operational conditions poses a significant challenge for developing learning-based visual navigation algorithms. We present SPIN, an open-source tool designed to support a wide range of visual navigation scenarios in space. SPIN provides multiple modalities of ground-truth data and allows researchers to employ custom 3D models of satellites.
arXiv Detail & Related papers (2024-06-11T17:35:39Z) - LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment [59.320414108383055]
We present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation.
We propose a huge human motion dataset, named FreeMotion, which is collected in various scenarios with diverse human poses.
arXiv Detail & Related papers (2024-02-27T03:08:44Z) - LROC-PANGU-GAN: Closing the Simulation Gap in Learning Crater Segmentation with Planetary Simulators [5.667566032625522]
It is critical for probes landing on foreign planetary bodies to be able to robustly identify and avoid hazards.
Recent applications of deep learning to this problem show promising results.
These models are, however, often learned with explicit supervision over annotated datasets.
This paper introduces a system to close this "realism" gap while retaining label fidelity.
arXiv Detail & Related papers (2023-10-04T12:52:38Z) - On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z) - A Simulation-Augmented Benchmarking Framework for Automatic RSO Streak Detection in Single-Frame Space Images [7.457841062817294]
Deep convolutional neural networks (DCNNs) have shown superior performance in object detection when large-scale datasets are available.
We introduce a novel simulation-augmented benchmarking framework for RSO detection (SAB-RSOD).
In our framework, by making the best use of the hardware parameters of the sensor that captures real-world space images, we first develop a high-fidelity RSO simulator.
Then, we use this simulator to generate images that contain diversified RSOs in space and annotate them automatically.
arXiv Detail & Related papers (2023-04-30T07:00:16Z) - Synthetic Data for Semantic Image Segmentation of Imagery of Unmanned Spacecraft [0.0]
Images of spacecraft photographed from other spacecraft operating in outer space are difficult to come by.
We propose a method for generating synthetic image data labelled for semantic segmentation, generalizable to other tasks.
We present a strong benchmark result on these synthetic data, suggesting that it is feasible to train well-performing image segmentation models for this task.
arXiv Detail & Related papers (2022-11-22T01:30:40Z) - Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via a neural feature.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z) - Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper we propose a pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z) - SPEED+: Next Generation Dataset for Spacecraft Pose Estimation across
Domain Gap [0.9449650062296824]
This paper introduces SPEED+: the next generation spacecraft pose estimation dataset with specific emphasis on domain gap.
SPEED+ includes 9,531 simulated images of a spacecraft mockup model captured from the Testbed for Rendezvous and Optical Navigation (TRON) facility.
TRON is a first-of-a-kind robotic testbed capable of capturing an arbitrary number of target images with accurate and maximally diverse pose labels.
arXiv Detail & Related papers (2021-10-06T23:22:24Z) - A Spacecraft Dataset for Detection, Segmentation and Parts Recognition [42.27081423489484]
In this paper, we release a dataset for spacecraft detection, instance segmentation and part recognition.
The main contribution of this work is the development of the dataset using images of space stations and satellites.
We also provide evaluations with state-of-the-art methods in object detection and instance segmentation as a benchmark for the dataset.
arXiv Detail & Related papers (2021-06-15T14:36:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.