Toward Onboard AI-Enabled Solutions to Space Object Detection for Space Sustainability
- URL: http://arxiv.org/abs/2505.01650v1
- Date: Sat, 03 May 2025 01:56:52 GMT
- Title: Toward Onboard AI-Enabled Solutions to Space Object Detection for Space Sustainability
- Authors: Wenxuan Zhang, Peng Hu
- Abstract summary: This paper investigates the feasibility and effectiveness of employing vision sensors for space object detection. It introduces models based on the Squeeze-and-Excitation (SE) layer, Vision Transformer (ViT), and the Generalized Efficient Layer Aggregation Network (GELAN). Experimental results show that the proposed models achieve mean average precision at intersection over union threshold 0.5 (mAP50) scores of up to 0.751 and mean average precision averaged over intersection over union thresholds from 0.5 to 0.95 (mAP50:95) scores of up to 0.280.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid expansion of advanced low-Earth orbit (LEO) satellites in large constellations is positioning space assets as key to the future, enabling global internet access and relay systems for deep space missions. This expansion, however, raises the risk of on-orbit collisions; a solution is effective space object detection (SOD) for collision assessment and avoidance. In SOD, an LEO satellite must detect other satellites and objects with high precision and minimal delay. This paper investigates the feasibility and effectiveness of employing vision sensors for SOD tasks based on deep learning (DL) models. It introduces models based on the Squeeze-and-Excitation (SE) layer, Vision Transformer (ViT), and the Generalized Efficient Layer Aggregation Network (GELAN) and evaluates their performance under SOD scenarios. Experimental results show that the proposed models achieve mean average precision at intersection over union threshold 0.5 (mAP50) scores of up to 0.751 and mean average precision averaged over intersection over union thresholds from 0.5 to 0.95 (mAP50:95) scores of up to 0.280. Compared to the baseline GELAN-t model, the proposed GELAN-ViT-SE model increases the average mAP50 from 0.721 to 0.751, improves the mAP50:95 from 0.266 to 0.274, reduces giga floating point operations (GFLOPs) from 7.3 to 5.6, and lowers peak power consumption by 2.5%, from 2080.7 mW to 2028.7 mW.
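The reported mAP50 and mAP50:95 scores both rest on the intersection-over-union (IoU) criterion for matching detections to ground truth. A minimal sketch of that matching rule (the axis-aligned box format and threshold values are the standard detection convention, not details taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# mAP50 counts a detection as a true positive when IoU >= 0.5;
# mAP50:95 averages AP over thresholds 0.50, 0.55, ..., 0.95.
pred, truth = (0, 0, 10, 10), (5, 5, 15, 15)
score = iou(pred, truth)       # 25 / 175, about 0.143
hit_at_50 = score >= 0.5       # False: would not count toward mAP50
```

The stricter mAP50:95 metric explains why its scores (up to 0.280) sit far below the mAP50 scores (up to 0.751): most of its thresholds demand much tighter box overlap.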
Related papers
- VANGUARD: Vehicle-Anchored Ground Sample Distance Estimation for UAVs in GPS-Denied Environments [7.390183878674011]
VANGUARD is a lightweight, deterministic Geometric Perception Skill designed as a callable tool for aerial robots. On the DOTAv1.5 benchmark, VANGUARD achieves 6.87% median GSD error on 306 images. Integrated with SAM-based segmentation for downstream area measurement, the pipeline yields 19.7% median error on a 100-entry benchmark.
arXiv Detail & Related papers (2026-03-04T16:59:08Z) - YOLO-DS: Fine-Grained Feature Decoupling via Dual-Statistic Synergy Operator for Object Detection [55.58092342624062]
We propose YOLO-DS, a framework built around a novel Dual-Statistic Synergy Operator (DSO). YOLO-DS decouples object features by jointly modeling the channel-wise mean and the peak-to-mean difference. On the MS-COCO benchmark, YOLO-DS consistently outperforms YOLOv8 across five model scales.
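The two statistics this abstract names can be sketched directly; how the DSO fuses them is not specified in the summary, so the fusion step is omitted and only the statistics themselves are shown, on hypothetical channel data:

```python
def dual_statistics(channel):
    """Per-channel statistics: mean activation and peak-to-mean difference."""
    mean = sum(channel) / len(channel)
    peak_to_mean = max(channel) - mean
    return mean, peak_to_mean

# The mean captures overall response strength; the peak-to-mean
# difference measures how sharply the strongest activation stands out.
feature_map = [[0.1, 0.2, 0.3], [0.0, 0.0, 3.0]]   # two illustrative channels
stats = [dual_statistics(ch) for ch in feature_map]
```

The second channel has the same mean magnitude scale but a much larger peak-to-mean difference, which is the kind of contrast a dual-statistic operator can exploit to separate localized object responses from diffuse background activation.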
arXiv Detail & Related papers (2026-01-26T05:50:32Z) - YOLO-ROC: A High-Precision and Ultra-Lightweight Model for Real-Time Road Damage Detection [0.0]
Road damage detection is a critical task for ensuring traffic safety and maintaining infrastructure integrity. This paper proposes a high-precision and lightweight model, YOLO - Road Orthogonal Compact (YOLO-ROC).
arXiv Detail & Related papers (2025-07-31T03:35:19Z) - An Edge AI Solution for Space Object Detection [29.817805350971366]
We propose an Edge AI solution based on deep-learning-based vision sensing for space object detection tasks. We evaluate the performance of these models across various realistic space object detection scenarios.
arXiv Detail & Related papers (2025-05-08T14:51:19Z) - YOLO-LLTS: Real-Time Low-Light Traffic Sign Detection via Prior-Guided Enhancement and Multi-Branch Feature Interaction [45.79993863157494]
YOLO-LLTS is an end-to-end real-time traffic sign detection algorithm specifically designed for low-light environments. First, we introduce the High-Resolution Feature Map for Small Object Detection (HRFM-TOD) module to address indistinct small-object features in low-light scenarios. Second, we develop the Multi-branch Feature Interaction Attention (MFIA) module, which facilitates deep feature interaction across multiple receptive fields.
arXiv Detail & Related papers (2025-03-18T04:28:05Z) - Foreign-Object Detection in High-Voltage Transmission Line Based on Improved YOLOv8m [19.080692737423693]
This paper proposes an improved YOLOv8m-based model for detecting foreign objects on transmission lines. Experiments are conducted on a dataset collected from Yunnan Power Grid.
arXiv Detail & Related papers (2025-02-11T01:58:32Z) - Sensing for Space Safety and Sustainability: A Deep Learning Approach with Vision Transformers [29.817805350971366]
This paper discusses the satellite object detection (SOD) tasks and an onboard deep learning (DL) approach to the tasks. Two new DL models are proposed, called GELAN-ViT and GELAN-RepViT, which incorporate a vision transformer (ViT) into the Generalized Efficient Layer Aggregation Network (GELAN) architecture. These models outperform the state-of-the-art YOLOv9-t in terms of mean average precision (mAP) and computational costs.
arXiv Detail & Related papers (2024-12-12T03:51:50Z) - Fall Detection for Industrial Setups Using YOLOv8 Variants [0.0]
The YOLOv8m model, consisting of 25.9 million parameters and 79.1 GFLOPs, demonstrated a respectable balance between computational efficiency and detection performance.
Although the YOLOv8l and YOLOv8x models presented higher precision and recall, their higher computational demands and model size make them less suitable for resource-constrained environments.
arXiv Detail & Related papers (2024-08-08T17:24:54Z) - SOOD++: Leveraging Unlabeled Data to Boost Oriented Object Detection [68.18620488664187]
We propose a simple yet effective semi-supervised oriented object detection method termed SOOD++. Specifically, we observe that objects in aerial images usually have arbitrary orientations, small scales, and dense distribution. Extensive experiments conducted on various oriented object detection benchmarks under various labeled settings demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2024-07-01T07:03:51Z) - Semantic Segmentation in Satellite Hyperspectral Imagery by Deep Learning [54.094272065609815]
We propose a lightweight 1D-CNN model, 1D-Justo-LiuNet, which outperforms state-of-the-art models in the hyperspectral domain.
1D-Justo-LiuNet achieves the highest accuracy (0.93) with the smallest model size (4,563 parameters) among all tested models.
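A 1D-CNN of this kind is built from one-dimensional convolutions over the spectral axis. A minimal valid-mode 1D convolution, shown only as an illustration of the building block and not of the 1D-Justo-LiuNet architecture itself:

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in most DL frameworks)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# Sliding a 3-tap kernel over a 5-sample spectrum yields 3 outputs;
# a [1, 0, -1] kernel acts as a discrete spectral gradient.
spectrum = [1.0, 2.0, 3.0, 4.0, 5.0]
out = conv1d(spectrum, [1.0, 0.0, -1.0])   # [-2.0, -2.0, -2.0]
```

Because each layer only holds small 1D kernels, parameter counts stay tiny, which is consistent with the 4,563-parameter model size reported above.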
arXiv Detail & Related papers (2023-10-24T21:57:59Z) - Patch-Level Contrasting without Patch Correspondence for Accurate and Dense Contrastive Representation Learning [79.43940012723539]
ADCLR is a self-supervised learning framework for learning accurate and dense vision representation.
Our approach achieves new state-of-the-art performance for contrastive methods.
arXiv Detail & Related papers (2023-06-23T07:38:09Z) - Ultra-low Power Deep Learning-based Monocular Relative Localization Onboard Nano-quadrotors [64.68349896377629]
This work presents a novel autonomous end-to-end system that addresses the monocular relative localization, through deep neural networks (DNNs), of two peer nano-drones.
To cope with the ultra-constrained nano-drone platform, we propose a vertically-integrated framework, including dataset augmentation, quantization, and system optimizations.
Experimental results show that our DNN can precisely localize a 10 cm-sized target nano-drone, using only low-resolution monochrome images, at up to a 2 m distance.
arXiv Detail & Related papers (2023-03-03T14:14:08Z) - EdgeYOLO: An Edge-Real-Time Object Detector [69.41688769991482]
This paper proposes an efficient, low-complexity and anchor-free object detector based on the state-of-the-art YOLO framework.
We develop an enhanced data augmentation method to effectively suppress overfitting during training, and design a hybrid random loss function to improve the detection accuracy of small objects.
Our baseline model reaches 50.6% AP50:95 and 69.8% AP50 on the MS COCO 2017 dataset, and 26.4% AP50:95 and 44.8% AP50 on the VisDrone 2019-DET dataset, and it meets real-time requirements (FPS >= 30) on an Nvidia edge-computing device.
arXiv Detail & Related papers (2023-02-15T06:05:14Z) - Integrating LEO Satellites and Multi-UAV Reinforcement Learning for Hybrid FSO/RF Non-Terrestrial Networks [55.776497048509185]
A mega-constellation of low-altitude earth orbit satellites (SATs) and burgeoning unmanned aerial vehicles (UAVs) are promising enablers for high-speed and long-distance communications in beyond fifth-generation (5G) systems.
We investigate the problem of forwarding packets between two faraway ground terminals through SAT and UAV relays using either millimeter-wave (mmWave) radio-frequency (RF) or free-space optical (FSO) link.
arXiv Detail & Related papers (2020-10-20T09:07:10Z) - Integrating LEO Satellite and UAV Relaying via Reinforcement Learning for Non-Terrestrial Networks [51.05735925326235]
A mega-constellation of low-earth orbit (LEO) satellites has the potential to enable long-range communication with low latency.
We study the problem of forwarding packets between two faraway ground terminals, through an LEO satellite selected from an orbiting constellation.
To maximize the end-to-end data rate, the satellite association and the high-altitude platform (HAP) location should be optimized.
We tackle this problem using deep reinforcement learning (DRL) with a novel action dimension reduction technique.
arXiv Detail & Related papers (2020-05-26T05:39:27Z)