Taking a PEEK into YOLOv5 for Satellite Component Recognition via
Entropy-based Visual Explanations
- URL: http://arxiv.org/abs/2311.01703v2
- Date: Sat, 25 Nov 2023 20:22:24 GMT
- Title: Taking a PEEK into YOLOv5 for Satellite Component Recognition via
Entropy-based Visual Explanations
- Authors: Mackenzie J. Meni, Trupti Mahendrakar, Olivia D. M. Raney, Ryan T.
White, Michael L. Mayo, and Kevin Pilkiewicz
- Abstract summary: This paper contributes to efforts in enabling autonomous swarms of small chaser satellites for target geometry determination.
Our research explores on-orbit use of the You Only Look Once v5 (YOLOv5) object detection model trained to detect satellite components.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The escalating risk of collisions and the accumulation of space debris in Low
Earth Orbit (LEO) have become critical concerns due to the ever-increasing
number of spacecraft. Addressing this crisis, especially in dealing with
non-cooperative and unidentified space debris, is of paramount importance. This
paper contributes to efforts in enabling autonomous swarms of small chaser
satellites for target geometry determination and safe flight trajectory
planning for proximity operations in LEO. Our research explores on-orbit use of
the You Only Look Once v5 (YOLOv5) object detection model trained to detect
satellite components. While this model has shown promise, its inherent lack of
interpretability hinders human understanding, a critical aspect of validating
algorithms for use in safety-critical missions. To analyze the decision
processes, we introduce Probabilistic Explanations for Entropic Knowledge
extraction (PEEK), a method that utilizes information-theoretic analysis of the
latent representations within the hidden layers of the model. Through both
synthetic and hardware-in-the-loop experiments, PEEK illuminates the
decision-making processes of the model, helping to identify its strengths,
limitations, and biases.
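The abstract describes PEEK only at a high level. As a hedged illustration, not the paper's actual formulation, the core information-theoretic quantity it relies on, the Shannon entropy of a hidden layer's activation distribution, can be sketched as follows (the histogram binning is an assumption made here):

```python
import numpy as np

def activation_entropy(feature_map, bins=32):
    """Shannon entropy (in bits) of the activation-value distribution of a
    single feature map. The binning scheme is an assumption for this sketch;
    PEEK's exact formulation may differ."""
    hist, _ = np.histogram(feature_map.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

# A spread-out activation map carries more entropy than a constant one:
rng = np.random.default_rng(0)
uniform_map = rng.uniform(size=(8, 8))
constant_map = np.ones((8, 8))
print(activation_entropy(uniform_map) > activation_entropy(constant_map))  # True
```

Intuitively, layers whose latent activations collapse to low-entropy distributions contribute little discriminative information, which is the kind of signal an entropy-based explanation method can surface.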
Related papers
- Vision-Based Detection of Uncooperative Targets and Components on Small Satellites [6.999319023465766]
Space debris and inactive satellites pose a threat to the safety and integrity of operational spacecraft.
Recent advancements in computer vision models can be used to improve upon existing methods for tracking such uncooperative targets.
This paper introduces an autonomous detection model designed to identify and monitor these objects using learning-based computer vision.
arXiv Detail & Related papers (2024-08-22T02:48:13Z)
- Markers Identification for Relative Pose Estimation of an Uncooperative Target [0.0]
This paper introduces a novel method to detect structural markers on the European Space Agency's (ESA) Environmental Satellite (ENVISAT) for safe de-orbiting.
Advanced image pre-processing techniques, including noise addition and blurring, are employed to improve marker detection accuracy and robustness.
arXiv Detail & Related papers (2024-07-30T03:20:54Z)
- Physics-Informed Real NVP for Satellite Power System Fault Detection [3.3694176886084803]
This paper proposes an Artificial Intelligence (AI)-based fault detection methodology and evaluates its performance on the ADAPT dataset.
Our study focuses on the application of a physics-informed (PI) real-valued non-volume preserving (Real NVP) model for fault detection in space systems.
Results show that our physics-informed approach outperforms existing methods of fault detection, demonstrating its suitability for addressing the challenges of satellite EPS sub-system faults.
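Real NVP models are built from affine coupling layers whose triangular Jacobians make the log-determinant cheap to compute. A minimal toy sketch of one such layer, using stand-in scale/translation functions rather than the paper's learned, physics-informed networks, might look like:

```python
import numpy as np

def affine_coupling_forward(x):
    """One Real NVP affine coupling layer (forward). The first half of x
    passes through unchanged; the second half is affinely transformed
    conditioned on the first, so the Jacobian is triangular and
    log|det J| is just the sum of the log-scales."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    s = np.tanh(x1)   # stand-in log-scale net (a real model uses an MLP)
    t = 0.5 * x1      # stand-in translation net
    y2 = x2 * np.exp(s) + t
    log_det = s.sum(axis=-1)
    return np.concatenate([x1, y2], axis=-1), log_det

def affine_coupling_inverse(y):
    """Exact inverse of the coupling layer above."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s = np.tanh(y1)
    t = 0.5 * y1
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)

x = np.array([0.3, -1.2, 0.7, 2.0])
y, log_det = affine_coupling_forward(x)
print(np.allclose(affine_coupling_inverse(y), x))  # True
```

The exact invertibility is what makes flow-based density estimates usable for fault detection: nominal telemetry gets high likelihood under the learned flow, and anomalies fall in low-likelihood regions.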
arXiv Detail & Related papers (2024-05-27T16:42:51Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for Advanced Object Detection [55.2480439325792]
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (mAP) of approximately 45.7%, a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
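For readers unfamiliar with the metric, mean average precision averages the per-class average precision (AP) over all classes. A toy single-class AP computation, illustrating the metric rather than the paper's evaluation code, can be sketched as:

```python
def average_precision(scored_detections, total_positives):
    """AP via the precision-recall curve: sort detections by confidence,
    walk down the ranking accumulating TP/FP counts, and sum precision at
    each recall step. scored_detections is a list of
    (confidence, is_true_positive) pairs."""
    ranked = sorted(scored_detections, key=lambda d: -d[0])
    tp = fp = 0
    ap = 0.0
    for _, is_tp in ranked:
        if is_tp:
            tp += 1
            precision = tp / (tp + fp)
            ap += precision / total_positives  # each TP adds 1/P recall
        else:
            fp += 1
    return ap

# Three detections against two ground-truth objects: TP, FP, TP in rank order
dets = [(0.9, True), (0.8, False), (0.7, True)]
print(average_precision(dets, total_positives=2))  # 0.8333... (5/6)
```

mAP as reported in detection papers additionally matches predictions to ground truth by an IoU threshold and averages this quantity across classes (and often across thresholds).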
arXiv Detail & Related papers (2023-11-21T19:49:13Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust degrade the performance of any mobile robotic platform that relies on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft Feature Detection [0.0]
Real-time, automated spacecraft feature recognition is needed to pinpoint the locations of collision hazards.
The new algorithm SpaceYOLO fuses the state-of-the-art object detector YOLOv5 with a separate neural network based on human-inspired decision processes.
SpaceYOLO's performance in autonomous spacecraft detection is compared to that of ordinary YOLOv5 in hardware-in-the-loop experiments.
arXiv Detail & Related papers (2023-02-02T02:11:39Z)
- Towards Spatial Equilibrium Object Detection [88.9747319572368]
In this paper, we study the spatial disequilibrium problem of modern object detectors.
We propose to quantify this problem by measuring the detection performance over zones.
This motivates us to design a more generalized measurement, termed Spatial equilibrium Precision.
arXiv Detail & Related papers (2023-01-14T17:33:26Z)
- Space Non-cooperative Object Active Tracking with Deep Reinforcement Learning [1.212848031108815]
We propose an end-to-end active visual tracking method based on DQN algorithm, named as DRLAVT.
It can guide a chaser spacecraft to approach an arbitrary non-cooperative space target relying only on color or RGBD images.
It significantly outperforms a position-based visual servoing baseline that adopts the state-of-the-art 2D monocular tracker SiamRPN.
arXiv Detail & Related papers (2021-12-18T06:12:24Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which recently landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- Batch Exploration with Examples for Scalable Robotic Reinforcement Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state-space guided by a modest number of human provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
arXiv Detail & Related papers (2020-10-22T17:49:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.