SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft
Feature Detection
- URL: http://arxiv.org/abs/2302.00824v1
- Date: Thu, 2 Feb 2023 02:11:39 GMT
- Title: SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft
Feature Detection
- Authors: Trupti Mahendrakar, Ryan T. White, Markus Wilde, Madhur Tiwari
- Abstract summary: Real-time, automated spacecraft feature recognition is needed to pinpoint the locations of collision hazards.
The new algorithm, SpaceYOLO, fuses the state-of-the-art object detector YOLOv5 with a separate neural network based on human-inspired decision processes.
SpaceYOLO's performance in autonomous spacecraft detection is compared to that of ordinary YOLOv5 in hardware-in-the-loop experiments.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid proliferation of non-cooperative spacecraft and space debris in
orbit has precipitated a surging demand for on-orbit servicing and space debris
removal at a scale that only autonomous missions can address, but the
prerequisite autonomous navigation and flightpath planning to safely capture an
unknown, non-cooperative, tumbling space object is an open problem. This
requires algorithms for real-time, automated spacecraft feature recognition to
pinpoint the locations of collision hazards (e.g. solar panels or antennas) and
safe docking features (e.g. satellite bodies or thrusters) so safe, effective
flightpaths can be planned. Prior work in this area reveals that the performance
of computer vision models is highly dependent on the training dataset and its
coverage of scenarios visually similar to the real scenarios that occur in
deployment. Hence, an algorithm may perform poorly under certain lighting
conditions even when the rendezvous maneuver conditions of the chaser relative
to the target spacecraft are the same. This work investigates how humans perform
these tasks through a survey in which aerospace engineering students experienced
with spacecraft shapes and components identify features of the spacecraft
Landsat, Envisat, Anik, and the orbiter Mir. The survey reveals that the most
common pattern in the human detection process was to consider the shape and
texture of the features: antennas, solar panels, thrusters, and satellite
bodies. This work introduces a novel algorithm, SpaceYOLO, which fuses the
state-of-the-art object detector YOLOv5 with a separate neural network based on
these human-inspired decision processes, exploiting shape and texture.
SpaceYOLO's performance in autonomous spacecraft detection is compared to that
of ordinary YOLOv5 in hardware-in-the-loop experiments under different lighting
and chaser maneuver conditions at the ORION Laboratory at Florida Tech.
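The abstract does not include implementation details, but the two-stage idea it describes (YOLOv5 proposals re-scored by a separate shape- and texture-based network) can be illustrated with a minimal sketch. The class list, the `ShapeTextureNet` architecture, and the score-blending rule below are assumptions made for illustration only, not the authors' implementation.

```python
# Illustrative sketch only: YOLOv5 proposals re-scored by a separate
# shape/texture classifier. Class names, thresholds, and the secondary
# network are hypothetical, not the authors' released code.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image

CLASSES = ["antenna", "solar_panel", "thruster", "body"]  # assumed feature classes

class ShapeTextureNet(nn.Module):
    """Small CNN that scores a cropped detection from a grayscale (texture-like) input."""
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def detect(image_path, blend=0.5):
    # Standard YOLOv5 loading via torch.hub; a checkpoint fine-tuned on
    # spacecraft features would be used in practice.
    yolo = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
    secondary = ShapeTextureNet().eval()      # untrained here; shown for structure only

    image = np.array(Image.open(image_path).convert("RGB"))
    detections = yolo(image).xyxy[0]          # (N, 6): x1, y1, x2, y2, conf, cls

    fused = []
    for x1, y1, x2, y2, conf, _ in detections.tolist():
        crop = image[int(y1):int(y2), int(x1):int(x2)].mean(axis=2)  # grayscale crop
        crop = torch.tensor(crop, dtype=torch.float32)[None, None] / 255.0
        probs = F.softmax(secondary(F.interpolate(crop, size=(64, 64))), dim=1)[0]
        # Blend the YOLO confidence with the shape/texture score (weighting is a guess).
        score = blend * conf + (1 - blend) * probs.max().item()
        fused.append(((x1, y1, x2, y2), CLASSES[int(probs.argmax())], score))
    return fused
```

Any trained YOLOv5 checkpoint and any secondary classifier trained on cropped spacecraft features could be substituted; the point is only that the second network sees each detection crop and contributes an independent shape/texture judgment.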
Related papers
- Vision-Based Detection of Uncooperative Targets and Components on Small Satellites [6.999319023465766]
Space debris and inactive satellites pose a threat to the safety and integrity of operational spacecraft.
Recent advancements in computer vision models can be used to improve upon existing methods for tracking such uncooperative targets.
This paper introduces an autonomous detection model designed to identify and monitor these objects using machine learning and computer vision.
arXiv Detail & Related papers (2024-08-22T02:48:13Z) - SatSplatYOLO: 3D Gaussian Splatting-based Virtual Object Detection Ensembles for Satellite Feature Recognition [0.0]
We present an approach for mapping geometries and high-confidence detection of components of unknown, non-cooperative satellites on orbit.
We implement accelerated 3D Gaussian splatting to learn a 3D representation of the satellite, render virtual views of the target, and ensemble the YOLOv5 object detector over the virtual views.
arXiv Detail & Related papers (2024-06-04T17:54:20Z) - Characterizing Satellite Geometry via Accelerated 3D Gaussian Splatting [0.0]
- Characterizing Satellite Geometry via Accelerated 3D Gaussian Splatting [0.0]
We present an approach for mapping of satellites on orbit based on 3D Gaussian Splatting.
We demonstrate model training and 3D rendering performance on a hardware-in-the-loop satellite mock-up.
Our model is shown to be capable of training on-board and rendering higher quality novel views of an unknown satellite nearly 2 orders of magnitude faster than previous NeRF-based algorithms.
arXiv Detail & Related papers (2024-01-05T00:49:56Z) - Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue
with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust degrade the performance of any mobile robotic platform that relies on onboard perception systems.
This paper proposes a novel modular filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z) - On the Generation of a Synthetic Event-Based Vision Dataset for
Navigation and Landing [69.34740063574921]
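The smoke-filtration entry above describes a pipeline based on intensity and spatial information. A generic sketch of that idea, filtering LiDAR returns by intensity and local point density, is shown below; the thresholds and neighbor rule are assumptions, not the paper's pipeline.

```python
# Generic sketch of intensity- and density-based LiDAR point filtering.
# Thresholds and the neighbor rule are assumptions, not the paper's method.
import numpy as np

def filter_points(points, intensities, min_intensity=0.15, radius=0.3, min_neighbors=4):
    """points: (N, 3) xyz array; intensities: (N,) LiDAR return intensities in [0, 1]."""
    keep = intensities >= min_intensity              # smoke returns tend to be weak
    candidates = points[keep]
    # Keep only points with enough close neighbors (smoke is spatially diffuse).
    dists = np.linalg.norm(candidates[:, None, :] - candidates[None, :, :], axis=-1)
    neighbor_counts = (dists < radius).sum(axis=1) - 1
    return candidates[neighbor_counts >= min_neighbors]
```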
- On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z) - You Only Crash Once: Improved Object Detection for Real-Time,
Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous
Planetary Landings [7.201292864036088]
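The event-based dataset entry above converts rendered image sequences into event streams. A common way to approximate this, thresholding log-intensity changes between consecutive frames, is sketched below; the threshold and timestamping are simplifications, not the paper's exact generator.

```python
# Minimal sketch of turning consecutive grayscale frames into event tuples by
# thresholding log-intensity change, a standard approximation of an event camera.
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2, eps=1e-3):
    """frames: list of (H, W) grayscale arrays in [0, 1]; returns (t, x, y, polarity) events."""
    events = []
    log_prev = np.log(frames[0] + eps)
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_cur = np.log(frame + eps)
        delta = log_cur - log_prev
        ys, xs = np.where(np.abs(delta) >= threshold)
        for x, y in zip(xs, ys):
            events.append((t, int(x), int(y), 1 if delta[y, x] > 0 else -1))
        # Update the reference only where events fired, as a real sensor would.
        fired = np.abs(delta) >= threshold
        log_prev[fired] = log_cur[fired]
    return events
```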
- You Only Crash Once: Improved Object Detection for Real-Time, Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous Planetary Landings [7.201292864036088]
A cheap and effective way of detecting hazardous terrain is through the use of visual cameras.
Traditional techniques for visual hazardous terrain detection focus on template matching and registration to pre-built hazard maps.
We introduce You Only Crash Once (YOCO), a deep learning-based visual hazardous terrain detection and classification technique.
arXiv Detail & Related papers (2023-03-08T21:11:51Z) - Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation
around Non-Cooperative Targets [0.0]
This paper discusses how the combination of cameras and machine learning algorithms can achieve the relative navigation task.
The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Networks (R-CNN) and You Only Look Once (YOLOv5), is tested.
The paper discusses the path toward implementing the feature recognition algorithms and integrating them into the spacecraft's Guidance, Navigation, and Control system.
arXiv Detail & Related papers (2023-01-22T04:53:38Z) - Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge
TPU [58.720142291102135]
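The comparison paper above evaluates YOLOv5 against Faster R-CNN. A hedged sketch of that kind of side-by-side test, running both detectors on the same image and timing inference, follows; the model variants and timing method are illustrative, not the paper's protocol.

```python
# Illustrative side-by-side run of YOLOv5 and torchvision's Faster R-CNN.
# Model choices and timing are for demonstration, not the paper's test setup.
import time
import numpy as np
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

def compare_detectors(image_path):
    image = Image.open(image_path).convert("RGB")

    yolo = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
    frcnn = fasterrcnn_resnet50_fpn(pretrained=True).eval()

    t0 = time.perf_counter()
    yolo_out = yolo(np.array(image)).xyxy[0]          # (N, 6) boxes + conf + class
    t1 = time.perf_counter()
    with torch.no_grad():
        frcnn_out = frcnn([to_tensor(image)])[0]      # dict of boxes, labels, scores
    t2 = time.perf_counter()

    print(f"YOLOv5: {len(yolo_out)} boxes in {t1 - t0:.3f}s")
    print(f"Faster R-CNN: {len(frcnn_out['boxes'])} boxes in {t2 - t1:.3f}s")
```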
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper we propose pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z) - Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z) - Towards Robust Monocular Visual Odometry for Flying Robots on Planetary
Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z) - A Multi-UAV System for Exploration and Target Finding in Cluttered and
GPS-Denied Environments [68.31522961125589]
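The odometry entry above estimates scale-drift risk from a principal component analysis of the relative translation information matrix. One plausible reading of that idea is sketched below; the specific risk score is an assumption, not the paper's formula.

```python
# Hedged sketch: inspect the eigen-spectrum (a PCA) of the 3x3 information
# matrix of the relative translation and treat a poorly constrained direction
# as a proxy for scale-drift risk. The risk definition is an assumption.
import numpy as np

def scale_drift_risk(translation_information):
    """translation_information: symmetric 3x3 information (inverse covariance) matrix."""
    eigenvalues = np.linalg.eigvalsh(translation_information)  # ascending order
    weakest, strongest = eigenvalues[0], eigenvalues[-1]
    # A near-zero smallest eigenvalue means one translation direction is barely observed.
    return 1.0 - weakest / max(strongest, 1e-12)

info = np.diag([250.0, 220.0, 3.0])     # example: z-direction poorly constrained
print(f"scale-drift risk: {scale_drift_risk(info):.2f}")  # close to 1.0 => high risk
```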
- A Multi-UAV System for Exploration and Target Finding in Cluttered and GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system improves time cost, the proportion of the search area surveyed, and success rates for search and rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z)