An Image Processing Pipeline for Autonomous Deep-Space Optical
Navigation
- URL: http://arxiv.org/abs/2302.06918v1
- Date: Tue, 14 Feb 2023 09:06:21 GMT
- Title: An Image Processing Pipeline for Autonomous Deep-Space Optical
Navigation
- Authors: Eleonora Andreis, Paolo Panicucci, Francesco Topputo
- Abstract summary: This paper proposes an innovative pipeline for unresolved beacon recognition and line-of-sight extraction from images for autonomous interplanetary navigation.
The developed algorithm exploits the k-vector method for non-stellar object identification and a statistical likelihood test to detect whether any beacon projection is visible in the image.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A new era of space exploration and exploitation is fast approaching. A
multitude of spacecraft will fly in the coming decades under the propulsive
momentum of the new space economy. Yet, the flourishing proliferation of
deep-space assets will make it unsustainable to pilot them from the ground
with standard radiometric tracking. Adopting autonomous navigation
alternatives is crucial to overcoming these limitations. Among these, optical
navigation is an affordable and fully ground-independent approach: probes can
triangulate their position by acquiring the lines of sight to visible beacons,
e.g., planets or asteroids, in deep space. Doing so requires efficient and
robust image processing algorithms that feed information to the navigation
filter. This paper proposes an innovative pipeline for unresolved beacon
recognition and line-of-sight extraction from images for autonomous
interplanetary navigation. The developed algorithm exploits the k-vector
method for non-stellar object identification and a statistical likelihood
test to detect whether any beacon projection is visible in the image.
Statistical results show that the accuracy in detecting the planet position
projection is independent of the spacecraft position uncertainty, whereas the
planet detection success rate exceeds 95% when the spacecraft position is
known with a 3σ accuracy of up to 10^5 km.
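To make the triangulation idea in the abstract concrete: each observed unit line of sight constrains the spacecraft to lie on the line through the corresponding beacon, and stacking the orthogonal-projection constraints gives a small linear least-squares problem. The sketch below is a minimal illustration of this geometry, not the paper's implementation; the function name, the numpy setup, and the toy beacon values are assumptions.

```python
import numpy as np

def triangulate_position(beacon_positions, los_unit_vectors):
    """Least-squares position fix from beacon lines of sight (illustrative).

    Each unit vector u_i points from the spacecraft toward a beacon at
    known inertial position p_i, so the spacecraft lies on that line and
    (I - u_i u_i^T)(p_i - r) = 0. Summing the projectors gives A r = b.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in zip(beacon_positions, los_unit_vectors):
        u = u / np.linalg.norm(u)           # guard against non-unit input
        P = np.eye(3) - np.outer(u, u)      # projector onto plane normal to u
        A += P
        b += P @ p
    return np.linalg.solve(A, b)            # needs >= 2 non-parallel LOS

# Toy check with two fictitious beacons (all positions in km):
r_true = np.array([1.0e7, -2.0e7, 5.0e6])
beacons = [np.array([1.5e8, 0.0, 0.0]), np.array([-2.3e8, 1.1e8, 4.0e7])]
los = [(p - r_true) / np.linalg.norm(p - r_true) for p in beacons]
assert np.allclose(triangulate_position(beacons, los), r_true)
```

With two or more non-parallel lines of sight the summed projector matrix is positive definite, so the solve is well posed.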
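The abstract also names the k-vector method, which in the star-identification literature (Mortari) is a preprocessing technique for answering catalog range queries with index arithmetic instead of a per-query binary search. The paper's exact formulation is not reproduced here, so the following class is a simplified, hedged sketch of the general technique; the `KVector` name, tolerances, and indexing details are assumptions.

```python
import numpy as np

class KVector:
    """Minimal k-vector style range search (after Mortari; illustrative).

    A line z(i) = m*i + q is fit just under and over the sorted catalog
    values, and k[i] stores how many values lie below z(i). A range query
    then needs only index arithmetic plus a local filter.
    """

    def __init__(self, values):
        self.y = np.sort(np.asarray(values, dtype=float))  # sorted attribute
        n = self.y.size                                    # assumes n >= 2
        eps = 1e-9 * max(1.0, self.y[-1] - self.y[0])
        self.m = (self.y[-1] - self.y[0] + 2.0 * eps) / (n - 1)
        self.q = self.y[0] - eps - self.m        # so z(1) < y[0], z(n) > y[-1]
        z = self.m * np.arange(1, n + 1) + self.q
        self.k = np.searchsorted(self.y, z, side="right")  # counts y <= z(i)

    def range(self, lo, hi):
        """Indices into the sorted values with lo <= value <= hi."""
        n = self.y.size
        i_lo = int(np.floor((lo - self.q) / self.m)) - 1   # z(i_lo) < lo
        i_hi = int(np.ceil((hi - self.q) / self.m))        # z(i_hi) >= hi
        start = 0 if i_lo < 1 else int(self.k[min(i_lo, n) - 1])
        end = n if i_hi > n else (0 if i_hi < 1 else int(self.k[i_hi - 1]))
        cand = np.arange(start, end)             # small candidate span
        return cand[(self.y[cand] >= lo) & (self.y[cand] <= hi)]

# Example: all catalog entries with attribute value in [0.25, 0.26]:
kv = KVector(np.random.default_rng(0).uniform(0.0, 1.0, 10_000))
hits = kv.range(0.25, 0.26)
assert np.all((kv.y[hits] >= 0.25) & (kv.y[hits] <= 0.26))
```

In star or beacon identification, the indexed attribute is typically an inter-object angular distance, so a measured angle plus its uncertainty maps directly to a `[lo, hi]` query.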
Related papers
- Angle Robustness Unmanned Aerial Vehicle Navigation in GNSS-Denied Scenarios
We present a novel angle navigation paradigm to deal with flight deviation in point-to-point navigation tasks.
We also propose a model that includes an Adaptive Feature Enhance Module, a Cross-knowledge Attention-guided Module, and a Robust Task-oriented Head Module.
arXiv Detail & Related papers (2024-02-04T08:41:20Z)
- An Autonomous Vision-Based Algorithm for Interplanetary Navigation
The vision-based navigation algorithm is built by combining an orbit determination method with an image processing pipeline.
A novel analytical measurement model provides a first-order approximation of the light-aberration and light-time effects (a sketch of such corrections appears after this list).
Algorithm performance is tested on a high-fidelity Earth-Mars interplanetary transfer.
arXiv Detail & Related papers (2023-09-18T08:54:29Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems
Smoke and dust degrade the performance of any mobile robotic platform that relies on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft Feature Detection
Real-time, automated spacecraft feature recognition is needed to pinpoint the locations of collision hazards.
The new SpaceYOLO algorithm fuses the state-of-the-art YOLOv5 object detector with a separate neural network based on human-inspired decision processes.
SpaceYOLO's performance in autonomous spacecraft detection is compared to that of ordinary YOLOv5 in hardware-in-the-loop experiments.
arXiv Detail & Related papers (2023-02-02T02:11:39Z)
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU
This paper proposes pose estimation software exploiting neural network architectures.
We show how low-power machine learning accelerators could enable the use of artificial intelligence in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- Lunar Rover Localization Using Craters as Landmarks
We present an approach to crater-based lunar rover localization, with initial results on crater detection using 3D point cloud data from onboard lidar or stereo cameras, as well as shading cues in monocular onboard imagery.
arXiv Detail & Related papers (2022-03-18T17:38:52Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimating the current risk of scale drift, based on a principal component analysis of the relative translation information matrix (see the eigenvalue sketch after this list).
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- Safe Vessel Navigation Visually Aided by Autonomous Unmanned Aerial Vehicles in Congested Harbors and Waterways
This work is the first attempt to detect and estimate distances to unknown objects from long-range visual data captured with conventional RGB cameras and auxiliary absolute positioning systems (e.g., GPS).
The simulation results illustrate the accuracy and efficacy of the proposed method for visually aided navigation of vessels assisted by UAVs.
arXiv Detail & Related papers (2021-08-09T08:15:17Z)
- Occupancy Anticipation for Efficient Exploration and Navigation
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
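As noted above, here is a rough sketch of first-order light-aberration and light-time corrections, in the spirit of the measurement model mentioned in "An Autonomous Vision-Based Algorithm for Interplanetary Navigation". These are the textbook first-order forms, not that paper's analytical model; the function names and the ephemeris interface are assumptions.

```python
import numpy as np

C_KM_S = 299_792.458  # speed of light in km/s

def apparent_los(u_true, v_obs_km_s):
    """First-order light-aberration correction of a unit line of sight.

    u_true points from the observer toward the beacon; adding v/c and
    renormalizing tilts the direction by ~(v/c) sin(theta), the classic
    first-order aberration angle (valid for |v| << c).
    """
    u = np.asarray(u_true) + np.asarray(v_obs_km_s) / C_KM_S
    return u / np.linalg.norm(u)

def light_time_position(ephemeris, t, r_obs, iters=2):
    """Beacon position at emission epoch t - tau, with tau = range / c.

    `ephemeris(t)` returns the beacon position (km) at epoch t (s); a
    couple of fixed-point iterations suffice at planetary distances.
    """
    p = ephemeris(t)
    for _ in range(iters):
        tau = np.linalg.norm(p - np.asarray(r_obs)) / C_KM_S
        p = ephemeris(t - tau)
    return p
```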
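Likewise, for the scale-drift estimation mentioned in "Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions": one plausible reading of a PCA of the relative translation information matrix is an eigenvalue-ratio degeneracy indicator, sketched below. The summary does not give the paper's actual metric, so this is purely illustrative.

```python
import numpy as np

def scale_drift_risk(translation_information):
    """Degeneracy indicator from a 3x3 translation information matrix.

    An eigen-decomposition (PCA of the symmetric information matrix)
    shows how well each translation direction is constrained; when the
    smallest eigenvalue collapses relative to the largest, translation is
    nearly unobservable along that axis, which in monocular odometry
    manifests as scale drift.
    """
    w, V = np.linalg.eigh(translation_information)  # ascending eigenvalues
    risk = 1.0 - w[0] / w[-1]    # 0 = well conditioned, -> 1 = degenerate
    return risk, V[:, 0]         # risk score and weakest direction
```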