Design of a Visual Pose Estimation Algorithm for Moon Landing
- URL: http://arxiv.org/abs/2502.14942v1
- Date: Thu, 20 Feb 2025 17:37:55 GMT
- Title: Design of a Visual Pose Estimation Algorithm for Moon Landing
- Authors: Atakan Süslü, Betül Rana Kuran, Halil Ersin Söken
- Abstract summary: A terrain absolute navigation method to estimate the spacecraft's position and attitude is proposed. Craters seen by the camera onboard the spacecraft are detected and identified using a crater database known beforehand. The accuracy of the algorithm and the effect of the number of craters used for estimation are inspected by performing simulations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In order to make a pinpoint landing on the Moon, the spacecraft's navigation system must be accurate. To achieve the desired accuracy, navigational drift caused by the inertial sensors must be corrected. One way to correct this drift is to use absolute navigation solutions. In this study, a terrain absolute navigation method to estimate the spacecraft's position and attitude is proposed. This algorithm uses the positions of the craters below the spacecraft for estimation. Craters seen by the camera onboard the spacecraft are detected and identified using a crater database known beforehand. In order to focus on the estimation algorithm, the image processing and crater matching steps are skipped. The accuracy of the algorithm and the effect of the number of craters used for estimation are inspected by performing simulations.
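The abstract leaves out image processing and crater matching and concentrates on the estimation step: given craters whose map coordinates come from the database and whose projections are seen by the onboard camera, recover the spacecraft's position and attitude. The paper's own estimator is not reproduced here; the snippet below is only a minimal geometric sketch of that idea, assuming a calibrated pinhole camera, invented crater coordinates, and OpenCV's generic PnP solver in place of the authors' algorithm.

```python
# Hedged sketch only: not the estimator from the paper. It assumes a calibrated
# pinhole camera, invented crater coordinates, and OpenCV's generic PnP solver
# to recover camera position and attitude from matched 3D-2D crater pairs.
import numpy as np
import cv2

# Hypothetical crater centres in a Moon-fixed map frame (metres); small
# elevation offsets keep the configuration non-planar.
craters_3d = np.array([
    [ 1200.0,   800.0,  15.0],
    [ -950.0,   400.0, -20.0],
    [  300.0,  -700.0,   5.0],
    [ -400.0, -1100.0,  30.0],
    [  100.0,   100.0,   0.0],
    [  900.0,  -300.0, -10.0],
])

# Assumed camera intrinsics (pinhole, no distortion).
K = np.array([[1000.0,    0.0, 512.0],
              [   0.0, 1000.0, 384.0],
              [   0.0,    0.0,   1.0]])
dist = np.zeros(5)

# "True" pose used only to synthesise consistent image measurements:
# the camera looks straight down from 5 km above the crater field.
rvec_true = np.array([np.pi, 0.0, 0.0])   # 180 deg about x: optical axis points down
tvec_true = np.array([0.0, 0.0, 5000.0])
craters_2d, _ = cv2.projectPoints(craters_3d, rvec_true, tvec_true, K, dist)
craters_2d = craters_2d.reshape(-1, 2)

# Estimation step: pose from the (already matched) 3D-2D crater correspondences.
ok, rvec, tvec = cv2.solvePnP(craters_3d, craters_2d, K, dist)
R, _ = cv2.Rodrigues(rvec)                 # camera attitude as a rotation matrix
position = -R.T @ tvec.ravel()             # camera position in the Moon-fixed frame
print("recovered position [m]:", position)  # ~[0, 0, 5000] for this synthetic case
```

In the paper the correspondences come from a known crater database and the resulting pose would typically correct the drift of an inertial navigation solution; the PnP call above stands in for that estimation block only.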
Related papers
- ROLO-SLAM: Rotation-Optimized LiDAR-Only SLAM in Uneven Terrain with Ground Vehicle [49.61982102900982]
A LiDAR-based SLAM method is presented to improve the accuracy of pose estimations for ground vehicles in rough terrains. A global-scale factor graph is established to aid in the reduction of cumulative errors. The results demonstrate that ROLO-SLAM excels in pose estimation of ground vehicles and outperforms existing state-of-the-art LiDAR SLAM frameworks.
arXiv Detail & Related papers (2025-01-04T02:44:27Z) - LuSNAR: A Lunar Segmentation, Navigation and Reconstruction Dataset based on Muti-sensor for Autonomous Exploration [2.3011380360879237]
Environmental perception and navigation algorithms are the foundation for lunar rovers.
Most of the existing lunar datasets are targeted at a single task.
We propose LuSNAR, a multi-task, multi-scene, and multi-label lunar benchmark dataset.
arXiv Detail & Related papers (2024-07-09T02:47:58Z) - A Bionic Data-driven Approach for Long-distance Underwater Navigation with Anomaly Resistance [59.21686775951903]
Various animals exhibit accurate navigation using environment cues.
Inspired by animal navigation, this work proposes a bionic and data-driven approach for long-distance underwater navigation.
The proposed approach uses measured geomagnetic data for the navigation, and requires no GPS systems or geographical maps.
arXiv Detail & Related papers (2024-02-06T13:20:56Z) - Angle Robustness Unmanned Aerial Vehicle Navigation in GNSS-Denied Scenarios [66.05091704671503]
We present a novel angle navigation paradigm to deal with flight deviation in point-to-point navigation tasks.
We also propose a model that includes the Adaptive Feature Enhance Module, Cross-knowledge Attention-guided Module and Robust Task-oriented Head Module.
arXiv Detail & Related papers (2024-02-04T08:41:20Z) - An Autonomous Vision-Based Algorithm for Interplanetary Navigation [0.0]
A vision-based navigation algorithm is built by combining an orbit determination method with an image processing pipeline.
A novel analytical measurement model is developed providing a first-order approximation of the light-aberration and light-time effects.
The algorithm's performance is tested on a high-fidelity Earth-Mars interplanetary transfer.
arXiv Detail & Related papers (2023-09-18T08:54:29Z) - An Image Processing Pipeline for Autonomous Deep-Space Optical Navigation [0.0]
This paper proposes an innovative pipeline for unresolved beacon recognition and line-of-sight extraction from images for autonomous interplanetary navigation.
The developed algorithm exploits the k-vector method for non-stellar object identification and a statistical likelihood test to detect whether any beacon projection is visible in the image.
arXiv Detail & Related papers (2023-02-14T09:06:21Z) - Construction of Object Boundaries for the Autopilot of a Surface Robot from Satellite Images using Computer Vision Methods [101.18253437732933]
A method for detecting water objects on satellite maps is proposed.
An algorithm for calculating the GPS coordinates of the contours is created.
The proposed algorithm allows saving the result in a format suitable for the surface robot autopilot module.
arXiv Detail & Related papers (2022-12-05T12:07:40Z) - Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image [91.29546868637911]
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z) - The Unsupervised Method of Vessel Movement Trajectory Prediction [1.2617078020344619]
This article presents an unsupervised method of ship movement trajectory prediction.
It represents the data in a three-dimensional space consisting of the time difference between points, the scaled error distance between a tested point and its predicted forward and backward locations, and the space-time angle.
Unlike most statistical learning or deep learning methods, the proposed clustering-based trajectory reconstruction method does not require computationally expensive model training.
arXiv Detail & Related papers (2020-07-27T17:45:21Z) - Lunar Terrain Relative Navigation Using a Convolutional Neural Network for Visual Crater Detection [39.20073801639923]
This paper presents a system that uses a convolutional neural network (CNN) and image processing methods to track the location of a simulated spacecraft.
The CNN, called LunaNet, visually detects craters in the simulated camera frame, and those detections are matched to known lunar craters in the region of the current estimated spacecraft position (a toy sketch of this kind of database matching follows this list).
arXiv Detail & Related papers (2020-07-15T14:19:27Z) - Refined Plane Segmentation for Cuboid-Shaped Objects by Leveraging Edge Detection [63.942632088208505]
We propose a post-processing algorithm to align the segmented plane masks with edges detected in the image.
This allows us to increase the accuracy of state-of-the-art approaches, while limiting ourselves to cuboid-shaped objects.
arXiv Detail & Related papers (2020-03-28T18:51:43Z)
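As noted in the LunaNet entry above, matching detected craters to a catalogue is often implemented as a gated nearest-neighbour search around the current position estimate. The sketch below is a toy, hedged illustration of that idea with invented coordinates, radii, and thresholds; it does not reproduce LunaNet's (or this paper's) actual matching logic.

```python
# Toy sketch of crater-to-catalogue matching (not LunaNet's actual logic):
# detections are associated with catalogued craters near the current position
# estimate via a gated nearest-neighbour search. All numbers are invented.
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical catalogue: crater centres (metres, local map frame) and radii.
catalog_xy = np.array([[120.0, 40.0], [-80.0, 210.0], [300.0, -150.0], [10.0, 5.0]])
catalog_r = np.array([25.0, 60.0, 40.0, 15.0])

# Hypothetical detections, projected into the same frame using the current
# (drifting) position estimate, plus their estimated radii.
detections_xy = np.array([[118.0, 44.0], [295.0, -148.0], [500.0, 500.0]])
detections_r = np.array([23.0, 41.0, 30.0])

tree = cKDTree(catalog_xy)
dist, idx = tree.query(detections_xy)  # nearest catalogue crater per detection

# Accept a match only if both centre distance and radius agree within gates.
CENTRE_GATE_M, RADIUS_GATE_M = 20.0, 10.0
for d, (dc, i) in enumerate(zip(dist, idx)):
    if dc < CENTRE_GATE_M and abs(detections_r[d] - catalog_r[i]) < RADIUS_GATE_M:
        print(f"detection {d} -> catalogue crater {i} (centre error {dc:.1f} m)")
    else:
        print(f"detection {d}: no reliable match")
```

Tighter gates reduce false matches at the cost of discarding detections when the position estimate has drifted; in practice the gate sizes would be tied to the current navigation uncertainty.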
This list is automatically generated from the titles and abstracts of the papers on this site.