CNN-based local features for navigation near an asteroid
- URL: http://arxiv.org/abs/2309.11156v2
- Date: Mon, 26 Feb 2024 13:17:09 GMT
- Title: CNN-based local features for navigation near an asteroid
- Authors: Olli Knuuttila, Antti Kestilä, Esa Kallio
- Abstract summary: This article addresses the challenge of vision-based proximity navigation in asteroid exploration missions and on-orbit servicing.
Traditional feature extraction methods struggle with the significant appearance variations of asteroids due to limited scattered light.
We propose a lightweight feature extractor specifically tailored for asteroid proximity navigation, designed to be robust to illumination changes and affine transformations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article addresses the challenge of vision-based proximity navigation in
asteroid exploration missions and on-orbit servicing. Traditional feature
extraction methods struggle with the significant appearance variations of
asteroids due to limited scattered light. To overcome this, we propose a
lightweight feature extractor specifically tailored for asteroid proximity
navigation, designed to be robust to illumination changes and affine
transformations. We compare and evaluate state-of-the-art feature extraction
networks and three lightweight network architectures in the asteroid context.
Our proposed feature extractors and their evaluation leverage both synthetic
images and real-world data from missions such as NEAR Shoemaker, Hayabusa,
Rosetta, and OSIRIS-REx. Our contributions include a trained feature extractor,
incremental improvements over existing methods, and a pipeline for training
domain-specific feature extractors. Experimental results demonstrate the
effectiveness of our approach in achieving accurate navigation and
localization. This work aims to advance the field of asteroid navigation and
provides insights for future research in this domain.
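The illumination robustness the abstract targets can be illustrated with a classical baseline: a patch descriptor that is invariant to affine brightness changes (gain and bias) by construction. This is a minimal sketch of the invariance property the paper trains its CNN extractor to achieve, not the proposed network itself; the function name and patch size are illustrative.

```python
import numpy as np

def extract_descriptor(image, x, y, patch=16):
    """Cut a patch around (x, y) and normalize it so the descriptor is
    invariant to affine illumination changes (gain and bias)."""
    h = patch // 2
    p = image[y - h:y + h, x - h:x + h].astype(np.float64)
    p = p - p.mean()                 # remove brightness offset (bias)
    n = np.linalg.norm(p)
    if n < 1e-12:                    # flat patch: no texture to describe
        return np.zeros(patch * patch)
    return (p / n).ravel()           # unit norm removes contrast scaling (gain)

# A gain/bias change of the image leaves the descriptor unchanged:
img = np.random.default_rng(0).random((64, 64))
d1 = extract_descriptor(img, 32, 32)
d2 = extract_descriptor(2.5 * img + 10.0, 32, 32)   # brighter, higher contrast
print(np.allclose(d1, d2))  # True
```

A learned CNN descriptor generalizes this idea: instead of a fixed normalization, the network is trained so that descriptors of the same surface point match across the extreme lighting variations seen near an asteroid.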
Related papers
- MUFASA: Multi-View Fusion and Adaptation Network with Spatial Awareness for Radar Object Detection [3.1212590312985986]
The sparsity of radar point clouds poses challenges in achieving precise object detection.
This paper introduces a comprehensive feature extraction method for radar point clouds.
We achieve state-of-the-art results among radar-based methods on the VoD dataset with an mAP of 50.24%.
arXiv Detail & Related papers (2024-08-01T13:52:18Z)
- Evaluation of Resource-Efficient Crater Detectors on Embedded Systems [40.72690694162952]
Real-time analysis of Martian craters is crucial for mission-critical operations.
We benchmark several YOLO networks using a Mars craters dataset.
We optimize this process for a new wave of cost-effective, commercial-off-the-shelf-based smaller satellites.
arXiv Detail & Related papers (2024-05-27T08:45:57Z)
- Federated Multi-Agent Mapping for Planetary Exploration [0.4143603294943439]
We propose an approach to jointly train a centralized map model across agents without the need to share raw data.
Our approach leverages implicit neural mapping to generate parsimonious and adaptable representations.
We demonstrate the efficacy of our proposed federated mapping approach using Martian terrains and glacier datasets.
arXiv Detail & Related papers (2024-04-02T20:32:32Z)
- CATSNet: a context-aware network for Height Estimation in a Forested Area based on Pol-TomoSAR data [4.9793121278328]
This work defines a context-aware deep learning-based solution named CATSNet.
A convolutional neural network is considered to leverage patch-based information and extract features from a neighborhood rather than focus on a single pixel.
Experimental results show striking advantages in both performance and capability: leveraging context information within multi-baseline (MB) TomoSAR data across different polarimetric modalities surpasses existing techniques.
arXiv Detail & Related papers (2024-03-29T16:27:40Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust degrade the performance of mobile robotic platforms, which rely on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- An Image Processing Pipeline for Autonomous Deep-Space Optical Navigation [0.0]
This paper proposes an innovative pipeline for unresolved beacon recognition and line-of-sight extraction from images for autonomous interplanetary navigation.
The developed algorithm exploits the k-vector method for non-stellar object identification and a statistical likelihood test to detect whether any beacon projection is visible in the image.
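The k-vector method mentioned here is Mortari's range-search technique: a sorted table plus a precomputed index lets a query jump straight to a small candidate slice in roughly constant time, instead of scanning or bisecting. The sketch below is illustrative rather than the paper's implementation; the widened index window and final trim are defensive assumptions for boundary cases.

```python
import numpy as np

def build_kvector(values):
    """Sort the values and build a k-vector index: kvec[i-1] counts the
    sorted values lying at or below the line z(i) = m*i + q."""
    s = np.sort(np.asarray(values, dtype=float))
    n = len(s)
    eps = 1e-9
    m = (s[-1] - s[0] + 2 * eps) / (n - 1)   # line slope
    q = s[0] - eps - m                       # intercept chosen so z(1) < min(s)
    line = m * np.arange(1, n + 1) + q
    kvec = np.searchsorted(s, line, side='right')
    return s, kvec, m, q

def range_search(s, kvec, m, q, lo, hi):
    """Return all values in [lo, hi]: invert the line to jump straight to a
    small candidate slice, then trim the few extras at the edges."""
    n = len(s)
    jb = int(np.clip(np.floor((lo - q) / m) - 1, 1, n))  # widened by one
    jt = int(np.clip(np.ceil((hi - q) / m) + 1, 1, n))   # widened by one
    cand = s[kvec[jb - 1]:kvec[jt - 1]]
    return cand[(cand >= lo) & (cand <= hi)]

rng = np.random.default_rng(1)
vals = rng.random(1000)
s, kvec, m, q = build_kvector(vals)
hits = range_search(s, kvec, m, q, 0.25, 0.30)
print(np.array_equal(hits, np.sort(vals[(vals >= 0.25) & (vals <= 0.30)])))  # True
```

In star identification the searched quantity is typically an inter-star angle or magnitude, and the same index answers many queries against one static catalog, which is where the precomputation pays off.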
arXiv Detail & Related papers (2023-02-14T09:06:21Z)
- Rethinking Drone-Based Search and Rescue with Aerial Person Detection [79.76669658740902]
The visual inspection of aerial drone footage is an integral part of land search and rescue (SAR) operations today.
We propose a novel deep learning algorithm to automate this aerial person detection (APD) task.
We present the novel Aerial Inspection RetinaNet (AIR) algorithm as the combination of these contributions.
arXiv Detail & Related papers (2021-11-17T21:48:31Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which recently landed on Mars, marks the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
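One way to read "principal component analysis of the relative translation information matrix": eigen-decompose the 3x3 information matrix and treat a small minimum eigenvalue as a translation direction the measurements barely constrain, along which scale can drift. The sketch below is an interpretation of that idea, not the paper's exact risk measure; the risk-score definition is an assumption.

```python
import numpy as np

def scale_drift_risk(info_matrix):
    """Eigen-decompose a 3x3 relative-translation information matrix
    (inverse covariance). A small minimum eigenvalue means translation is
    poorly constrained along that eigenvector; report the eigenvalue-ratio
    deficit as a risk score in [0, 1] (1 = one direction unobserved)."""
    w, v = np.linalg.eigh(info_matrix)   # eigenvalues in ascending order
    risk = 1.0 - w[0] / w[-1]
    weak_direction = v[:, 0]             # least-constrained axis
    return risk, weak_direction

# Well-constrained case: strong, isotropic information.
risk_iso, _ = scale_drift_risk(np.diag([100.0, 100.0, 100.0]))
# Degenerate case: almost no information along z (e.g. motion along the
# optical axis with little parallax), so scale/depth can drift.
risk_deg, axis = scale_drift_risk(np.diag([100.0, 100.0, 1e-3]))
print(round(risk_iso, 3), round(risk_deg, 3))  # 0.0 1.0
```

An odometry front end could use such a score to trigger extra keyframes or down-weight scale when the risk approaches one.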
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- SOON: Scenario Oriented Object Navigation with Graph-based Exploration [102.74649829684617]
The ability to navigate like a human towards a language-guided target from anywhere in a 3D embodied environment is one of the 'holy grail' goals of intelligent robots.
Most visual navigation benchmarks focus on navigating toward a target from a fixed starting point, guided by an elaborate set of step-by-step instructions.
This deviates from real-world problems in which a human only describes what the object and its surroundings look like and asks the robot to start navigating from anywhere.
arXiv Detail & Related papers (2021-03-31T15:01:04Z)
- Batch Exploration with Examples for Scalable Robotic Reinforcement Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state space, guided by a modest number of human-provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
arXiv Detail & Related papers (2020-10-22T17:49:25Z)
- Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps, our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.