Vision-Language Model for Accurate Crater Detection
- URL: http://arxiv.org/abs/2601.07795v1
- Date: Mon, 12 Jan 2026 18:08:17 GMT
- Title: Vision-Language Model for Accurate Crater Detection
- Authors: Patrick Bauer, Marius Schwinning, Florian Renk, Andreas Weinmann, Hichem Snoussi
- Abstract summary: The European Space Agency (ESA) has a profound interest in reliable crater detection, since craters pose a risk to safe lunar landings. The task is non-trivial due to the vast number of craters of various sizes and shapes, as well as challenging conditions such as varying illumination and rugged terrain. We propose a deep-learning crater detection algorithm based on the OWLv2 model, which has proven highly effective in various computer vision tasks.
- Score: 2.6038033465934083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The European Space Agency (ESA), driven by its ambitions for planned lunar missions with the Argonaut lander, has a profound interest in reliable crater detection, since craters pose a risk to safe lunar landings. This task is usually addressed with automated crater detection algorithms (CDAs) based on deep learning techniques. It is non-trivial due to the vast number of craters of various sizes and shapes, as well as challenging conditions such as varying illumination and rugged terrain. We therefore propose a deep-learning CDA based on the OWLv2 model, which is built on a Vision Transformer and has proven highly effective in various computer vision tasks. For fine-tuning, we use a manually labeled dataset from the IMPACT project that provides crater annotations on high-resolution Lunar Reconnaissance Orbiter Camera Calibrated Data Record images. We insert trainable parameters using a parameter-efficient fine-tuning strategy with Low-Rank Adaptation (LoRA), and optimize a combined loss function consisting of Complete Intersection over Union (CIoU) for localization and a contrastive loss for classification. We achieve satisfactory visual results, along with a maximum recall of 94.0% and a maximum precision of 73.1% on a test dataset from IMPACT. Our method achieves reliable crater detection across challenging lunar imaging conditions, paving the way for robust crater analysis in future lunar exploration.
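The localization term of the combined loss mentioned in the abstract can be sketched as follows. This is a generic implementation of the standard CIoU formulation (IoU penalty plus normalized center distance plus an aspect-ratio consistency term), not the authors' code; the `(x1, y1, x2, y2)` box convention is an assumption for illustration.

```python
import math

def ciou_loss(box_a, box_b):
    """Complete IoU loss between two axis-aligned boxes given as
    (x1, y1, x2, y2). A minimal sketch of the localization term."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection over Union
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)

    # Squared distance between box centers
    cx_a, cy_a = (ax1 + ax2) / 2, (ay1 + ay2) / 2
    cx_b, cy_b = (bx1 + bx2) / 2, (by1 + by2) / 2
    center_dist2 = (cx_a - cx_b) ** 2 + (cy_a - cy_b) ** 2

    # Squared diagonal of the smallest enclosing box (normalizer)
    ex1, ey1 = min(ax1, bx1), min(ay1, by1)
    ex2, ey2 = max(ax2, bx2), max(ay2, by2)
    diag2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2

    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (
        math.atan((ax2 - ax1) / (ay2 - ay1))
        - math.atan((bx2 - bx1) / (by2 - by1))
    ) ** 2
    alpha = v / (1 - iou + v + 1e-9)

    return 1 - iou + center_dist2 / diag2 + alpha * v

# Identical boxes give zero loss; disjoint boxes are penalized beyond 1.
print(ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # 0.0
print(ciou_loss((0, 0, 2, 2), (3, 3, 5, 5)))
```

In the paper this localization term is paired with a contrastive classification loss; how the two are weighted is not stated in the abstract.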
Related papers
- SCAFusion: A Multimodal 3D Detection Framework for Small Object Detection in Lunar Surface Exploration [18.857802421595235]
This paper presents SCAFusion, a multimodal 3D object detection model tailored for lunar robotic missions. With a negligible increase in parameters, our model achieves 69.7% mAP and 72.1% NDS on the nuScenes validation set. In simulated lunar environments built on Isaac Sim, SCAFusion achieves 90.93% mAP, outperforming the baseline by 11.5%.
arXiv Detail & Related papers (2025-12-27T07:08:03Z) - AI-Enabled Crater-Based Navigation for Lunar Mapping [12.60100558410094]
Crater-Based Navigation (CBN) uses the ubiquitous impact craters of the Moon observed on images as natural landmarks to determine the six-degrees-of-freedom pose of a spacecraft. STELLA is the first end-to-end CBN pipeline for long-duration lunar mapping. To rigorously test STELLA, we introduce CRESENT-365, the first public dataset that emulates a year-long lunar mapping mission.
arXiv Detail & Related papers (2025-09-25T05:09:41Z) - HazyDet: Open-Source Benchmark for Drone-View Object Detection with Depth-Cues in Hazy Scenes [54.24350833692194]
HazyDet is the first large-scale benchmark specifically designed for drone-view object detection in hazy conditions. We propose the Depth-Conditioned Detector (DeCoDet) to address the severe visual degradation induced by haze. HazyDet provides a challenging and realistic testbed for advancing detection algorithms.
arXiv Detail & Related papers (2024-09-30T00:11:40Z) - Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for Advanced Object Detection [55.2480439325792]
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (mAP) of approximately 45.7%, a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
arXiv Detail & Related papers (2023-11-21T19:49:13Z) - Global Context Aggregation Network for Lightweight Saliency Detection of Surface Defects [70.48554424894728]
We develop a Global Context Aggregation Network (GCANet) for lightweight saliency detection of surface defects on the encoder-decoder structure.
First, we introduce a novel transformer encoder on the top layer of the lightweight backbone, which captures global context information through a novel Depth-wise Self-Attention (DSA) module.
The experimental results on three public defect datasets demonstrate that the proposed network achieves a better trade-off between accuracy and running efficiency compared with 17 other state-of-the-art methods.
arXiv Detail & Related papers (2023-09-22T06:19:11Z) - Deep learning universal crater detection using Segment Anything Model (SAM) [6.729108277517129]
Craters are amongst the most important morphological features in planetary exploration.
Machine learning (ML) and computer vision have been successfully applied for both detecting craters and estimating their size.
We present a universal crater detection scheme that is based on the recently proposed Segment Anything Model (SAM) from Meta AI.
arXiv Detail & Related papers (2023-04-16T12:36:37Z) - Autonomous crater detection on asteroids using a fully-convolutional neural network [1.3750624267664155]
This paper demonstrates autonomous crater detection on Ceres using the U-Net, a fully-convolutional neural network.
The U-Net is trained on optical images of the Moon Global Morphology Mosaic based on data collected by the LRO and manual crater catalogues.
The trained model has been fine-tuned using 100, 500 and 1000 additional images of Ceres.
arXiv Detail & Related papers (2022-04-01T14:34:11Z) - The KFIoU Loss for Rotated Object Detection [115.334070064346]
In this paper, we argue that one effective alternative is to devise an approximate loss that can achieve trend-level alignment with the SkewIoU loss. Specifically, we model objects as Gaussian distributions and adopt a Kalman filter to inherently mimic the mechanism of SkewIoU. The resulting loss, called KFIoU, is easier to implement and works better than the exact SkewIoU.
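The Gaussian/Kalman-filter mechanism this entry describes can be sketched numerically: each rotated box is represented as a 2D Gaussian, and the "overlap" comes from the covariance of the product of the two distributions (the Kalman update step). This is an illustrative reconstruction with box centers assumed coincident; `box_to_gaussian` and `kf_iou` are names chosen here, not the paper's API.

```python
import numpy as np

def box_to_gaussian(w, h, theta):
    """Represent a centered rotated box (width, height, angle in radians)
    as the covariance of a 2D Gaussian."""
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return r @ np.diag([(w / 2) ** 2, (h / 2) ** 2]) @ r.T

def kf_iou(sigma1, sigma2):
    """Overlap of two Gaussians via the Kalman-filter product rule
    (box centers assumed coincident here for brevity)."""
    # Covariance of the product Gaussian (Kalman update step)
    sigma = sigma1 @ np.linalg.inv(sigma1 + sigma2) @ sigma2
    vol = lambda s: 4.0 * np.sqrt(np.linalg.det(s))  # ellipse-like "volume"
    v_o = vol(sigma)
    return v_o / (vol(sigma1) + vol(sigma2) - v_o)

# Identical boxes yield the maximum value of 1/3, since the product
# Gaussian is tighter than either input; rotating one box lowers the score.
print(kf_iou(box_to_gaussian(4, 2, 0.0), box_to_gaussian(4, 2, 0.0)))
print(kf_iou(box_to_gaussian(4, 2, 0.0), box_to_gaussian(4, 2, np.pi / 2)))
```

Because the maximum is 1/3 rather than 1, the published loss rescales this quantity; the center-distance term omitted here is handled by a separate loss component in the paper.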
arXiv Detail & Related papers (2022-01-29T10:54:57Z) - SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performance on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z) - Lunar Terrain Relative Navigation Using a Convolutional Neural Network for Visual Crater Detection [39.20073801639923]
This paper presents a system that uses a convolutional neural network (CNN) and image processing methods to track the location of a simulated spacecraft.
The CNN, called LunaNet, visually detects craters in the simulated camera frame and those detections are matched to known lunar craters in the region of the current estimated spacecraft position.
arXiv Detail & Related papers (2020-07-15T14:19:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.