Comparative Evaluation of Traditional and Deep Learning Feature Matching Algorithms using Chandrayaan-2 Lunar Data
- URL: http://arxiv.org/abs/2509.04775v1
- Date: Fri, 05 Sep 2025 03:10:00 GMT
- Title: Comparative Evaluation of Traditional and Deep Learning Feature Matching Algorithms using Chandrayaan-2 Lunar Data
- Authors: R. Makharia, J. G. Singla, Amitabh, N. Dube, H. Sharma
- Abstract summary: Aligning data from diverse lunar sensors is challenging due to differences in resolution, illumination, and sensor distortion. We evaluate five feature matching algorithms using cross-modality image pairs from equatorial and polar regions. A preprocessing pipeline is proposed, including georeferencing, resolution alignment, intensity normalization, and enhancements like adaptive histogram equalization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate image registration is critical for lunar exploration, enabling surface mapping, resource localization, and mission planning. Aligning data from diverse lunar sensors -- optical (e.g., Orbital High Resolution Camera, Narrow and Wide Angle Cameras), hyperspectral (Imaging Infrared Spectrometer), and radar (e.g., Dual-Frequency Synthetic Aperture Radar, Selene/Kaguya mission) -- is challenging due to differences in resolution, illumination, and sensor distortion. We evaluate five feature matching algorithms: SIFT, ASIFT, AKAZE, RIFT2, and SuperGlue (a deep learning-based matcher), using cross-modality image pairs from equatorial and polar regions. A preprocessing pipeline is proposed, including georeferencing, resolution alignment, intensity normalization, and enhancements like adaptive histogram equalization, principal component analysis, and shadow correction. SuperGlue consistently yields the lowest root mean square error and fastest runtimes. Classical methods such as SIFT and AKAZE perform well near the equator but degrade under polar lighting. The results highlight the importance of preprocessing and learning-based approaches for robust lunar image registration across diverse conditions.
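The abstract's evaluation criterion, root mean square error over matched control points after registration, can be sketched in a few lines. The helper names below are illustrative (the paper does not publish code here), and the min-max rescaling shown is only one simple form of the intensity-normalization step it mentions:

```python
import numpy as np

def minmax_normalize(img: np.ndarray) -> np.ndarray:
    """Rescale intensities to [0, 1] -- one simple form of the paper's
    intensity-normalization step; the exact scheme is not specified here."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img, dtype=float)

def registration_rmse(src_pts: np.ndarray, dst_pts: np.ndarray, H: np.ndarray) -> float:
    """RMSE of matched keypoints after warping src_pts by an estimated
    3x3 homography H (points given as (n, 2) arrays of x, y coordinates)."""
    n = src_pts.shape[0]
    homog = np.hstack([src_pts, np.ones((n, 1))])   # to homogeneous coords, (n, 3)
    warped = (H @ homog.T).T
    warped = warped[:, :2] / warped[:, 2:3]         # back to Cartesian
    return float(np.sqrt(np.mean(np.sum((warped - dst_pts) ** 2, axis=1))))

# Identity homography on perfectly matched points gives zero error.
src = np.array([[0.0, 0.0], [10.0, 5.0], [3.0, 7.0]])
print(registration_rmse(src, src, np.eye(3)))  # 0.0
```

A lower value of `registration_rmse` for SuperGlue's matches than for SIFT's, under the same estimated transform, is what the reported ranking amounts to.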
Related papers
- GDROS: A Geometry-Guided Dense Registration Framework for Optical-SAR Images under Large Geometric Transformations [24.22541638346487]
We propose GDROS, a geometry-guided dense registration framework leveraging global cross-modal image interactions. First, we extract cross-modal deep features from optical and SAR images through a CNN-Transformer hybrid feature extraction module. We then implement a least squares regression (LSR) module to geometrically constrain the predicted dense optical flow field.
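The general idea behind an LSR module, fitting a global geometric transform to a predicted flow field by least squares, can be sketched as follows. This is an illustration of the generic technique, not the GDROS implementation, and the function name is hypothetical:

```python
import numpy as np

def fit_affine_to_flow(flow: np.ndarray) -> np.ndarray:
    """Fit a 2x3 affine transform to a dense flow field of shape (H, W, 2)
    by least squares: solve M @ [x, y, 1]^T ~ [x + u, y + v]^T over all pixels."""
    H, W, _ = flow.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)], axis=1)  # (N, 3)
    dst = src[:, :2] + flow.reshape(-1, 2)                            # (N, 2)
    M, *_ = np.linalg.lstsq(src, dst, rcond=None)                     # (3, 2)
    return M.T                                                        # (2, 3)

# A pure-translation flow (u=2, v=-1) recovers the affine [[1, 0, 2], [0, 1, -1]].
flow = np.zeros((4, 5, 2))
flow[..., 0] = 2.0
flow[..., 1] = -1.0
print(np.round(fit_affine_to_flow(flow), 6))
```

Projecting the flow onto such a low-parameter transform is one way a dense prediction can be "geometrically constrained" in the sense the summary describes.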
arXiv Detail & Related papers (2025-11-01T15:40:34Z) - Loc$^2$: Interpretable Cross-View Localization via Depth-Lifted Local Feature Matching [80.57282092735991]
We propose an accurate and interpretable fine-grained cross-view localization method. It estimates the 3 Degrees of Freedom (DoF) pose of a ground-level image by matching its local features with a reference aerial image. Experiments show state-of-the-art accuracy in challenging scenarios such as cross-area testing and unknown orientation.
arXiv Detail & Related papers (2025-09-11T18:52:16Z) - MoonMetaSync: Lunar Image Registration Analysis [1.5371340850225041]
This paper compares scale-invariant (SIFT) and scale-variant (ORB) feature detection methods, alongside our novel feature detector, IntFeat, specifically applied to lunar imagery.
We evaluate these methods using low (128x128) and high-resolution (1024x1024) lunar image patches, providing insights into their performance across scales in challenging extraterrestrial environments.
IntFeat combines high-level features from SIFT and low-level features from ORB into a single vector space for robust lunar image registration.
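One plausible reading of combining SIFT and ORB features "into a single vector space" is normalizing each descriptor and concatenating them. The sketch below is hypothetical and only illustrates that reading; IntFeat's actual fusion scheme may differ:

```python
import numpy as np

def fuse_descriptors(sift_desc: np.ndarray, orb_desc: np.ndarray) -> np.ndarray:
    """Hypothetical fusion in the spirit of IntFeat: L2-normalize a 128-d SIFT
    descriptor and an unpacked 256-bit binary ORB descriptor (32 bytes), then
    concatenate them into one 384-d vector comparable under Euclidean distance."""
    orb_bits = np.unpackbits(orb_desc.astype(np.uint8)).astype(float)  # 32 bytes -> 256 bits
    s = sift_desc / (np.linalg.norm(sift_desc) + 1e-12)
    o = orb_bits / (np.linalg.norm(orb_bits) + 1e-12)
    return np.concatenate([s, o])

fused = fuse_descriptors(np.random.rand(128), np.random.randint(0, 256, 32))
print(fused.shape)  # (384,)
```

Per-part normalization keeps the float SIFT block and the binary ORB block on comparable scales before nearest-neighbor matching.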
arXiv Detail & Related papers (2024-10-14T22:05:48Z) - Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z) - Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z) - Toward Efficient Visual Gyroscopes: Spherical Moments, Harmonics Filtering, and Masking Techniques for Spherical Camera Applications [83.8743080143778]
A visual gyroscope estimates camera rotation through images.
The integration of omnidirectional cameras, offering a larger field of view compared to traditional RGB cameras, has proven to yield more accurate and robust results.
Here, we address these challenges by introducing a novel visual gyroscope, which combines an Efficient Multi-Mask-Filter Rotation Estimator with learning-based optimization.
arXiv Detail & Related papers (2024-04-02T13:19:06Z) - Robust and accurate depth estimation by fusing LiDAR and Stereo [8.85338187686374]
We propose a precise and robust method for fusing LiDAR and stereo cameras.
This method fully combines the advantages of the LiDAR and stereo camera.
We evaluate the proposed pipeline on the KITTI benchmark.
arXiv Detail & Related papers (2022-07-13T11:55:15Z) - A Deep Learning Ensemble Framework for Off-Nadir Geocentric Pose Prediction [0.0]
Current software functions optimally only on near-nadir images, though off-nadir images are often the first sources of information following a natural disaster.
This study proposes a deep learning ensemble framework to predict geocentric pose using 5,923 near-nadir and off-nadir RGB satellite images of cities worldwide.
arXiv Detail & Related papers (2022-05-04T08:33:41Z) - Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization
Using Satellite Image [91.29546868637911]
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z) - An Optimal Transport Perspective on Unpaired Image Super-Resolution [97.24140709634203]
Real-world image super-resolution (SR) tasks often do not have paired datasets, which limits the application of supervised techniques. We investigate optimization problems which arise in such models and find two surprising observations. We prove and empirically show that the learned map is biased, i.e., it does not actually transform the distribution of low-resolution images to high-resolution ones.
arXiv Detail & Related papers (2022-02-02T16:21:20Z) - Nonlinear Intensity Underwater Sonar Image Matching Method Based on Phase Information and Deep Convolution Features [6.759506053568929]
This paper proposes a combined matching method based on phase information and deep convolution features.
It has two outstanding advantages: one is that the deep convolution features could be used to measure the similarity of the local and global positions of the sonar image.
arXiv Detail & Related papers (2021-11-29T02:36:49Z) - Nonlinear Intensity Sonar Image Matching based on Deep Convolution Features [10.068137357857134]
This paper proposes a combined matching method based on phase information and deep convolution features.
It has two outstanding advantages: one is that deep convolution features could be used to measure the similarity of the local and global positions of the sonar image.
arXiv Detail & Related papers (2021-11-17T09:30:43Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes inference more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.