Adapting Stereo Vision From Objects To 3D Lunar Surface Reconstruction with the StereoLunar Dataset
- URL: http://arxiv.org/abs/2510.18172v1
- Date: Mon, 20 Oct 2025 23:50:52 GMT
- Title: Adapting Stereo Vision From Objects To 3D Lunar Surface Reconstruction with the StereoLunar Dataset
- Authors: Clementine Grethen, Simone Gasparini, Geraldine Morin, Jeremy Lebreton, Lucas Marti, Manuel Sanchez-Gestido
- Abstract summary: We introduce LunarStereo, the first open dataset of stereo image pairs of the Moon. It covers diverse altitudes, lighting conditions, and viewing angles around the lunar South Pole. We adapt the MASt3R model to the lunar domain through fine-tuning on LunarStereo.
- Score: 0.12314765641075437
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate 3D reconstruction of lunar surfaces is essential for space exploration. However, existing stereo vision reconstruction methods struggle in this context due to the Moon's lack of texture, difficult lighting variations, and atypical orbital trajectories. State-of-the-art deep learning models, trained on human-scale datasets, have rarely been tested on planetary imagery and cannot be transferred directly to lunar conditions. To address this issue, we introduce LunarStereo, the first open dataset of photorealistic stereo image pairs of the Moon, simulated using ray tracing based on high-resolution topography and reflectance models. It covers diverse altitudes, lighting conditions, and viewing angles around the lunar South Pole, offering physically grounded supervision for 3D reconstruction tasks. Based on this dataset, we adapt the MASt3R model to the lunar domain through fine-tuning on LunarStereo. We validate our approach through extensive qualitative and quantitative experiments on both synthetic and real lunar data, evaluating 3D surface reconstruction and relative pose estimation, demonstrating significant improvements over zero-shot baselines and paving the way for robust cross-scale generalization in extraterrestrial environments.
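The geometric backbone of any rectified stereo reconstruction, lunar or terrestrial, is the disparity-to-depth relation Z = f·B/d. The sketch below illustrates only this relation; the focal length, baseline, and disparity values are made up for illustration and are not taken from the paper or the dataset.

```python
import numpy as np

# Core relation for rectified stereo: depth Z = f * B / d, where f is the
# focal length in pixels, B the camera baseline in meters, and d the
# disparity in pixels. All numeric values below are illustrative.
def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) to metric depth (meters)."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0          # zero disparity -> point at infinity
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

disp = np.array([[4.0, 8.0],
                 [0.0, 16.0]])
depth = depth_from_disparity(disp, focal_px=800.0, baseline_m=0.5)
# f * B = 400, so disparities of 4, 8, 16 px map to depths 100, 50, 25 m.
```

The near-zero disparities typical of low-texture lunar regions are exactly where this inversion becomes ill-conditioned, which is one reason learned matchers such as MASt3R need domain-specific fine-tuning.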
Related papers
- Lunar-G2R: Geometry-to-Reflectance Learning for High-Fidelity Lunar BRDF Estimation [0.11242503819703255]
We propose a geometry-to-reflectance learning framework that predicts spatially varying BRDF parameters directly from a lunar digital elevation model (DEM). Experiments on a geographically held-out region of the Tycho crater show that our approach reduces photometric error by 38% compared to a state-of-the-art baseline.
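The geometry-to-reflectance idea can be shown in miniature: derive surface normals from a DEM and shade them with a reflectance model. The sketch below uses a constant-albedo Lambertian term purely to illustrate the DEM → normals → radiance chain; the paper itself predicts spatially varying BRDF parameters, and the grid size, albedo, and light direction here are illustrative assumptions.

```python
import numpy as np

# Minimal DEM -> normals -> radiance chain. A constant-albedo Lambertian
# model stands in for the learned, spatially varying BRDF of the paper.
def shade_dem(height, light_dir, albedo=0.12, cell=1.0):
    """Render a DEM heightmap [H, W] under a directional light (3-vector)."""
    dz_dy, dz_dx = np.gradient(height, cell)            # terrain slopes
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(height)])
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    # Lambertian cosine law; facets turned away from the light clamp to 0.
    return albedo * np.clip(normals @ l, 0.0, None)

# A uniformly tilted plane lit from directly overhead shades uniformly:
y = np.arange(8)[:, None] * np.ones((1, 8))
img = shade_dem(0.5 * y, light_dir=[0.0, 0.0, 1.0])
```

For a plane with slope 0.5 the cosine term is 1/sqrt(1.25) everywhere, so the rendered image is constant, which makes the sketch easy to sanity-check.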
arXiv Detail & Related papers (2026-01-15T14:39:25Z)
- High-fidelity lunar topographic reconstruction across diverse terrain and illumination environments using deep learning [4.190898032627423]
This study builds upon a previously proposed DL framework by incorporating a more robust scale recovery scheme. It reliably reconstructs topography across lunar features of diverse scales, morphologies, and geological ages. These findings suggest that DL-based approaches have the potential to leverage extensive lunar datasets to support advanced exploration missions.
arXiv Detail & Related papers (2026-01-14T13:21:09Z)
- SAM 3D: 3Dfy Anything in Images [99.1053358868456]
We present SAM 3D, a generative model for visually grounded 3D object reconstruction, predicting geometry, texture, and layout from a single image. We achieve this with a human- and model-in-the-loop pipeline for annotating object shape, texture, and pose. We will release our code and model weights, an online demo, and a new challenging benchmark for in-the-wild 3D object reconstruction.
arXiv Detail & Related papers (2025-11-20T18:31:46Z)
- AI-Enabled Crater-Based Navigation for Lunar Mapping [12.60100558410094]
Crater-Based Navigation (CBN) uses the ubiquitous impact craters of the Moon observed on images as natural landmarks to determine the six degrees of freedom pose of a spacecraft. STELLA is the first end-to-end CBN pipeline for long-duration lunar mapping. To rigorously test STELLA, we introduce CRESENT-365 - the first public dataset that emulates a year-long lunar mapping mission.
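Landmark-based pose recovery of this kind reduces, in its simplest planar form, to aligning matched crater centroids between a catalog frame and an observation frame. The sketch below solves that alignment with the standard Kabsch/Procrustes method; it is a deliberate 2D simplification for illustration, not the STELLA pipeline, and the crater coordinates are invented.

```python
import numpy as np

# Toy landmark alignment: given matched 2D crater centroids in a map frame
# (src) and an observed frame (dst), recover the rigid transform (R, t)
# minimizing ||dst - (src @ R.T + t)|| via the Kabsch/Procrustes method.
def rigid_align_2d(src, dst):
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
craters = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 5.0], [7.0, 3.0]])
observed = craters @ R_true.T + np.array([2.0, -1.0])
R, t = rigid_align_2d(craters, observed)     # recovers R_true and [2, -1]
```

A full 6-DoF CBN system additionally needs crater detection, catalog matching with outlier rejection, and a perspective camera model; this sketch isolates only the final alignment step.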
arXiv Detail & Related papers (2025-09-25T05:09:41Z)
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- Reconstructing Satellites in 3D from Amateur Telescope Images [44.20773507571372]
We propose a novel computational imaging framework that overcomes obstacles by integrating a hybrid image pre-processing pipeline. We validate our approach on both synthetic satellite datasets and on-sky observations of China's Tiangong Space Station and the International Space Station. Our framework enables high-fidelity 3D satellite monitoring from Earth, offering a cost-effective alternative for space situational awareness.
arXiv Detail & Related papers (2024-04-29T03:13:09Z)
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach generates texture colors at the point level for a given geometry using a 3D diffusion model first, which is then transformed into a scene representation in a feed-forward manner.
Experiments in two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis [76.73338151115253]
TriHuman is a novel human-tailored, deformable, and efficient tri-plane representation.
We non-rigidly warp global ray samples into our undeformed tri-plane texture space.
We show how such a tri-plane feature representation can be conditioned on the skeletal motion to account for dynamic appearance and geometry changes.
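The core tri-plane lookup can be sketched compactly: features live on three axis-aligned 2D grids (xy, xz, yz), and a 3D point's feature is assembled from a bilinear sample of each plane. The grid resolution, feature width, and sum-based fusion below are illustrative assumptions, not TriHuman's exact design (which additionally warps samples into an undeformed texture space).

```python
import numpy as np

# Minimal tri-plane feature lookup. Each plane is a [H, W, C] grid; a point
# in [0, 1]^3 is projected onto all three planes and the samples are summed.
def bilinear(plane, u, v):
    """Bilinearly sample plane[H, W, C] at continuous coords (u, v) in [0, 1]."""
    H, W, _ = plane.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[y0, x0] + wx * (1 - wy) * plane[y0, x1]
            + (1 - wx) * wy * plane[y1, x0] + wx * wy * plane[y1, x1])

def triplane_feature(planes, p):
    """planes: dict with 'xy', 'xz', 'yz' grids; p: point in [0, 1]^3."""
    x, y, z = p
    return (bilinear(planes["xy"], x, y)
            + bilinear(planes["xz"], x, z)
            + bilinear(planes["yz"], y, z))

rng = np.random.default_rng(0)
planes = {k: rng.standard_normal((16, 16, 8)) for k in ("xy", "xz", "yz")}
feat = triplane_feature(planes, (0.3, 0.7, 0.5))   # one 8-dim feature vector
```

The appeal of this factorization is memory: three O(N^2) planes replace an O(N^3) voxel grid while still giving every 3D point a distinct feature.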
arXiv Detail & Related papers (2023-12-08T16:40:38Z)
- Learning to Grasp on the Moon from 3D Octree Observations with Deep Reinforcement Learning [0.0]
This work investigates the applicability of deep reinforcement learning for vision-based robotic grasping of objects on the Moon.
A novel simulation environment with procedurally-generated datasets is created to train agents under challenging conditions.
A model-free off-policy actor-critic algorithm is then employed for end-to-end learning of a policy.
arXiv Detail & Related papers (2022-08-01T12:59:03Z)
- Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Each of these three extensions provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
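The radiance-field methods in this list all rest on the same volume-rendering quadrature: along a ray, each sample's opacity is alpha_i = 1 - exp(-sigma_i * delta_i), its weight is the accumulated transmittance times that opacity, and the pixel color is the weighted sum of sample colors. The sketch below implements just this compositing step for one ray; the density and color values are illustrative.

```python
import numpy as np

# NeRF-style compositing along a single ray:
#   alpha_i = 1 - exp(-sigma_i * delta_i)
#   T_i     = prod_{j<i} (1 - alpha_j)          (transmittance)
#   color   = sum_i T_i * alpha_i * c_i
def composite(sigmas, deltas, colors):
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    weights = trans * alphas
    return weights @ colors, weights

sigmas = np.array([0.0, 5.0, 50.0])     # volume densities at 3 ray samples
deltas = np.array([0.1, 0.1, 0.1])      # spacing between samples
colors = np.array([[1.0, 0.0, 0.0],     # per-sample RGB
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
rgb, w = composite(sigmas, deltas, colors)
```

A zero-density sample contributes no weight, and the weights can never sum past 1; those two invariants are what make the formula a proper probabilistic compositing rule.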