Lunar-G2R: Geometry-to-Reflectance Learning for High-Fidelity Lunar BRDF Estimation
- URL: http://arxiv.org/abs/2601.10449v1
- Date: Thu, 15 Jan 2026 14:39:25 GMT
- Title: Lunar-G2R: Geometry-to-Reflectance Learning for High-Fidelity Lunar BRDF Estimation
- Authors: Clementine Grethen, Nicolas Menga, Roland Brochard, Geraldine Morin, Simone Gasparini, Jeremy Lebreton, Manuel Sanchez Gestido
- Abstract summary: We propose a geometry-to-reflectance learning framework that predicts spatially varying BRDF parameters directly from a lunar digital elevation model (DEM). Experiments on a geographically held-out region of the Tycho crater show that our approach reduces photometric error by 38% compared to a state-of-the-art baseline.
- Score: 0.11242503819703255
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the problem of estimating realistic, spatially varying reflectance for complex planetary surfaces such as the lunar regolith, which is critical for high-fidelity rendering and vision-based navigation. Existing lunar rendering pipelines rely on simplified or spatially uniform BRDF models whose parameters are difficult to estimate and fail to capture local reflectance variations, limiting photometric realism. We propose Lunar-G2R, a geometry-to-reflectance learning framework that predicts spatially varying BRDF parameters directly from a lunar digital elevation model (DEM), without requiring multi-view imagery, controlled illumination, or dedicated reflectance-capture hardware at inference time. The method leverages a U-Net trained with differentiable rendering to minimize photometric discrepancies between real orbital images and physically based renderings under known viewing and illumination geometry. Experiments on a geographically held-out region of the Tycho crater show that our approach reduces photometric error by 38% compared to a state-of-the-art baseline, while achieving higher PSNR and SSIM and improved perceptual similarity, capturing fine-scale reflectance variations absent from spatially uniform models. To our knowledge, this is the first method to infer a spatially varying reflectance model directly from terrain geometry.
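The training idea described in the abstract can be sketched in code. The following is a hypothetical, heavily simplified illustration, not the paper's implementation: a tiny CNN stands in for the U-Net, a single-scalar albedo stands in for the full BRDF parameter maps, and a Lambertian shading term stands in for the physically based lunar renderer. All function and class names are invented for this sketch.

```python
# Hypothetical sketch of geometry-to-reflectance training with a
# differentiable renderer (simplified Lambertian stand-in).
import torch
import torch.nn as nn

def dem_to_normals(dem, cell_size=1.0):
    """Surface normals from a DEM of shape (H, W) via central differences."""
    dzdx = (dem[:, 2:] - dem[:, :-2]) / (2 * cell_size)
    dzdy = (dem[2:, :] - dem[:-2, :]) / (2 * cell_size)
    dzdx = nn.functional.pad(dzdx, (1, 1, 0, 0))  # pad width back to W
    dzdy = nn.functional.pad(dzdy, (0, 0, 1, 1))  # pad height back to H
    n = torch.stack([-dzdx, -dzdy, torch.ones_like(dem)], dim=-1)
    return n / n.norm(dim=-1, keepdim=True)

def render_lambertian(albedo, normals, sun_dir):
    """Differentiable shading: I = albedo * max(n . l, 0)."""
    ndotl = (normals @ sun_dir).clamp(min=0.0)
    return albedo * ndotl

class GeomToReflectance(nn.Module):
    """Tiny stand-in for the paper's U-Net: DEM patch -> per-pixel albedo."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, dem):
        return self.net(dem[None, None])[0, 0]

# Toy loop: minimize photometric error between the rendering of the
# predicted reflectance and a synthetic "orbital image" target.
dem = torch.rand(32, 32)
normals = dem_to_normals(dem)
sun_dir = torch.tensor([0.3, 0.2, 0.93])
sun_dir = sun_dir / sun_dir.norm()
target = render_lambertian(torch.full((32, 32), 0.18), normals, sun_dir)

model = GeomToReflectance()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    pred_albedo = model(dem)
    loss = ((render_lambertian(pred_albedo, normals, sun_dir) - target) ** 2).mean()
    loss.backward()
    opt.step()
```

The key property this sketch shares with the paper is that the renderer is differentiable, so the photometric loss back-propagates through the shading model into the network that maps geometry to reflectance; only the viewing/illumination geometry and the real image are needed as supervision.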
Related papers
- MaterialRefGS: Reflective Gaussian Splatting with Multi-view Consistent Material Inference [83.38607296779423]
We show that multi-view consistent material inference with more physically-based environment modeling is key to learning accurate reflections with Gaussian Splatting.
Our method faithfully recovers both illumination and geometry, achieving state-of-the-art rendering quality in novel view synthesis.
arXiv Detail & Related papers (2025-10-13T13:29:20Z) - Reflections Unlock: Geometry-Aware Reflection Disentanglement in 3D Gaussian Splatting for Photorealistic Scenes Rendering [51.223347330075576]
Ref-Unlock is a novel geometry-aware reflection modeling framework based on 3D Gaussian Splatting.
Our approach employs a dual-branch representation with high-order spherical harmonics to capture high-frequency reflective details.
Our method thus offers an efficient and generalizable solution for realistic rendering of reflective scenes.
arXiv Detail & Related papers (2025-07-08T15:45:08Z) - BRDF-NeRF: Neural Radiance Fields with Optical Satellite Images and BRDF Modelling [0.0]
We introduce BRDF-NeRF, which incorporates the physically-based semi-empirical Rahman-Pinty-Verstraete (RPV) BRDF model.
BRDF-NeRF successfully synthesizes novel views from unseen angles and generates high-quality digital surface models.
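For context, the Rahman-Pinty-Verstraete model mentioned above is a semi-empirical three-parameter BRDF widely used for planetary and terrestrial surfaces. A common form (sketched from the general literature; BRDF-NeRF may use a different parameterization) is

  \rho(\theta_s, \theta_v, \phi) = \rho_0 \,\bigl[\cos\theta_s \cos\theta_v (\cos\theta_s + \cos\theta_v)\bigr]^{k-1} F_{HG}(g)\,\bigl[1 + R(G)\bigr],
  \quad F_{HG}(g) = \frac{1 - \Theta^2}{\bigl(1 + 2\Theta\cos g + \Theta^2\bigr)^{3/2}},
  \quad R(G) = \frac{1 - \rho_c}{1 + G},

where \rho_0 scales overall brightness, k controls the bowl- versus bell-shaped angular anisotropy, \Theta sets forward versus backward scattering in the Henyey-Greenstein factor, g is the phase angle, and the R(G) term models the hot-spot brightening near zero phase.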
arXiv Detail & Related papers (2024-09-18T14:28:52Z) - An Atmospheric Correction Integrated LULC Segmentation Model for High-Resolution Satellite Imagery [0.0]
This study employs look-up-table-based radiative transfer simulations to estimate the atmospheric path reflectance and transmittance.
The corrected surface reflectance data were subsequently used in supervised and semi-supervised segmentation models.
arXiv Detail & Related papers (2024-09-09T10:47:39Z) - NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z) - A Neural Height-Map Approach for the Binocular Photometric Stereo Problem [36.404880059833324]
The binocular photometric stereo (PS) framework has the same acquisition speed as single-view PS, yet significantly improves the quality of the estimated geometry.
Our method achieves the state-of-the-art performance on the DiLiGenT-MV dataset adapted to binocular stereo setup as well as a new binocular photometric stereo dataset - LUCES-ST.
arXiv Detail & Related papers (2023-11-10T09:45:53Z) - High-Quality RGB-D Reconstruction via Multi-View Uncalibrated Photometric Stereo and Gradient-SDF [48.29050063823478]
We present a novel multi-view RGB-D based reconstruction method that tackles camera pose, lighting, albedo, and surface normal estimation.
The proposed method formulates the image rendering process using specific physically-based models and optimizes the surface's volumetric quantities on the actual surface.
arXiv Detail & Related papers (2022-10-21T19:09:08Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the network size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.