UAVLight: A Benchmark for Illumination-Robust 3D Reconstruction in Unmanned Aerial Vehicle (UAV) Scenes
- URL: http://arxiv.org/abs/2511.21565v1
- Date: Wed, 26 Nov 2025 16:38:29 GMT
- Title: UAVLight: A Benchmark for Illumination-Robust 3D Reconstruction in Unmanned Aerial Vehicle (UAV) Scenes
- Authors: Kang Du, Xue Liao, Junpeng Xia, Chaozheng Guo, Yi Gu, Yirui Guan, Duotun Wang, Sheng Huang, Zeyu Wang
- Abstract summary: UAVLight is a controlled-yet-real benchmark for illumination-robust 3D reconstruction. Each scene is captured along repeatable, geo-referenced flight paths at multiple fixed times of day. With standardized evaluation protocols across lighting conditions, UAVLight provides a reliable foundation for developing and benchmarking reconstruction methods.
- Score: 17.205790966354705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Illumination inconsistency is a fundamental challenge in multi-view 3D reconstruction. Variations in sunlight direction, cloud cover, and shadows break the constant-lighting assumption underlying both classical multi-view stereo (MVS) and structure from motion (SfM) pipelines and recent neural rendering methods, leading to geometry drift, color inconsistency, and shadow imprinting. This issue is especially critical in UAV-based reconstruction, where long flight durations and outdoor environments make lighting changes unavoidable. However, existing datasets either restrict capture to short time windows, thus lacking meaningful illumination diversity, or span months and seasons, where geometric and semantic changes confound the isolated study of lighting robustness. We introduce UAVLight, a controlled-yet-real benchmark for illumination-robust 3D reconstruction. Each scene is captured along repeatable, geo-referenced flight paths at multiple fixed times of day, producing natural lighting variation under consistent geometry, calibration, and viewpoints. With standardized evaluation protocols across lighting conditions, UAVLight provides a reliable foundation for developing and benchmarking reconstruction methods that are consistent, faithful, and relightable in real outdoor environments.
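The evaluation protocol the abstract describes, rendering one reconstruction against held-out views captured at several fixed times of day, can be illustrated with a minimal sketch. The directory layout, file naming, and PSNR-only metric below are hypothetical stand-ins, not the released UAVLight tooling.

```python
# Minimal sketch of a cross-lighting evaluation loop, assuming one
# reconstruction is rendered against held-out views captured at several
# fixed times of day. Paths and naming are hypothetical, not UAVLight's API.
from pathlib import Path

import numpy as np
from PIL import Image

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two [0, 1] float images."""
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else -10.0 * np.log10(mse)

def load(path: Path) -> np.ndarray:
    return np.asarray(Image.open(path), dtype=np.float32) / 255.0

# Hypothetical layout: renders/<time>/<view>.png vs gt/<time>/<view>.png,
# with identical geo-referenced viewpoints repeated at each capture time.
scores = {}
for time_of_day in ["morning", "noon", "afternoon"]:
    pairs = zip(sorted(Path(f"renders/{time_of_day}").glob("*.png")),
                sorted(Path(f"gt/{time_of_day}").glob("*.png")))
    scores[time_of_day] = np.mean([psnr(load(r), load(g)) for r, g in pairs])

# Per-condition quality plus its spread: a large gap between conditions
# signals that the reconstruction is not illumination-robust.
print(scores, "spread:", max(scores.values()) - min(scores.values()))
```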
Related papers
- Unifying Color and Lightness Correction with View-Adaptive Curve Adjustment for Robust 3D Novel View Synthesis [73.27997579020233]
We propose Luminance-GS++, a 3DGS-based framework for robust NVS under diverse illumination conditions. Our method combines a globally view-adaptive lightness adjustment with a local pixel-wise residual refinement for precise color correction (see the sketch after this entry).
arXiv Detail & Related papers (2026-02-20T16:20:50Z)
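A rough reading of the two-stage correction above: a global, per-view lightness curve followed by a local pixel-wise residual. The gamma-style curve and the tiny convolutional residual head are simplified assumptions for illustration, not the Luminance-GS++ architecture.

```python
# Sketch of a globally view-adaptive curve plus local residual correction.
# The per-view gamma curve and residual CNN are illustrative stand-ins.
import torch
import torch.nn as nn

class ViewAdaptiveCorrection(nn.Module):
    def __init__(self, num_views: int):
        super().__init__()
        # One learnable curve parameter per training view (global lightness).
        self.log_gamma = nn.Parameter(torch.zeros(num_views))
        # Shared pixel-wise refinement predicting a small RGB residual.
        self.residual = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, rendered: torch.Tensor, view_idx: int) -> torch.Tensor:
        # Global adjustment: per-view gamma curve on the rendered image.
        gamma = self.log_gamma[view_idx].exp()
        curved = rendered.clamp(1e-6, 1.0) ** gamma
        # Local adjustment: bounded pixel-wise residual for color correction.
        return (curved + 0.1 * self.residual(curved)).clamp(0.0, 1.0)

model = ViewAdaptiveCorrection(num_views=8)
out = model(torch.rand(1, 3, 64, 64), view_idx=3)  # corrected render
```

Keeping the curve per-view while sharing the residual network is one way to let lighting vary across views without letting each view overfit its own colors.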
- Zero-Shot UAV Navigation in Forests via Relightable 3D Gaussian Splatting [11.31291385822484]
UAV navigation in unstructured outdoor environments using passive monocular vision is hindered by the substantial visual domain gap between simulation and reality. We propose a novel end-to-end reinforcement learning framework designed for effective zero-shot transfer to unstructured outdoor environments. We show that a lightweight quadrotor achieves robust, collision-free navigation in complex forest environments at speeds up to 10 m/s.
arXiv Detail & Related papers (2026-02-06T15:51:03Z)
- SplatBright: Generalizable Low-Light Scene Reconstruction from Sparse Views via Physically-Guided Gaussian Enhancement [26.905118897488077]
SplatBright is the first generalizable 3D Gaussian framework for joint low-light enhancement and reconstruction from sparse sRGB inputs. Our key idea is to integrate physically guided illumination modeling with geometry-appearance decoupling for consistent low-light reconstruction (see the sketch after this entry). Experiments on public and self-collected datasets demonstrate that SplatBright achieves superior novel view synthesis, cross-view consistency, and better generalization to unseen low-light scenes compared with both 2D and 3D methods.
arXiv Detail & Related papers (2025-12-21T09:06:16Z)
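One plausible reading of "physically guided illumination modeling" is a Retinex-style decomposition, where a smooth illumination map is estimated and divided out. The toy below is written under that assumption and is not SplatBright's actual pipeline.

```python
# Toy Retinex-style enhancement: estimate a smooth illumination map and
# divide it out. An illustrative assumption, not SplatBright itself.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_low_light(img: np.ndarray, sigma: float = 15.0) -> np.ndarray:
    """img: HxWx3 float in [0, 1]. Returns a reflectance-like estimate."""
    luminance = img.mean(axis=2, keepdims=True)
    # Heavily smoothed luminance approximates spatially varying illumination.
    illumination = gaussian_filter(luminance, sigma=(sigma, sigma, 0))
    reflectance = img / np.clip(illumination, 1e-3, None)
    return np.clip(reflectance, 0.0, 1.0)

dark = np.random.rand(128, 128, 3) * 0.2  # stand-in for a low-light frame
bright = enhance_low_light(dark)
```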
- Beyond a Single Light: A Large-Scale Aerial Dataset for Urban Scene Reconstruction Under Varying Illumination [27.470486341807316]
We introduce SkyLume, a dataset specifically designed for studying illumination-robust 3D reconstruction in urban scene modeling. We collect data from 10 urban regions, comprising more than 100k high-resolution UAV images. We provide per-scene LiDAR scans and accurate 3D ground truth for assessing depth, surface normals, and reconstruction quality under varying illumination.
arXiv Detail & Related papers (2025-12-16T08:47:56Z)
- Light-X: Generative 4D Video Rendering with Camera and Illumination Control [52.87059646145144]
Light-X is a video generation framework that enables controllable rendering from monocular videos with both viewpoint and illumination control. To address the lack of paired multi-view and multi-illumination videos, we introduce Light-Syn, a degradation-based pipeline with inverse mapping.
arXiv Detail & Related papers (2025-12-04T18:59:57Z)
- Lumos3D: A Single-Forward Framework for Low-Light 3D Scene Restoration [10.184395697154448]
We introduce Lumos3D, a pose-free framework for 3D low-light scene restoration. Built upon a geometry-grounded backbone, Lumos3D reconstructs a normal-light 3D Gaussian representation. Experiments on real-world datasets demonstrate that Lumos3D achieves high-fidelity low-light 3D scene restoration.
arXiv Detail & Related papers (2025-11-12T23:42:03Z)
- See through the Dark: Learning Illumination-affined Representations for Nighttime Occupancy Prediction [20.14637361013267]
LIAR is a novel framework that learns illumination-affined representations. Experiments on both real and synthetic datasets demonstrate the superior performance of LIAR under challenging nighttime scenarios.
arXiv Detail & Related papers (2025-05-27T02:40:49Z)
- IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations [64.07859467542664]
Capturing geometric and material information from images remains a fundamental challenge in computer vision and graphics. Traditional optimization-based methods often require hours of computational time to reconstruct geometry, material properties, and environmental lighting from dense multi-view inputs. We introduce IDArb, a diffusion-based model designed to perform intrinsic decomposition on an arbitrary number of images under varying illuminations (see the toy example after this entry).
arXiv Detail & Related papers (2024-12-16T18:52:56Z)
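The core invariant behind multi-illumination intrinsic decomposition, image = albedo × shading with albedo shared across lights, can be checked on synthetic data. The closed-form chromaticity trick below is purely illustrative; IDArb itself is a learned diffusion model.

```python
# Toy illustration of intrinsic decomposition, I = albedo * shading:
# albedo is shared across illuminations while shading varies per light.
import numpy as np

rng = np.random.default_rng(0)
albedo = rng.uniform(0.2, 0.9, size=(32, 32, 3))        # shared material
shadings = [rng.uniform(0.1, 1.0, size=(32, 32, 1)) for _ in range(4)]
images = [albedo * s for s in shadings]                  # 4 illuminations

# With gray-scale shading, the per-pixel chromaticity of every image
# equals the albedo chromaticity, so averaging across lights is stable.
chroma = [im / np.clip(im.sum(2, keepdims=True), 1e-6, None) for im in images]
est_chroma = np.mean(chroma, axis=0)
gt_chroma = albedo / albedo.sum(2, keepdims=True)
print("chromaticity error:", np.abs(est_chroma - gt_chroma).max())  # ~0
```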
- ReCap: Better Gaussian Relighting with Cross-Environment Captures [51.2614945509044]
We present ReCap, a multi-task system for accurate 3D object relighting in unseen environments. Specifically, ReCap jointly optimizes multiple lighting representations that share a common set of material attributes (see the sketch after this entry). This naturally harmonizes a coherent set of lighting representations around the mutual material attributes, exploiting commonalities and differences across varied object appearances. Together with a streamlined shading function and effective post-processing, ReCap outperforms all leading competitors on an expanded relighting benchmark.
arXiv Detail & Related papers (2024-12-10T14:15:32Z)
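The joint-optimization idea in the abstract, several per-environment lighting representations tied to one shared set of material attributes, can be sketched with a Lambertian toy problem. The shading model and tensor shapes below are assumptions, not ReCap's streamlined shading function.

```python
# Sketch: several lighting representations optimized jointly around one
# shared material, so material cannot "absorb" per-capture lighting.
import torch

n_points, n_envs = 1024, 3
albedo = torch.rand(n_points, 3, requires_grad=True)        # shared material
env_light = torch.rand(n_envs, 3, requires_grad=True)       # per-capture light
targets = [torch.rand(n_points, 3) for _ in range(n_envs)]  # observed colors

opt = torch.optim.Adam([albedo, env_light], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    # Every environment reuses the same albedo: commonalities settle into
    # the material, differences settle into the per-environment lights.
    loss = sum(((albedo * env_light[e]).clamp(0, 1) - targets[e]).pow(2).mean()
               for e in range(n_envs))
    loss.backward()
    opt.step()
```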
- RelitLRM: Generative Relightable Radiance for Large Reconstruction Models [52.672706620003765]
We propose RelitLRM for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations.
Unlike prior inverse rendering methods requiring dense captures and slow optimization, RelitLRM adopts a feed-forward transformer-based model.
We show that our sparse-view, feed-forward RelitLRM offers relighting results competitive with state-of-the-art dense-view, optimization-based baselines.
arXiv Detail & Related papers (2024-10-08T17:40:01Z)
- SUNDIAL: 3D Satellite Understanding through Direct, Ambient, and Complex Lighting Decomposition [17.660328148833134]
SUNDIAL is a comprehensive approach to 3D reconstruction of satellite imagery using neural radiance fields.
We learn satellite scene geometry, illumination components, and sun direction in this single-model approach (see the sketch after this entry).
We evaluate the performance of SUNDIAL against existing NeRF-based techniques for satellite scene modeling.
arXiv Detail & Related papers (2023-12-24T02:46:44Z)
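The direct/ambient split named in the title can be written as a toy shading equation, color = albedo × (shadow × direct + ambient), with the direct term driven by the sun direction. Everything below is an illustrative stand-in for SUNDIAL's NeRF-based formulation.

```python
# Toy shading in the spirit of a direct/ambient lighting decomposition.
import numpy as np

h = w = 64
albedo = np.random.rand(h, w, 3)
normals = np.dstack([np.zeros((h, w)), np.zeros((h, w)), np.ones((h, w))])
sun_dir = np.array([0.3, 0.2, 0.93])
sun_dir /= np.linalg.norm(sun_dir)

shadow = np.ones((h, w, 1))           # 1 = lit, 0 = in cast shadow
shadow[20:40, 20:40] = 0.0            # synthetic shadow patch
direct = np.clip(normals @ sun_dir, 0, None)[..., None]  # Lambertian n . l
ambient = 0.15                        # flat sky term for the sketch

color = albedo * (shadow * direct + ambient)
# Moving sun_dir changes only `direct`; geometry and albedo stay fixed,
# which is exactly the consistency UAVLight-style benchmarks test.
```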
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows (see the sketch after this entry).
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
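The primary/secondary split above can be sketched as: shade each surface point with a (here, dummy) radiance value, but answer sun-visibility queries by casting explicit rays against geometry. A single axis-aligned box stands in for the extracted mesh; the `ray_hits_box` slab test is a hypothetical helper, not the paper's renderer.

```python
# Secondary-ray sketch: explicit geometry answers "is this point in shadow?"
import numpy as np

def ray_hits_box(origin, direction, lo, hi):
    """Slab test: does the ray from `origin` toward the sun cross the box?"""
    with np.errstate(divide="ignore"):
        t1, t2 = (lo - origin) / direction, (hi - origin) / direction
    tmin = np.minimum(t1, t2).max()
    tmax = np.maximum(t1, t2).min()
    return tmax >= max(tmin, 0.0)

sun_dir = np.array([0.5, 0.1, 0.86]); sun_dir /= np.linalg.norm(sun_dir)
building_lo, building_hi = np.array([2, 2, 0]), np.array([4, 4, 10])

points = np.array([[3.0, 3.0, 0.0], [20.0, 20.0, 0.0]])  # ground points
for p in points:
    shadowed = ray_hits_box(p, sun_dir, building_lo, building_hi)
    base = 0.8  # stand-in for the neural field's primary-ray radiance
    print(p, "shaded color:", base * (0.2 if shadowed else 1.0))
```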
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.