Zero-Shot UAV Navigation in Forests via Relightable 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2602.07101v1
- Date: Fri, 06 Feb 2026 15:51:03 GMT
- Title: Zero-Shot UAV Navigation in Forests via Relightable 3D Gaussian Splatting
- Authors: Zinan Lv, Yeqian Qian, Chen Sang, Hao Liu, Danping Zou, Ming Yang,
- Abstract summary: UAV navigation in unstructured outdoor environments using passive monocular vision is hindered by the substantial visual domain gap between simulation and reality. We propose a novel end-to-end reinforcement learning framework designed for effective zero-shot transfer to unstructured outdoor environments. We show that a lightweight quadrotor achieves robust, collision-free navigation in complex forest environments at speeds up to 10 m/s.
- Score: 11.31291385822484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: UAV navigation in unstructured outdoor environments using passive monocular vision is hindered by the substantial visual domain gap between simulation and reality. While 3D Gaussian Splatting enables photorealistic scene reconstruction from real-world data, existing methods inherently couple static lighting with geometry, severely limiting policy generalization to dynamic real-world illumination. In this paper, we propose a novel end-to-end reinforcement learning framework designed for effective zero-shot transfer to unstructured outdoors. Within a high-fidelity simulation grounded in real-world data, our policy is trained to map raw monocular RGB observations directly to continuous control commands. To overcome photometric limitations, we introduce Relightable 3D Gaussian Splatting, which decomposes scene components to enable explicit, physically grounded editing of environmental lighting within the neural representation. By augmenting training with diverse synthesized lighting conditions ranging from strong directional sunlight to diffuse overcast skies, we compel the policy to learn robust, illumination-invariant visual features. Extensive real-world experiments demonstrate that a lightweight quadrotor achieves robust, collision-free navigation in complex forest environments at speeds up to 10 m/s, exhibiting significant resilience to drastic lighting variations without fine-tuning.
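The abstract's core training trick, augmenting observations with synthesized lighting ranging from harsh directional sun to diffuse overcast sky, can be illustrated with a crude image-space sketch. This is an invented stand-in, not the paper's physically grounded 3D Gaussian Splatting relighting; the function names, parameterization (a global intensity plus a diffuse-vs-directional mixing weight), and ranges are all assumptions made for illustration.

```python
import numpy as np

def sample_lighting(rng):
    """Sample a hypothetical lighting condition for augmentation.

    intensity:   overall brightness (low = overcast, high = harsh sun)
    diffuseness: 1.0 = fully diffuse sky, 0.0 = fully directional sun
    """
    intensity = rng.uniform(0.4, 1.6)
    diffuseness = rng.uniform(0.0, 1.0)
    return intensity, diffuseness

def relight(image, intensity, diffuseness, sun_dir=np.array([0.6, 0.8])):
    """Crude image-space proxy for relighting a rendered observation.

    Scales brightness globally and adds a directional shading gradient
    across the image that fades out as the lighting becomes diffuse.
    `image` is an (H, W, 3) float array in [0, 1].
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Directional term: pixels toward the (2D) sun direction get brighter.
    grad = (xs / max(w - 1, 1)) * sun_dir[0] + (ys / max(h - 1, 1)) * sun_dir[1]
    directional = 0.5 + grad / (abs(sun_dir[0]) + abs(sun_dir[1]))
    # Fully diffuse lighting (diffuseness = 1) shades all pixels equally.
    shading = diffuseness + (1.0 - diffuseness) * directional
    return np.clip(image * intensity * shading[..., None], 0.0, 1.0)
```

In a training loop, each monocular RGB observation would be passed through `relight` with freshly sampled parameters before being fed to the policy, forcing it to rely on illumination-invariant features rather than absolute brightness.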
Related papers
- ROGR: Relightable 3D Objects using Generative Relighting [71.35020300131261]
We introduce ROGR, a novel approach that reconstructs a relightable 3D model of an object captured from multiple views. We train a lighting-conditioned Neural Radiance Field (NeRF) that outputs the object's appearance under any input environmental lighting. We evaluate our approach on the established TensoIR and Stanford-ORB datasets, and showcase our approach on real-world object captures.
arXiv Detail & Related papers (2025-10-03T16:35:22Z)
- LuxDiT: Lighting Estimation with Video Diffusion Transformer [66.60450792095901]
Estimating scene lighting from a single image or video remains a longstanding challenge in computer vision and graphics. We propose LuxDiT, a novel data-driven approach that fine-tunes a video diffusion transformer to generate HDR environment maps conditioned on visual input.
arXiv Detail & Related papers (2025-09-03T19:59:20Z)
- GaRe: Relightable 3D Gaussian Splatting for Outdoor Scenes from Unconstrained Photo Collections [19.8966661817631]
We propose a 3D Gaussian splatting-based framework for outdoor relighting. Our approach enables both diverse shading manipulation and the generation of dynamic shadow effects.
arXiv Detail & Related papers (2025-07-28T04:29:57Z)
- PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z)
- A Real-time Method for Inserting Virtual Objects into Neural Radiance Fields [38.370278809341954]
We present the first real-time method for inserting a rigid virtual object into a neural radiance field.
By exploiting the rich information about lighting and geometry in a NeRF, our method overcomes several challenges of object insertion in augmented reality.
arXiv Detail & Related papers (2023-10-09T16:26:34Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- IllumiNet: Transferring Illumination from Planar Surfaces to Virtual Objects in Augmented Reality [38.83696624634213]
This paper presents a learning-based illumination estimation method for virtual objects in real environments.
Given a single RGB image, our method directly infers the relit virtual object by transferring the illumination features extracted from planar surfaces in the scene to the desired geometries.
arXiv Detail & Related papers (2020-07-12T13:11:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.