ROGR: Relightable 3D Objects using Generative Relighting
- URL: http://arxiv.org/abs/2510.03163v1
- Date: Fri, 03 Oct 2025 16:35:22 GMT
- Title: ROGR: Relightable 3D Objects using Generative Relighting
- Authors: Jiapeng Tang, Matthew Lavine, Dor Verbin, Stephan J. Garbin, Matthias Nießner, Ricardo Martin Brualla, Pratul P. Srinivasan, Philipp Henzler
- Abstract summary: We introduce ROGR, a novel approach that reconstructs a relightable 3D model of an object captured from multiple views. We train a lighting-conditioned Neural Radiance Field (NeRF) that outputs the object's appearance under any input environmental lighting. We evaluate our approach on the established TensoIR and Stanford-ORB datasets, and showcase our approach on real-world object captures.
- Score: 71.35020300131261
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce ROGR, a novel approach that reconstructs a relightable 3D model of an object captured from multiple views, driven by a generative relighting model that simulates the effects of placing the object under novel environment illuminations. Our method samples the appearance of the object under multiple lighting environments, creating a dataset that is used to train a lighting-conditioned Neural Radiance Field (NeRF) that outputs the object's appearance under any input environmental lighting. The lighting-conditioned NeRF uses a novel dual-branch architecture to encode the general lighting effects and specularities separately. The optimized lighting-conditioned NeRF enables efficient feed-forward relighting under arbitrary environment maps without requiring per-illumination optimization or light transport simulation. We evaluate our approach on the established TensoIR and Stanford-ORB datasets, where it improves upon the state-of-the-art on most metrics, and showcase our approach on real-world object captures.
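The abstract's core idea, a radiance field conditioned on an environment map, with separate branches for general lighting effects and specularities, can be illustrated with a toy sketch. The lighting encoder, network shapes, and additive branch combination below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_environment(env_map):
    # Toy lighting encoder: pool an HxWx3 HDR environment map into a
    # fixed-size lighting code. (The real system would use a learned
    # encoder; average/max pooling here only illustrates conditioning.)
    avg = env_map.mean(axis=(0, 1))   # overall tint / intensity
    mx = env_map.max(axis=(0, 1))     # dominant bright sources
    return np.concatenate([avg, mx])  # (6,) lighting code

class DualBranchRadianceField:
    """Hypothetical stand-in for a dual-branch lighting-conditioned
    field: a 'general' branch (position + lighting) and a 'specular'
    branch (additionally view-dependent), combined additively."""
    def __init__(self, dim=32):
        self.w_gen = rng.normal(0.0, 0.1, (3 + 6, dim))
        self.w_gen_out = rng.normal(0.0, 0.1, (dim, 3))
        self.w_spec = rng.normal(0.0, 0.1, (3 + 3 + 6, dim))
        self.w_spec_out = rng.normal(0.0, 0.1, (dim, 3))

    def query(self, x, view_dir, light_code):
        # General branch: lighting-conditioned, view-independent color.
        h = np.tanh(np.concatenate([x, light_code]) @ self.w_gen)
        general = h @ self.w_gen_out
        # Specular branch: also conditioned on the viewing direction.
        h2 = np.tanh(np.concatenate([x, view_dir, light_code]) @ self.w_spec)
        specular = h2 @ self.w_spec_out
        return general + specular

env = rng.uniform(0.0, 4.0, (16, 32, 3))  # toy HDR environment map
code = encode_environment(env)
field = DualBranchRadianceField()
rgb = field.query(np.zeros(3), np.array([0.0, 0.0, 1.0]), code)
print(rgb.shape)  # prints (3,)
```

Because the environment map enters only through a fixed-size code, swapping in a new illumination is a single feed-forward query, which is the property the abstract highlights over per-illumination optimization.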
Related papers
- LuxDiT: Lighting Estimation with Video Diffusion Transformer [66.60450792095901]
Estimating scene lighting from a single image or video remains a longstanding challenge in computer vision and graphics. We propose LuxDiT, a novel data-driven approach that fine-tunes a video diffusion transformer to generate HDR environment maps conditioned on visual input.
arXiv Detail & Related papers (2025-09-03T19:59:20Z) - EnvGS: Modeling View-Dependent Appearance with Environment Gaussian [78.74634059559891]
EnvGS is a novel approach that employs a set of Gaussian primitives as an explicit 3D representation for capturing reflections of environments. To efficiently render these environment Gaussian primitives, we developed a ray-tracing-based renderer that leverages the GPU's RT core for fast rendering. Results from multiple real-world and synthetic datasets demonstrate that our method produces significantly more detailed reflections.
arXiv Detail & Related papers (2024-12-19T18:59:57Z) - DifFRelight: Diffusion-Based Facial Performance Relighting [12.909429637057343]
We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation.
We train a diffusion model for precise lighting control, enabling high-fidelity relit facial images from flat-lit inputs.
The model accurately reproduces complex lighting effects like eye reflections, subsurface scattering, self-shadowing, and translucency.
arXiv Detail & Related papers (2024-10-10T17:56:44Z) - PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z) - Relighting Scenes with Object Insertions in Neural Radiance Fields [24.18050535794117]
We propose a novel NeRF-based pipeline for inserting object NeRFs into scene NeRFs.
The proposed method achieves realistic relighting effects in extensive experimental evaluations.
arXiv Detail & Related papers (2024-06-21T00:58:58Z) - SpecNeRF: Gaussian Directional Encoding for Specular Reflections [43.110815974867315]
We propose a learnable Gaussian directional encoding to better model the view-dependent effects under near-field lighting conditions.
Our new directional encoding captures the spatially-varying nature of near-field lighting and emulates the behavior of prefiltered environment maps.
It enables the efficient evaluation of preconvolved specular color at any 3D location with varying roughness coefficients.
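The "preconvolved specular color" idea that SpecNeRF emulates can be sketched with a classic prefiltered environment map: blur the map more aggressively for rougher surfaces, then look up the reflected direction in the level matching the query roughness. The box blur and nearest-level lookup below are deliberate simplifications (real systems convolve with a BRDF lobe over the sphere and interpolate between levels):

```python
import numpy as np

def prefilter_env(env_map, roughness_levels):
    # Toy prefiltering: build one blurred copy of the environment map
    # per roughness level, approximating a preconvolved mip chain.
    levels = []
    h, w, _ = env_map.shape
    for r in roughness_levels:
        k = max(1, int(r * min(h, w) / 2))  # blur radius from roughness
        padded = np.pad(env_map, ((k, k), (k, k), (0, 0)), mode="wrap")
        blurred = np.zeros_like(env_map)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                blurred += padded[k + dy : k + dy + h, k + dx : k + dx + w]
        levels.append(blurred / (2 * k + 1) ** 2)
    return levels

def lookup_specular(levels, roughness_levels, direction_uv, roughness):
    # Pick the nearest prefiltered level and sample it at the reflected
    # direction, given here as normalized (u, v) texture coordinates.
    i = int(np.argmin([abs(r - roughness) for r in roughness_levels]))
    h, w, _ = levels[i].shape
    y = min(int(direction_uv[1] * h), h - 1)
    x = min(int(direction_uv[0] * w), w - 1)
    return levels[i][y, x]

rng = np.random.default_rng(1)
env = rng.uniform(0.0, 4.0, (16, 32, 3))  # toy HDR environment map
rough = [0.1, 0.5, 0.9]
mips = prefilter_env(env, rough)
color = lookup_specular(mips, rough, (0.25, 0.5), roughness=0.6)
```

Once the chain is built, each specular query is a cheap table lookup for any roughness, which is the efficiency property the blurb describes; SpecNeRF's contribution is achieving this behavior for near-field lighting with a learnable directional encoding rather than a fixed map.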
arXiv Detail & Related papers (2023-12-20T15:20:25Z) - Neural Relighting with Subsurface Scattering by Learning the Radiance
Transfer Gradient [73.52585139592398]
We propose a novel framework for learning the radiance transfer field via volume rendering.
We will release our code and a novel light stage dataset of objects with subsurface scattering effects publicly available.
arXiv Detail & Related papers (2023-06-15T17:56:04Z) - NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient
Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z) - Neural Light Field Estimation for Street Scenes with Differentiable
Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z) - Sparse Needlets for Lighting Estimation with Spherical Transport Loss [89.52531416604774]
NeedleLight is a new lighting estimation model that represents illumination with needlets and allows lighting estimation in both frequency domain and spatial domain jointly.
Extensive experiments show that NeedleLight achieves superior lighting estimation consistently across multiple evaluation metrics as compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-06-24T15:19:42Z) - NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.