NeRF as a Non-Distant Environment Emitter in Physics-based Inverse Rendering
- URL: http://arxiv.org/abs/2402.04829v2
- Date: Wed, 1 May 2024 16:50:48 GMT
- Title: NeRF as a Non-Distant Environment Emitter in Physics-based Inverse Rendering
- Authors: Jingwang Ling, Ruihan Yu, Feng Xu, Chun Du, Shuang Zhao
- Abstract summary: We introduce NeRF as a non-distant environment emitter into the inverse rendering pipeline.
Our results demonstrate that our NeRF-based emitter offers a more precise representation of scene lighting, thereby improving the accuracy of inverse rendering.
- Score: 15.876404576998372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physics-based inverse rendering enables joint optimization of shape, material, and lighting based on captured 2D images. To ensure accurate reconstruction, using a light model that closely resembles the captured environment is essential. Although the widely adopted distant environmental lighting model is adequate in many cases, we demonstrate that its inability to capture spatially varying illumination can lead to inaccurate reconstructions in many real-world inverse rendering scenarios. To address this limitation, we incorporate NeRF as a non-distant environment emitter into the inverse rendering pipeline. Additionally, we introduce an emitter importance sampling technique for NeRF to reduce the rendering variance. Through comparisons on both real and synthetic datasets, our results demonstrate that our NeRF-based emitter offers a more precise representation of scene lighting, thereby improving the accuracy of inverse rendering.
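The abstract's emitter importance sampling idea can be illustrated with a toy 1D Monte Carlo sketch (our own hypothetical example, not the paper's NeRF sampler): when radiance is concentrated around a bright emitter, drawing samples from a pdf that matches the emitter sharply reduces estimator variance compared to uniform sampling.

```python
import math
import random

def radiance(x):
    # Sharply peaked "emitter" at x = 0.5 (toy stand-in for scene lighting).
    return math.exp(-((x - 0.5) ** 2) / (2 * 0.05 ** 2))

def uniform_estimate(n, rng):
    # Uniform pdf on [0, 1]: the estimator is simply the mean of f(x).
    return sum(radiance(rng.random()) for _ in range(n)) / n

def importance_estimate(n, rng):
    # Proposal: Gaussian matched to the emitter; samples outside [0, 1] are
    # rejected (negligible mass, since the proposal is 10 sigma inside).
    total, count = 0.0, 0
    while count < n:
        x = rng.gauss(0.5, 0.05)
        if 0.0 <= x <= 1.0:
            pdf = radiance(x) / (0.05 * math.sqrt(2 * math.pi))
            total += radiance(x) / pdf  # f(x) / p(x), the IS estimator
            count += 1
    return total / n
```

Because the proposal is proportional to the integrand here, each importance-sampled term is exactly the true integral, so the variance collapses to zero; a real emitter pdf only approximates the radiance, but the same variance-reduction mechanism applies.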
Related papers
- Reflection Removal through Efficient Adaptation of Diffusion Transformers [30.68558779968187]
We introduce a diffusion-transformer (DiT) framework for single-image reflection removal.
We analyze existing reflection removal data sources for diversity, scalability, and photorealism.
We construct a physically based rendering pipeline in Blender to synthesize realistic glass materials and reflection effects.
arXiv Detail & Related papers (2025-12-04T17:12:39Z) - ROGR: Relightable 3D Objects using Generative Relighting [71.35020300131261]
We introduce ROGR, a novel approach that reconstructs a relightable 3D model of an object captured from multiple views.
We train a lighting-conditioned Neural Radiance Field (NeRF) that outputs the object's appearance under any input environmental lighting.
We evaluate our approach on the established TensoIR and Stanford-ORB datasets, and showcase our approach on real-world object captures.
arXiv Detail & Related papers (2025-10-03T16:35:22Z) - NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
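As a minimal aside (our illustration, not the paper's code), secondary rays cast from points along a camera ray are typically directed along the mirror reflection of the view direction about the surface normal, r = d - 2(d·n)n:

```python
# Hypothetical sketch of the standard mirror-reflection formula used when
# casting reflection rays: r = d - 2 (d . n) n, with n a unit normal.
def reflect(d, n):
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))
```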
arXiv Detail & Related papers (2024-05-23T17:59:57Z) - CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
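The voxel-based ray sampling step above can be sketched as follows (a minimal sketch under our own assumptions, not the paper's implementation): a slab-method ray/AABB test keeps only rays that intersect a chosen voxel, and additional points are then drawn on the ray segment inside it.

```python
import random

def ray_voxel_interval(origin, direction, vmin, vmax):
    """Slab test: return (t_near, t_far) where the ray overlaps the voxel,
    or None if it misses. Assumes the ray starts at t = 0 and travels forward."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, vmin, vmax):
        if abs(d) < 1e-12:
            # Ray parallel to this slab: must already lie between the planes.
            if o < lo or o > hi:
                return None
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return None
    return t_near, t_far

def sample_in_voxel(origin, direction, vmin, vmax, n, rng):
    """Uniformly sample n points on the ray segment inside the voxel."""
    hit = ray_voxel_interval(origin, direction, vmin, vmax)
    if hit is None:
        return []
    t0, t1 = hit
    pts = []
    for _ in range(n):
        t = rng.uniform(t0, t1)
        pts.append(tuple(o + t * d for o, d in zip(origin, direction)))
    return pts
```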
arXiv Detail & Related papers (2024-03-25T15:56:17Z) - GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields [12.92658687936068]
We take advantage of generative adversarial networks (GANs) to produce realistic images and use them to enhance realism in 3D scene reconstruction with NeRFs.
We learn the patch distribution of a scene using an adversarial discriminator, which provides feedback to the radiance field reconstruction.
Rendering artifacts are repaired directly in the underlying 3D representation by imposing multi-view path rendering constraints.
arXiv Detail & Related papers (2023-06-09T17:12:35Z) - Inverse Rendering of Translucent Objects using Physical and Neural Renderers [13.706425832518093]
In this work, we propose an inverse model that estimates 3D shape, spatially-varying reflectance, homogeneous scattering parameters, and an environment illumination jointly from only a pair of captured images of a translucent object.
Because both renderers are differentiable, we can compute a reconstruction loss to assist parameter estimation.
We constructed a large-scale synthetic dataset of translucent objects, which consists of 117K scenes.
arXiv Detail & Related papers (2023-05-15T04:03:11Z) - NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z) - NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z) - Re-ReND: Real-time Rendering of NeRFs across Devices [56.081995086924216]
Re-ReND is designed to achieve real-time performance by converting the NeRF into a representation that can be efficiently processed by standard graphics pipelines.
We find that Re-ReND can achieve over a 2.6-fold increase in rendering speed versus the state-of-the-art without perceptible losses in quality.
arXiv Detail & Related papers (2023-03-15T15:59:41Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIB-R++, a hybrid differentiable renderer that supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z) - iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.