NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient
Illumination
- URL: http://arxiv.org/abs/2304.08757v1
- Date: Tue, 18 Apr 2023 06:32:30 GMT
- Title: NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient
Illumination
- Authors: Yiyu Zhuang, Qi Zhang, Xuan Wang, Hao Zhu, Ying Feng, Xiaoyu Li, Ying
Shan, Xun Cao
- Abstract summary: We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
- Score: 28.433403714053103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in implicit neural representation have demonstrated the
ability to recover detailed geometry and material from multi-view images.
However, the use of simplified lighting models such as environment maps to
represent non-distant illumination, or the use of a network to fit indirect
lighting without a solid basis, can lead to an undesirable decomposition
between lighting and material. To address this, we propose a fully
differentiable framework named neural ambient illumination (NeAI) that uses
Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in
a physically based way. Together with an integral lobe encoding for
roughness-adaptive specular lobes and a pre-convoluted background for accurate
decomposition, the proposed method represents a significant step
towards integrating physically based rendering into the NeRF representation.
The experiments demonstrate the superior performance of novel-view rendering
compared to previous works, and the capability to re-render objects under
arbitrary NeRF-style environments opens up exciting possibilities for bridging
the gap between virtual and real-world scenes. The project and supplementary
materials are available at https://yiyuzhuang.github.io/NeAI/.
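To make the abstract's mechanism concrete, the sketch below illustrates, under stated assumptions, what a roughness-adaptive, pre-convolved lighting query computes: the incident radiance from a NeRF-style light field averaged over a specular lobe that widens with roughness. It is a toy Monte-Carlo illustration rather than the paper's implementation; `lighting_nerf` is a hypothetical callable standing in for NeAI's NeRF lighting model, and the Gaussian jitter stands in for the integral lobe encoding.

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror the unit view direction about the surface normal."""
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def sample_lobe(reflect_dir, roughness, n=64, rng=None):
    """Toy lobe sampler: jitter the reflection direction with noise whose
    magnitude grows with roughness, then renormalize. This stands in for the
    paper's roughness-adaptive integral lobe encoding."""
    rng = rng if rng is not None else np.random.default_rng(0)
    dirs = reflect_dir + roughness * rng.normal(size=(n, 3))
    return dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)

def preconvolved_specular(lighting_nerf, x, view_dir, normal, roughness):
    """Approximate the specular term by averaging lighting queries over the lobe.
    `lighting_nerf(origin, direction) -> RGB` is a hypothetical callable playing
    the role of NeAI's NeRF-based ambient light."""
    r = reflect(view_dir, normal)
    dirs = sample_lobe(r, roughness)
    radiance = np.stack([lighting_nerf(x, d) for d in dirs])  # (n, 3)
    return radiance.mean(axis=0)  # lobe-averaged incident radiance at x
```

NeAI's pre-convoluted representation caches this lobe-integrated quantity, which is presumably what lets a single lookup replace the sampling loop shown here.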
Related papers
- PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z) - NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z) - Neural Relighting with Subsurface Scattering by Learning the Radiance
Transfer Gradient [73.52585139592398]
We propose a novel framework for learning the radiance transfer field via volume rendering.
We will make our code and a novel light stage dataset of objects with subsurface scattering effects publicly available.
arXiv Detail & Related papers (2023-06-15T17:56:04Z) - NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering [23.482941494283978]
This paper presents a method, namely NeuS-PIR, for recovering relightable neural surfaces from multi-view images or video.
Unlike methods based on NeRF and discrete meshes, our method utilizes an implicit neural surface representation to reconstruct high-quality geometry.
Our method enables advanced applications such as relighting, which can be seamlessly integrated with modern graphics engines.
arXiv Detail & Related papers (2023-06-13T09:02:57Z) - TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
It extends TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z) - Neural Fields meet Explicit Geometric Representation for Inverse
Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - Physics-based Indirect Illumination for Inverse Rendering [70.27534648770057]
We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images.
As a side product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting.
arXiv Detail & Related papers (2022-12-09T07:33:49Z) - Modeling Indirect Illumination for Inverse Rendering [31.734819333921642]
In this paper, we propose a novel approach to efficiently recovering spatially-varying indirect illumination.
The key insight is that indirect illumination can be conveniently derived from the neural radiance field learned from input images.
Experiments on both synthetic and real data demonstrate the superior performance of our approach compared to previous work.
arXiv Detail & Related papers (2022-04-14T09:10:55Z) - NeILF: Neural Incident Light Field for Physically-based Material
Estimation [31.230609753253713]
We present a differentiable rendering framework for material and lighting estimation from multi-view images and a reconstructed geometry.
In the framework, we represent scene lightings as the Neural Incident Light Field (NeILF) and material properties as the surface BRDF modelled by multi-layer perceptrons.
arXiv Detail & Related papers (2022-03-14T15:23:04Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid
Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports photorealistic lighting effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.