Estimating Neural Reflectance Field from Radiance Field using Tree
Structures
- URL: http://arxiv.org/abs/2210.04217v1
- Date: Sun, 9 Oct 2022 10:21:31 GMT
- Title: Estimating Neural Reflectance Field from Radiance Field using Tree
Structures
- Authors: Xiu Li, Xiao Li, Yan Lu
- Abstract summary: We present a new method for estimating the Neural Reflectance Field (NReF) of an object from a set of posed multi-view images under unknown lighting.
NReF represents the 3D geometry and appearance of an object in a disentangled manner, and is hard to estimate from images alone.
Our method solves this problem by exploiting the Neural Radiance Field (NeRF) as a proxy representation, from which we perform further decomposition.
- Score: 29.431165709718794
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a new method for estimating the Neural Reflectance Field (NReF) of
an object from a set of posed multi-view images under unknown lighting. NReF
represents the 3D geometry and appearance of an object in a disentangled manner
and is hard to estimate from images alone. Our method solves this problem by
exploiting the Neural Radiance Field (NeRF) as a proxy representation, from
which we perform further decomposition. A high-quality NeRF decomposition
relies on good geometry information extraction as well as good prior terms to
properly resolve ambiguities between different components. To extract
high-quality geometry information from radiance fields, we design a new
ray-casting-based method for surface point extraction. To efficiently compute
and apply prior terms, we convert different prior terms into different types of
filter operations on the surface extracted from the radiance field. We then
employ two types of auxiliary data structures, namely a Gaussian KD-tree and an
octree, to support fast querying of surface points and efficient computation of
surface filters during training. Based on this, we design a multi-stage
decomposition optimization pipeline for estimating the neural reflectance field
from the neural radiance field. Extensive experiments show that our method
outperforms other state-of-the-art methods on different datasets and enables
high-quality free-view relighting as well as material editing.
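For intuition, here is a minimal sketch of ray-casting surface point extraction from a density field: march along a ray, accumulate opacity, and keep the first sample where the accumulated opacity crosses a threshold. The density function sigma_fn, the threshold tau, and the uniform sampling scheme are assumptions made for this sketch, not the paper's exact procedure.

```python
import numpy as np

def extract_surface_point(ray_o, ray_d, sigma_fn, near=0.0, far=4.0,
                          n_samples=256, tau=0.5):
    """Return the first sample point along the ray whose accumulated opacity
    exceeds `tau`, or None if the ray never becomes opaque enough."""
    t = np.linspace(near, far, n_samples)                # sample depths along the ray
    pts = ray_o[None, :] + t[:, None] * ray_d[None, :]   # (n_samples, 3) sample points
    sigma = sigma_fn(pts)                                # volume density at each sample
    delta = np.diff(t)
    delta = np.append(delta, delta[-1])                  # reuse the last spacing
    alpha = 1.0 - np.exp(-sigma * delta)                 # per-sample opacity
    acc = 1.0 - np.cumprod(1.0 - alpha)                  # accumulated opacity so far
    hits = np.nonzero(acc >= tau)[0]
    return pts[hits[0]] if hits.size > 0 else None

# Toy density: a hard-edged unit sphere at the origin, purely for demonstration.
sphere_sigma = lambda p: 50.0 * (np.linalg.norm(p, axis=-1) < 1.0)
surface = extract_surface_point(np.array([0.0, 0.0, -3.0]),
                                np.array([0.0, 0.0, 1.0]), sphere_sigma)
# `surface` lands near (0, 0, -1), the front face of the sphere.
```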
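The prior terms expressed as surface filters can be pictured as neighborhood operations over the extracted surface points. Below is a minimal sketch of a Gaussian smoothing filter accelerated by a KD-tree; scipy.spatial.cKDTree stands in for the paper's Gaussian KD-tree and octree, and the kernel width sigma and the filtered attribute are assumptions made for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

def gaussian_filter_attribute(points, attr, sigma=0.02, radius_mult=3.0):
    """Smooth a per-point attribute (e.g. normals or albedo) by Gaussian-weighted
    averaging over spatial neighbors found with a KD-tree."""
    tree = cKDTree(points)                       # spatial index over surface points
    radius = radius_mult * sigma                 # truncate the kernel at a few sigma
    smoothed = np.empty_like(attr)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)   # neighbors inside the kernel support
        d2 = np.sum((points[idx] - p) ** 2, axis=-1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))     # Gaussian weights by distance
        smoothed[i] = (w[:, None] * attr[idx]).sum(axis=0) / w.sum()
    return smoothed

# Example: smooth noisy per-point normals, then re-normalize them.
pts = np.random.rand(1000, 3)
normals = np.tile([0.0, 0.0, 1.0], (1000, 1)) + 0.1 * np.random.randn(1000, 3)
smooth_n = gaussian_filter_attribute(pts, normals, sigma=0.05)
smooth_n /= np.linalg.norm(smooth_n, axis=-1, keepdims=True)
```

Applied to attributes such as normals or albedo, a filter of this kind can serve as a smoothness prior evaluated directly on the extracted surface.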
Related papers
- NePF: Neural Photon Field for Single-Stage Inverse Rendering [6.977356702921476]
We present a novel single-stage framework, Neural Photon Field (NePF), to address the ill-posed problem of inverse rendering from multi-view images.
NePF achieves this single-stage recovery by fully utilizing the physical implication behind the weight function of neural implicit surfaces.
We evaluate our method on both real and synthetic datasets.
arXiv Detail & Related papers (2023-11-20T06:15:46Z)
- NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering [23.482941494283978]
This paper presents a method, namely NeuS-PIR, for recovering relightable neural surfaces from multi-view images or video.
Unlike methods based on NeRF and discrete meshes, our method utilizes implicit neural surface representation to reconstruct high-quality geometry.
Our method enables advanced applications such as relighting, which can be seamlessly integrated with modern graphics engines.
arXiv Detail & Related papers (2023-06-13T09:02:57Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Field (NeRF) methods struggle in the presence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
It extends TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- NeILF++: Inter-Reflectable Light Fields for Geometry and Material Estimation [36.09503501647977]
We formulate the lighting of a static scene as one neural incident light field (NeILF) and one outgoing neural radiance field (NeRF).
The proposed method is able to achieve state-of-the-art results in terms of geometry reconstruction quality, material estimation accuracy, and the fidelity of novel view rendering.
arXiv Detail & Related papers (2023-03-30T04:59:48Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- Differentiable Rendering with Reparameterized Volume Sampling [2.717399369766309]
In view synthesis, a neural radiance field approximates the underlying density and radiance fields from a sparse set of scene images.
This rendering algorithm is fully differentiable and facilitates gradient-based optimization of the fields.
We propose a simple end-to-end differentiable sampling algorithm based on inverse transform sampling.
arXiv Detail & Related papers (2023-02-21T19:56:50Z)
- PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for Single-Image Novel View Synthesis [52.546998369121354]
We present PVSeRF, a learning framework that reconstructs neural radiance fields from single-view RGB images.
We propose to incorporate explicit geometry reasoning and combine it with pixel-aligned features for radiance field prediction.
We show that the introduction of such geometry-aware features helps to achieve a better disentanglement between appearance and geometry.
arXiv Detail & Related papers (2022-02-10T07:39:47Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.