NeRRF: 3D Reconstruction and View Synthesis for Transparent and Specular Objects with Neural Refractive-Reflective Fields
- URL: http://arxiv.org/abs/2309.13039v1
- Date: Fri, 22 Sep 2023 17:59:12 GMT
- Title: NeRRF: 3D Reconstruction and View Synthesis for Transparent and Specular Objects with Neural Refractive-Reflective Fields
- Authors: Xiaoxue Chen, Junchen Liu, Hao Zhao, Guyue Zhou, Ya-Qin Zhang
- Abstract summary: We introduce the refractive-reflective field to neural radiance fields (NeRF).
NeRF uses straight rays and fails to deal with complicated light path changes caused by refraction and reflection.
We propose a virtual cone supersampling technique to achieve efficient and effective anti-aliasing.
- Score: 23.099784003061618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural radiance fields (NeRF) have revolutionized the field of image-based
view synthesis. However, NeRF uses straight rays and fails to deal with
complicated light path changes caused by refraction and reflection. This
prevents NeRF from successfully synthesizing transparent or specular objects,
which are ubiquitous in real-world robotics and AR/VR applications. In this
paper, we introduce the refractive-reflective field. Taking the object
silhouette as input, we first utilize marching tetrahedra with a progressive
encoding to reconstruct the geometry of non-Lambertian objects and then model
refraction and reflection effects of the object in a unified framework using
Fresnel terms. Meanwhile, to achieve efficient and effective anti-aliasing, we
propose a virtual cone supersampling technique. We benchmark our method on
different shapes, backgrounds and Fresnel terms on both real-world and
synthetic datasets. We also qualitatively and quantitatively benchmark the
rendering results of various editing applications, including material editing,
object replacement/insertion, and environment illumination estimation. Codes
and data are publicly available at https://github.com/dawning77/NeRRF.
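To make the abstract's two key ingredients concrete: the unified Fresnel treatment rests on textbook optics, where Snell's law bends a ray at the object surface and the Fresnel equations set the split between the reflected and refracted branches. The sketch below is a minimal illustration of that physics, not NeRRF's implementation; the function name, the air/glass refractive indices, and the vector conventions are assumptions.

```python
import numpy as np

def refract_reflect(d, n, eta_i=1.0, eta_t=1.5):
    """Split a ray at a dielectric interface (illustrative sketch, not NeRRF's code).

    d     -- unit incident direction, pointing toward the surface
    n     -- unit surface normal, pointing against d (so d . n < 0)
    eta_i -- refractive index on the incident side (1.0 = air, assumed)
    eta_t -- refractive index on the transmitted side (1.5 = glass, assumed)
    Returns (reflected_dir, refracted_dir_or_None, Fresnel_reflectance).
    """
    eta = eta_i / eta_t
    cos_i = -np.dot(d, n)                    # cosine of the incidence angle
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)   # Snell's law: squared sine of the transmission angle

    r = d + 2.0 * cos_i * n                  # mirror reflection about the normal

    if sin2_t > 1.0:                         # total internal reflection: all energy reflects
        return r, None, 1.0

    cos_t = np.sqrt(1.0 - sin2_t)
    t = eta * d + (eta * cos_i - cos_t) * n  # refracted direction from Snell's law

    # Fresnel equations for unpolarized light: average of s- and p-polarized reflectances
    r_s = (eta_i * cos_i - eta_t * cos_t) / (eta_i * cos_i + eta_t * cos_t)
    r_p = (eta_t * cos_i - eta_i * cos_t) / (eta_t * cos_i + eta_i * cos_t)
    F = 0.5 * (r_s ** 2 + r_p ** 2)
    return r, t, F
```

Radiance along the two branches can then be blended as F * reflected + (1 - F) * refracted. Likewise, "virtual cone supersampling" suggests the familiar anti-aliasing recipe of tracing several jittered rays inside the cone a pixel subtends and averaging their radiance; a generic uniform-cone sampler, again an assumption rather than the paper's exact algorithm, might look like this:

```python
def sample_cone(axis, half_angle, n_samples, rng=None):
    """Draw n_samples unit directions uniformly (in solid angle) within a cone."""
    rng = np.random.default_rng() if rng is None else rng
    axis = axis / np.linalg.norm(axis)
    # Orthonormal basis (u, v, axis) spanning the cone's local frame
    helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    cos_t = 1.0 - rng.random(n_samples) * (1.0 - np.cos(half_angle))  # uniform in cos(theta)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    phi = 2.0 * np.pi * rng.random(n_samples)
    return (cos_t[:, None] * axis
            + sin_t[:, None] * (np.cos(phi)[:, None] * u + np.sin(phi)[:, None] * v))
```

Averaging the rendered radiance over these directions approximates the pixel-footprint integral that a single infinitely thin ray would alias.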
Related papers
- Relighting Scenes with Object Insertions in Neural Radiance Fields [24.18050535794117]
We propose a novel NeRF-based pipeline for inserting object NeRFs into scene NeRFs.
The proposed method achieves realistic relighting effects in extensive experimental evaluations.
arXiv Detail & Related papers (2024-06-21T00:58:58Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- Taming Latent Diffusion Model for Neural Radiance Field Inpainting [63.297262813285265]
Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images.
We propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked training.
Our framework yields state-of-the-art NeRF inpainting results on various real-world scenes.
arXiv Detail & Related papers (2024-04-15T17:59:57Z)
- Inverse Rendering of Glossy Objects via the Neural Plenoptic Function and Radiance Fields [45.64333510966844]
Inverse rendering aims to recover both the geometry and materials of objects.
We propose a novel 5D Neural Plenoptic Function (NeP) based on NeRFs and ray tracing.
Our method can reconstruct high-fidelity geometry/materials of challenging glossy objects with complex lighting interactions from nearby objects.
arXiv Detail & Related papers (2024-03-24T16:34:47Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods degrade in the presence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects [28.62468618676557]
We propose NEMTO, the first end-to-end neural rendering pipeline to model 3D transparent objects.
With 2D images of the transparent object as input, our method is capable of high-quality novel view and relighting synthesis.
arXiv Detail & Related papers (2023-03-21T15:50:08Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)