Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields
- URL: http://arxiv.org/abs/2112.03907v1
- Date: Tue, 7 Dec 2021 18:58:37 GMT
- Title: Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields
- Authors: Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, Pratul P. Srinivasan
- Abstract summary: We introduce Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance, structured using a collection of spatially-varying scene properties.
We show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing.
- Score: 40.72851892972173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) is a popular view synthesis technique that
represents a scene as a continuous volumetric function, parameterized by
multilayer perceptrons that provide the volume density and view-dependent
emitted radiance at each location. While NeRF-based techniques excel at
representing fine geometric structures with smoothly varying view-dependent
appearance, they often fail to accurately capture and reproduce the appearance
of glossy surfaces. We address this limitation by introducing Ref-NeRF, which
replaces NeRF's parameterization of view-dependent outgoing radiance with a
representation of reflected radiance and structures this function using a
collection of spatially-varying scene properties. We show that together with a
regularizer on normal vectors, our model significantly improves the realism and
accuracy of specular reflections. Furthermore, we show that our model's
internal representation of outgoing radiance is interpretable and useful for
scene editing.
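The core reparameterization described in the abstract, conditioning appearance on the reflection of the view direction about the surface normal rather than on the view direction itself, can be sketched as follows. The function name and shapes are illustrative assumptions, not the authors' code:

```python
import numpy as np

def reflect(view_dir, normal):
    """Reflect the unit view direction about the unit surface normal.

    Ref-NeRF conditions its directional MLP on this reflected direction
    instead of the raw view direction, so that specular appearance
    generalizes across viewpoints. view_dir points from the surface
    toward the camera; both inputs are assumed normalized.
    """
    # omega_r = 2 (d . n) n - d
    return 2.0 * np.sum(view_dir * normal, axis=-1, keepdims=True) * normal - view_dir

# A head-on view of an upward-facing surface reflects back onto itself:
d = np.array([0.0, 0.0, 1.0])
n = np.array([0.0, 0.0, 1.0])
print(reflect(d, n))  # [0. 0. 1.]
```

Because the reflected direction varies smoothly for glossy surfaces, the directional MLP sees a far simpler function than raw outgoing radiance, which is the intuition behind the improved specular reconstructions.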
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z) - Simple-RF: Regularizing Sparse Input Radiance Fields with Simpler Solutions [5.699788926464751]
Neural Radiance Fields (NeRF) show impressive performance in photo-realistic free-view rendering of scenes.
Recent improvements on the NeRF such as TensoRF and ZipNeRF employ explicit models for faster optimization and rendering.
We show that supervising the depth estimated by a radiance field helps train it effectively with fewer views.
arXiv Detail & Related papers (2024-04-29T18:00:25Z) - Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z) - PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose PNeRFLoc, a novel visual localization framework based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z) - Anisotropic Neural Representation Learning for High-Quality Neural Rendering [0.0]
We propose an anisotropic neural representation learning method that utilizes learnable view-dependent features to improve scene representation and reconstruction.
Our method is flexible and can be plugged into NeRF-based frameworks.
arXiv Detail & Related papers (2023-11-30T07:29:30Z) - Rethinking Directional Integration in Neural Radiance Fields [8.012147983948665]
We introduce a modification to the NeRF rendering equation which is as simple as a few lines of code change for any NeRF variations.
We show that the modified equation can be interpreted as light field rendering with learned ray embeddings.
arXiv Detail & Related papers (2023-11-28T18:59:50Z) - TraM-NeRF: Tracing Mirror and Near-Perfect Specular Reflections through Neural Radiance Fields [3.061835990893184]
Implicit representations like Neural Radiance Fields (NeRF) showed impressive results for rendering of complex scenes with fine details.
We present a novel reflection tracing method tailored for the involved volume rendering within NeRF.
We derive efficient strategies for importance sampling and the transmittance computation along rays from only a few samples.
arXiv Detail & Related papers (2023-10-16T17:59:56Z) - Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods struggle with scenes containing reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z) - NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360 captures of objects within large-scale, 3D scenes.
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
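The "parametrization issue" NeRF++ addresses concerns unbounded 360° scenes: it splits space at the unit sphere and maps far-away background points to a unit direction plus an inverse radius, which is bounded in [0, 1]. A simplified sketch of the idea, not the authors' implementation:

```python
import numpy as np

def inverted_sphere_coords(p):
    """Map a point outside the unit sphere to NeRF++-style bounded coordinates.

    A point p with r = ||p|| > 1 is represented as (x', y', z', 1/r), where
    (x', y', z') = p / r is a unit direction and 1/r lies in (0, 1), so the
    background MLP only ever sees bounded inputs.
    """
    r = np.linalg.norm(p, axis=-1, keepdims=True)
    return np.concatenate([p / r, 1.0 / r], axis=-1)

# A point at distance 4 along the z-axis:
print(inverted_sphere_coords(np.array([0.0, 0.0, 4.0])))  # [0.   0.   1.   0.25]
```

Compressing the background this way lets a single MLP cover arbitrarily distant content without wasting capacity on empty space.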
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.