VDN-NeRF: Resolving Shape-Radiance Ambiguity via View-Dependence
Normalization
- URL: http://arxiv.org/abs/2303.17968v1
- Date: Fri, 31 Mar 2023 11:13:17 GMT
- Title: VDN-NeRF: Resolving Shape-Radiance Ambiguity via View-Dependence
Normalization
- Authors: Bingfan Zhu, Yanchao Yang, Xulong Wang, Youyi Zheng, Leonidas Guibas
- Abstract summary: VDN-NeRF is a method to train neural radiance fields (NeRFs) for better geometry under non-Lambertian surfaces and dynamic lighting conditions.
We develop a technique that normalizes the view-dependence by distilling invariant information already encoded in the learned NeRFs.
- Score: 12.903147173026968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose VDN-NeRF, a method to train neural radiance fields (NeRFs) for
better geometry under non-Lambertian surfaces and dynamic lighting conditions
that cause significant variation in the radiance of a point when viewed from
different angles. Instead of explicitly modeling the underlying factors that
result in the view-dependent phenomenon, which could be complex yet not
inclusive, we develop a simple and effective technique that normalizes the
view-dependence by distilling invariant information already encoded in the
learned NeRFs. We then jointly train NeRFs for view synthesis with
view-dependence normalization to attain quality geometry. Our experiments show
that even though shape-radiance ambiguity is inevitable, the proposed
normalization can minimize its effect on geometry, which essentially aligns the
optimal capacity needed for explaining view-dependent variations. Our method
applies to various baselines and significantly improves geometry without
changing the volume rendering pipeline, even if the data is captured under a
moving light source. Code is available at: https://github.com/BoifZ/VDN-NeRF.
Related papers
- PBR-NeRF: Inverse Rendering with Physics-Based Neural Fields [49.6405458373509]
We present an inverse rendering (IR) model capable of jointly estimating scene geometry, materials, and illumination.
Our method is easily adaptable to other inverse rendering and 3D reconstruction frameworks that require material estimation.
arXiv Detail & Related papers (2024-12-12T19:00:21Z)
- Surf-NeRF: Surface Regularised Neural Radiance Fields [3.830184399033188]
We show how curriculum learning of a surface light field model helps a NeRF converge towards a more geometrically accurate scene representation.
Our approach yields improvements of 14.4% in normal accuracy on positionally encoded NeRFs and 9.2% on grid-based models.
arXiv Detail & Related papers (2024-11-27T03:18:02Z)
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g., NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas existing methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we improve on this with a ray-wise weighting factor.
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
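The "ray-wise weighting factor" suggests a per-ray relaxation of the usual Eikonal penalty on the SDF gradient. A minimal sketch follows; the construction of `ray_weight` is left abstract, since its exact form is the paper's contribution.

```python
# Minimal sketch (assumptions throughout): a per-ray weighted Eikonal
# term, as opposed to a single global weight over all samples.
import torch

def weighted_eikonal_loss(sdf_grad, ray_weight):
    """sdf_grad:   (R, S, 3) SDF gradients at S samples on R rays
    ray_weight: (R,) per-ray weights in [0, 1]
    """
    grad_norm = sdf_grad.norm(dim=-1)        # (R, S)
    eikonal = (grad_norm - 1.0) ** 2         # unit-gradient penalty
    per_ray = eikonal.mean(dim=-1)           # average over samples
    return (ray_weight * per_ray).mean()     # ray-adaptive weighting
```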
arXiv Detail & Related papers (2024-06-14T07:54:25Z)
- PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose PNeRFLoc, a novel visual localization framework based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
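A hedged sketch of the two stages described above, using OpenCV's PnP solver for initialization; `render_at_pose` is a hypothetical stand-in for the point-based renderer, and the finite-difference refinement is only illustrative of rendering-based optimization.

```python
# Minimal sketch (assumptions throughout): 2D-3D matching -> PnP pose,
# then pose refinement against renderings.
import cv2
import numpy as np

def localize(pts3d, pts2d, K, image, render_at_pose, steps=50, lr=1e-4):
    # Stage 1: initial pose from matched 2D-3D feature points.
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64),
        K.astype(np.float64), None)
    assert ok, "PnP failed: not enough inlier matches"
    pose = np.concatenate([rvec.ravel(), tvec.ravel()])  # axis-angle + t

    # Stage 2: rendering-based refinement of the 6-DoF pose by
    # minimizing photometric error (finite differences keep the sketch
    # self-contained; a real system would backpropagate instead).
    def photo_err(p):
        return float(np.mean((render_at_pose(p) - image) ** 2))

    for _ in range(steps):
        grad = np.array([
            (photo_err(pose + e) - photo_err(pose - e)) / 2e-4
            for e in np.eye(6) * 1e-4])
        pose -= lr * grad
    return pose
```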
arXiv Detail & Related papers (2023-12-17T08:30:00Z)
- Rethinking Directional Integration in Neural Radiance Fields [8.012147983948665]
We introduce a modification to the NeRF rendering equation that requires only a few lines of code change for any NeRF variant.
We show that the modified equation can be interpreted as light field rendering with learned ray embeddings.
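For context, here is a sketch of the standard, unmodified NeRF quadrature that such a change would target; the paper's actual modification to how directional information is integrated is not reproduced here.

```python
# Baseline quadrature: C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
# with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j).
import torch

def render_ray(sigma, rgb, delta):
    """sigma: (S,) densities; rgb: (S, 3) colors; delta: (S,) intervals."""
    alpha = 1.0 - torch.exp(-sigma * delta)                     # opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = trans * alpha                                     # (S,)
    return (weights[:, None] * rgb).sum(dim=0)                  # (3,)
```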
arXiv Detail & Related papers (2023-11-28T18:59:50Z)
- NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds [60.1382112938132]
We present NeRFVS, a novel neural radiance fields (NeRF) based method to enable free navigation in a room.
NeRF achieves impressive performance in rendering images for novel views similar to the input views, but suffers on novel views that differ significantly from the training views.
arXiv Detail & Related papers (2023-04-13T06:40:08Z)
- GeCoNeRF: Few-shot Neural Radiance Fields via Geometric Consistency [31.22435282922934]
We present a novel framework to regularize Neural Radiance Field (NeRF) in a few-shot setting with a geometry-aware consistency regularization.
We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models.
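One common form of geometry-aware consistency is reprojection via rendered depth: a novel-view rendering should agree with a seen image warped through the shared geometry. The sketch below shows that generic construction under assumed pinhole conventions; it is not necessarily GeCoNeRF's exact regularizer.

```python
# Minimal sketch (all names and conventions are assumptions).
import torch
import torch.nn.functional as F

def reprojection_consistency(depth, K, T_novel2seen, rgb_novel, rgb_seen):
    """depth:        (H, W) rendered depth at the novel view
    K:            (3, 3) pinhole intrinsics
    T_novel2seen: (4, 4) relative pose from novel to seen camera
    rgb_novel:    (H, W, 3) rendering at the novel view
    rgb_seen:     (H, W, 3) training image at the seen view
    """
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], -1).float()  # (H, W, 3)
    # Backproject to 3D in the novel camera, then move to the seen camera.
    pts = (pix @ torch.inverse(K).T) * depth[..., None]
    pts = pts @ T_novel2seen[:3, :3].T + T_novel2seen[:3, 3]
    uv = pts @ K.T
    uv = uv[..., :2] / uv[..., 2:].clamp(min=1e-6)
    # Normalize to [-1, 1] and sample the seen image at reprojected pixels.
    grid = torch.stack([uv[..., 0] / (W - 1), uv[..., 1] / (H - 1)], -1) * 2 - 1
    warped = F.grid_sample(rgb_seen.permute(2, 0, 1)[None], grid[None],
                           align_corners=True)[0].permute(1, 2, 0)
    return F.mse_loss(warped, rgb_novel)
```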
arXiv Detail & Related papers (2023-01-26T05:14:12Z)
- Estimating Neural Reflectance Field from Radiance Field using Tree Structures [29.431165709718794]
We present a new method for estimating the Neural Reflectance Field (NReF) of an object from a set of posed multi-view images under unknown lighting.
NReF represents the 3D geometry and appearance of objects in a disentangled manner, and is hard to estimate from images alone.
Our method solves this problem by exploiting the Neural Radiance Field (NeRF) as a proxy representation, from which we perform further decomposition.
arXiv Detail & Related papers (2022-10-09T10:21:31Z)
- PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for Single-Image Novel View Synthesis [52.546998369121354]
We present PVSeRF, a learning framework that reconstructs neural radiance fields from single-view RGB images.
We propose to incorporate explicit geometry reasoning and combine it with pixel-aligned features for radiance field prediction.
We show that the introduction of such geometry-aware features helps to achieve a better disentanglement between appearance and geometry.
arXiv Detail & Related papers (2022-02-10T07:39:47Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
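A minimal sketch of ray entropy minimization, assuming the usual definition over normalized volume-rendering weights; the published method adds details (e.g., masking near-empty rays) omitted here.

```python
# Minimizing per-ray entropy encourages density to concentrate on a
# surface rather than spreading along the ray.
import torch

def ray_entropy_loss(weights, eps=1e-10):
    """weights: (R, S) volume-rendering weights for S samples on R rays."""
    p = weights / (weights.sum(dim=-1, keepdim=True) + eps)  # normalize
    entropy = -(p * torch.log(p + eps)).sum(dim=-1)          # per-ray H(p)
    return entropy.mean()
```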
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields [40.72851892972173]
We introduce Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance, structured using a collection of spatially varying scene properties.
We show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing.
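The core reparameterization can be sketched as reflecting the view direction about the predicted normal, so the directional MLP sees a quantity that varies slowly over glossy surfaces; shapes and names below are illustrative.

```python
# Reflection direction: omega_r = 2 (omega_o . n) n - omega_o,
# where omega_o points from the surface point toward the camera.
import torch
import torch.nn.functional as F

def reflect_view_dir(view_dir, normal):
    """view_dir: (N, 3) unit vectors from camera toward the point
    normal:   (N, 3) predicted surface normals
    """
    n = F.normalize(normal, dim=-1)
    omega_o = -view_dir                                  # toward camera
    return 2.0 * (omega_o * n).sum(-1, keepdim=True) * n - omega_o
```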
arXiv Detail & Related papers (2021-12-07T18:58:37Z)