Surf-NeRF: Surface Regularised Neural Radiance Fields
- URL: http://arxiv.org/abs/2411.18652v1
- Date: Wed, 27 Nov 2024 03:18:02 GMT
- Title: Surf-NeRF: Surface Regularised Neural Radiance Fields
- Authors: Jack Naylor, Viorela Ila, Donald G. Dansereau
- Abstract summary: We show how curriculum learning of a surface light field model helps a NeRF converge towards a more geometrically accurate scene representation.
Our approach yields improvements of 14.4% to normals on positionally encoded NeRFs and 9.2% on grid-based models.
- Score: 3.830184399033188
- Abstract: Neural Radiance Fields (NeRFs) provide a high fidelity, continuous scene representation that can realistically represent complex behaviour of light. Despite recent works like Ref-NeRF improving geometry through physics-inspired models, the ability for a NeRF to overcome shape-radiance ambiguity and converge to a representation consistent with real geometry remains limited. We demonstrate how curriculum learning of a surface light field model helps a NeRF converge towards a more geometrically accurate scene representation. We introduce four additional regularisation terms to impose geometric smoothness, consistency of normals and a separation of Lambertian and specular appearance at geometry in the scene, conforming to physical models. Our approach yields improvements of 14.4% to normals on positionally encoded NeRFs and 9.2% on grid-based models compared to current reflection-based NeRF variants. This includes a separated view-dependent appearance, conditioning a NeRF to have a geometric representation consistent with the captured scene. We demonstrate compatibility of our method with existing NeRF variants, as a key step in enabling radiance-based representations for geometry critical applications.
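The four regularisation terms are named but not defined in this summary. As a rough, hedged illustration of what normal-based surface regularisers in this family look like, below is a minimal PyTorch sketch of two common ones: a Ref-NeRF-style consistency penalty between density-gradient normals and MLP-predicted normals, and an orientation penalty on back-facing normals. All function names, shapes, and loss weights are illustrative assumptions, not Surf-NeRF's actual implementation.

```python
import torch

def normal_consistency_loss(weights, n_grad, n_pred):
    """Penalise disagreement between density-gradient and predicted normals.

    weights: (R, S) volume-rendering weights per ray sample
    n_grad:  (R, S, 3) unit normals from the negative density gradient
    n_pred:  (R, S, 3) unit normals predicted by the MLP
    """
    return (weights * (n_grad - n_pred).pow(2).sum(-1)).sum(-1).mean()

def orientation_loss(weights, n_pred, view_dirs):
    """Penalise normals that face away from the camera (back-facing geometry).

    view_dirs: (R, 3) unit ray directions pointing from camera into the scene
    """
    cos = (n_pred * view_dirs[:, None, :]).sum(-1)      # n . d per sample
    return (weights * torch.clamp(cos, min=0.0).pow(2)).sum(-1).mean()

# Hypothetical combined regulariser; the 0.1 / 0.01 weights are illustrative.
def surface_regulariser(weights, n_grad, n_pred, view_dirs):
    return (0.1 * normal_consistency_loss(weights, n_grad, n_pred)
            + 0.01 * orientation_loss(weights, n_pred, view_dirs))
```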
Related papers
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method substantially improves on SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g. NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas existing methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we improve on this with a ray-wise weighting factor (sketched below).
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
arXiv Detail & Related papers (2024-06-14T07:54:25Z)
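As a loose illustration of the ray-wise weighting idea, here is a minimal PyTorch sketch of an Eikonal regulariser scaled by a per-ray factor. The weight `ray_w` is a hypothetical stand-in for whatever adaptive factor RaNeuS derives; only the unweighted Eikonal term itself is the standard formulation.

```python
import torch

def weighted_eikonal_loss(sdf_grads, ray_w):
    """Eikonal regulariser with a per-ray weighting factor.

    sdf_grads: (R, S, 3) spatial gradients of the SDF at sampled points
    ray_w:     (R,) per-ray adaptive weight (assumed given; RaNeuS derives its own)
    """
    grad_norm = sdf_grads.norm(dim=-1)             # (R, S) gradient magnitudes
    eikonal = (grad_norm - 1.0).pow(2).mean(-1)    # (R,) per-ray Eikonal error
    return (ray_w * eikonal).mean()                # weighted, averaged over rays
```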
arXiv Detail & Related papers (2024-06-14T07:54:25Z) - NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts reflection rays from these points and traces them through the NeRF representation to render feature vectors (illustrated below).
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
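As an illustrative sketch only: the reflected-ray idea can be approximated by mirroring the view direction about the surface normal and volume-rendering features along that secondary ray. `query_field` below is a hypothetical stand-in for the field query, not NeRF-Casting's actual API, and a unit step size is assumed.

```python
import torch

def reflect(view_dir, normal):
    """Mirror the view direction d about the unit normal n: r = d - 2 (d . n) n.
    d points from the camera into the scene."""
    return view_dir - 2.0 * (view_dir * normal).sum(-1, keepdim=True) * normal

def cast_reflection(points, view_dirs, normals, query_field, t_vals):
    """March reflected rays and volume-render field features along them.

    points:      (N, 3) surface points along the primary ray
    view_dirs:   (N, 3) unit view directions at those points
    normals:     (N, 3) unit surface normals
    t_vals:      (T,) sample distances along the reflected ray
    query_field: hypothetical callable (N, T, 3) -> ((N, T, F) feats, (N, T) density)
    """
    r = reflect(view_dirs, normals)                                   # (N, 3)
    pts = points[:, None, :] + t_vals[None, :, None] * r[:, None, :]  # (N, T, 3)
    feats, density = query_field(pts)
    alpha = 1.0 - torch.exp(-density)                                 # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha[:, :-1]], dim=-1),
        dim=-1)                                                       # transmittance
    w = alpha * trans                                                 # rendering weights
    return (w[..., None] * feats).sum(dim=1)                          # (N, F) feature
```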
arXiv Detail & Related papers (2024-05-23T17:59:57Z) - Dynamic Mesh-Aware Radiance Fields [75.59025151369308]
This paper designs a two-way coupling between mesh and NeRF during rendering and simulation.
We show that a hybrid system approach outperforms alternatives in visual realism for mesh insertion.
arXiv Detail & Related papers (2023-09-08T20:18:18Z) - VDN-NeRF: Resolving Shape-Radiance Ambiguity via View-Dependence
Normalization [12.903147173026968]
VDN-NeRF is a method to train neural radiance fields (NeRFs) for better geometry under non-Lambertian surfaces and dynamic lighting conditions.
We develop a technique that normalizes the view-dependence by distilling invariant information already encoded in the learned NeRFs.
arXiv Detail & Related papers (2023-03-31T11:13:17Z)
- DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z)
- GeCoNeRF: Few-shot Neural Radiance Fields via Geometric Consistency [31.22435282922934]
We present a novel framework to regularize a Neural Radiance Field (NeRF) in a few-shot setting with geometry-aware consistency regularization.
We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models.
arXiv Detail & Related papers (2023-01-26T05:14:12Z)
- NeRF, meet differential geometry! [10.269997499911668]
We show how differential geometry can provide regularization tools for robustly training NeRF-like models.
We show how these tools yield a direct mathematical formalism of previously proposed NeRF variants aimed at improving the performance in challenging conditions.
arXiv Detail & Related papers (2022-06-29T22:45:34Z)
- PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for Single-Image Novel View Synthesis [52.546998369121354]
We present PVSeRF, a learning framework that reconstructs neural radiance fields from single-view RGB images.
We propose to incorporate explicit geometry reasoning and combine it with pixel-aligned features for radiance field prediction.
We show that the introduction of such geometry-aware features helps to achieve a better disentanglement between appearance and geometry.
arXiv Detail & Related papers (2022-02-10T07:39:47Z)
- Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields [40.72851892972173]
We introduce Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance, structured using a collection of spatially varying scene properties.
We show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing.
arXiv Detail & Related papers (2021-12-07T18:58:37Z)