OmniNeRF: Hybriding Omnidirectional Distance and Radiance fields for
Neural Surface Reconstruction
- URL: http://arxiv.org/abs/2209.13433v1
- Date: Tue, 27 Sep 2022 14:39:23 GMT
- Title: OmniNeRF: Hybriding Omnidirectional Distance and Radiance fields for
Neural Surface Reconstruction
- Authors: Jiaming Shen, Bolin Song, Zirui Wu, Yi Xu
- Abstract summary: Ground-breaking research in the neural radiance field (NeRF) has dramatically improved the representation quality of 3D objects.
Some later studies improved NeRF by building truncated signed distance fields (TSDFs) but still suffered from blurred surfaces in 3D reconstruction.
In this work, this surface ambiguity is addressed by proposing OmniNeRF, a novel 3D shape representation.
- Score: 22.994952933576684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D reconstruction from images has wide applications in Virtual Reality and
Autonomous Driving, where the precision requirement is very high.
Ground-breaking research on neural radiance fields (NeRF), which utilize
Multi-Layer Perceptrons, has dramatically improved the representation quality
of 3D objects. Some later studies improved NeRF by building truncated signed
distance fields (TSDFs) but still suffer from blurred surfaces in 3D
reconstruction. In this work, this surface ambiguity is addressed by proposing
a novel 3D shape representation, OmniNeRF. It is based on training a hybrid
implicit field that combines an Omni-directional Distance Field (ODF) with a
neural radiance field, replacing the apparent density in NeRF with
omnidirectional distance information. Moreover, we introduce additional
supervision on the depth map to further improve reconstruction quality. The
proposed method is shown to effectively handle NeRF's defects at the edges of
the reconstructed surface, yielding higher-quality 3D scene reconstructions.
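To make the hybrid representation concrete, here is a minimal PyTorch-style sketch of a field that predicts an omnidirectional distance and a radiance per (point, direction) query, trained with the extra depth supervision mentioned in the abstract. All layer sizes, names, and the loss weighting are illustrative assumptions; the abstract does not specify the exact architecture or parameterization.

```python
# Minimal sketch of a hybrid ODF + radiance field in the spirit of OmniNeRF.
# Architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class HybridODFRadianceField(nn.Module):
    """Maps a 3D point x and a viewing direction d to (distance, rgb).

    Instead of NeRF's view-independent density sigma(x), the field predicts
    the distance from x to the nearest surface along d (an omni-directional
    distance field), alongside the emitted radiance.
    """

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.dist_head = nn.Linear(hidden, 1)  # distance to surface along d
        self.rgb_head = nn.Sequential(nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, x: torch.Tensor, d: torch.Tensor):
        h = self.trunk(torch.cat([x, d], dim=-1))
        return self.dist_head(h).squeeze(-1), self.rgb_head(h)

def training_loss(pred_rgb, gt_rgb, pred_depth, gt_depth, depth_weight=0.1):
    # Photometric loss as in NeRF, plus the depth-map supervision mentioned
    # in the abstract (the weight is an assumed hyperparameter).
    color_loss = ((pred_rgb - gt_rgb) ** 2).mean()
    depth_loss = (pred_depth - gt_depth).abs().mean()
    return color_loss + depth_weight * depth_loss
```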
Related papers
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g. NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas existing methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we improve on this with a ray-wise weighting factor (a minimal sketch of such a weighting follows this entry).
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
arXiv Detail & Related papers (2024-06-14T07:54:25Z)
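The sketch below shows one way a per-ray weight could modulate the standard Eikonal loss on an SDF network, as opposed to a single global lambda. The weighting rule itself is a placeholder assumption, not RaNeuS's actual formula.

```python
# Illustrative ray-wise weighted Eikonal loss: the Eikonal term encourages
# |grad f(x)| = 1 for an SDF f, and each ray contributes with its own weight
# instead of one global coefficient.
import torch

def eikonal_loss_raywise(sdf_net, pts, ray_weights):
    """pts: (R, S, 3) sample points per ray; ray_weights: (R,) per-ray weights."""
    pts = pts.detach().requires_grad_(True)
    f = sdf_net(pts)                                  # (R, S) signed distances
    grad = torch.autograd.grad(f.sum(), pts, create_graph=True)[0]
    per_sample = (grad.norm(dim=-1) - 1.0) ** 2       # deviation from |grad| = 1
    per_ray = per_sample.mean(dim=-1)                 # (R,)
    return (ray_weights * per_ray).mean()
```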
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks (a minimal sketch follows this entry).
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z)
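One simple way to obtain a ground-truth density at query points from a mesh, sketched below, is to convert the mesh's signed distance into a sharp occupancy. The trimesh call is standard; the sigmoid sharpness and density scale are illustrative choices, not the paper's actual supervision signal.

```python
# Sketch: deriving a "ground-truth" density field from a watertight mesh,
# in the spirit of direct mesh supervision.
import numpy as np
import trimesh

def mesh_density(mesh: trimesh.Trimesh, points: np.ndarray,
                 sharpness: float = 50.0, max_density: float = 100.0):
    """points: (N, 3). Returns (N,) densities: high inside, ~0 outside."""
    # trimesh convention: signed distance is positive inside the mesh.
    sd = trimesh.proximity.signed_distance(mesh, points)
    occupancy = 1.0 / (1.0 + np.exp(-sharpness * sd))  # smooth inside-indicator
    return max_density * occupancy

# Usage: supervise a NeRF's density head against mesh_density(mesh, samples).
```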
- Improving Neural Radiance Field using Near-Surface Sampling with Point Cloud Generation [6.506009070668646]
This paper proposes a near-surface sampling framework to improve the rendering quality of NeRF (sketched below).
To obtain depth information for a novel view, the paper proposes a 3D point cloud generation method and a simple method for refining the depth projected from the point cloud.
arXiv Detail & Related papers (2023-10-06T10:55:34Z)
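Given an estimated depth per ray (e.g. projected from a generated point cloud), near-surface sampling concentrates samples in a band around that depth instead of spreading them along the whole ray. A minimal sketch, with the band half-width as an assumed hyperparameter:

```python
# Sketch of near-surface sampling around a per-ray depth estimate.
import torch

def near_surface_samples(depth: torch.Tensor, n_samples: int, eps: float = 0.05):
    """depth: (R,) per-ray depth. Returns (R, n_samples) sorted distances."""
    lo = (depth - eps).clamp(min=0.0).unsqueeze(-1)   # (R, 1) band start
    hi = (depth + eps).unsqueeze(-1)                  # (R, 1) band end
    u = torch.rand(depth.shape[0], n_samples, device=depth.device)
    t = lo + (hi - lo) * u                            # uniform inside the band
    return torch.sort(t, dim=-1).values
```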
- SurfelNeRF: Neural Surfel Radiance Fields for Online Photorealistic Reconstruction of Indoor Scenes [17.711755550841385]
SLAM-based methods can reconstruct 3D scene geometry progressively in real time but cannot render photorealistic results.
While NeRF-based methods produce promising novel view synthesis results, their long offline optimization time and lack of geometric constraints pose challenges to efficiently handling online input.
We introduce SurfelNeRF, a variant of neural radiance field which employs a flexible and scalable neural surfel representation to store geometric attributes and appearance features extracted from input images (a sketch of such a representation follows this entry).
arXiv Detail & Related papers (2023-04-18T13:11:49Z)
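A neural surfel record might look like the sketch below: geometry plus a learned appearance feature per surfel. Field names and sizes are illustrative assumptions, not SurfelNeRF's actual data layout.

```python
# Sketch of a neural surfel: geometric attributes plus an appearance feature
# vector extracted from the input images.
from dataclasses import dataclass
import torch

@dataclass
class NeuralSurfel:
    position: torch.Tensor   # (3,) surfel center in world coordinates
    normal: torch.Tensor     # (3,) unit surface normal
    radius: float            # surfel extent in world units
    feature: torch.Tensor    # (F,) appearance feature from input images

# A scene is then a growing collection of surfels that can be updated online
# as new frames arrive and rendered by a small decoder network.
```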
- DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z)
- NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes [56.31855837632735]
We propose a compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach.
Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices.
arXiv Detail & Related papers (2023-03-16T16:06:03Z)
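For context, the generic baseline recipe for turning a trained radiance field into a triangle mesh is to query density on a regular grid and run marching cubes, as sketched below. This is the common approach, not NeRFMeshing's specific distillation pipeline; the grid bound and density threshold are assumed.

```python
# Generic sketch: mesh extraction from a trained density field via marching
# cubes. density_fn is an assumed callable mapping (N, 3) points to (N,)
# densities.
import numpy as np
from skimage import measure

def extract_mesh(density_fn, resolution=256, bound=1.0, threshold=25.0):
    xs = np.linspace(-bound, bound, resolution, dtype=np.float32)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)  # (r,r,r,3)
    sigma = density_fn(grid.reshape(-1, 3)).reshape((resolution,) * 3)
    verts, faces, _, _ = measure.marching_cubes(sigma, level=threshold)
    verts = verts / (resolution - 1) * (2 * bound) - bound  # voxel -> world
    return verts, faces
```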
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
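The distillation scheme described above can be sketched as an alternating loop: render virtual views from the current NeRF, refine them with the conditional diffusion model (CDM), and finetune the NeRF on the refined views. All method names below (render, refine) are hypothetical placeholders, not NerfDiff's actual API.

```python
# Sketch of NeRF-guided distillation from a 3D-aware CDM. nerf.render and
# cdm.refine are assumed interfaces for illustration only.
import torch

def distill(nerf, cdm, input_image, virtual_poses, steps=1000, lr=1e-3):
    opt = torch.optim.Adam(nerf.parameters(), lr=lr)
    for _ in range(steps):
        pose = virtual_poses[torch.randint(len(virtual_poses), (1,)).item()]
        rendered = nerf.render(pose)              # current NeRF virtual view
        with torch.no_grad():
            # CDM improves the rendered view while conditioning on the input.
            target = cdm.refine(rendered.detach(), input_image, pose)
        loss = ((rendered - target) ** 2).mean()  # finetune NeRF on refined view
        opt.zero_grad()
        loss.backward()
        opt.step()
```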
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
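The differentiable rendering formulation referred to above is NeRF's standard volume rendering quadrature: per-sample colors are alpha-composited using densities and segment lengths. A minimal sketch of that published formulation:

```python
# Standard NeRF volume rendering quadrature (minimal sketch).
import torch

def composite(sigma, rgb, t):
    """sigma: (R, S) densities; rgb: (R, S, 3); t: (R, S) sorted distances."""
    delta = t[..., 1:] - t[..., :-1]                     # segment lengths
    delta = torch.cat([delta, torch.full_like(delta[..., :1], 1e10)], dim=-1)
    alpha = 1.0 - torch.exp(-sigma * delta)              # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)   # transmittance
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)
    weights = alpha * trans                              # (R, S)
    color = (weights.unsqueeze(-1) * rgb).sum(dim=-2)    # (R, 3) ray color
    depth = (weights * t).sum(dim=-1)                    # expected ray depth
    return color, depth, weights
```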
- UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction [61.17219252031391]
We present a novel method for reconstructing surfaces from multi-view images using neural implicit 3D representations.
Our key insight is that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering.
Our experiments demonstrate that we outperform NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
arXiv Detail & Related papers (2021-04-20T15:59:38Z)
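The unified view described above can be sketched as follows: if the network predicts an occupancy o(x) in [0, 1], volume rendering uses o directly as per-sample alpha, and the surface is the first point along the ray where o crosses a level (e.g. 0.5). This is a high-level sketch of the idea, not UNISURF's exact implementation, which locates the surface by root finding.

```python
# Sketch: occupancy-based rendering that supports both volume compositing and
# surface extraction from the same field.
import torch

def render_occupancy(occ, rgb):
    """occ: (R, S) occupancies in [0, 1]; rgb: (R, S, 3) colors."""
    trans = torch.cumprod(1.0 - occ + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)
    weights = occ * trans                 # occupancy acts as per-sample alpha
    return (weights.unsqueeze(-1) * rgb).sum(dim=-2)

def surface_crossing(occ, t, level=0.5):
    """Distance of the first sample where occupancy crosses `level` per ray.

    Returns t[0] for rays with no crossing; callers should mask those out.
    """
    above = (occ >= level).int()              # (R, S)
    idx = above.argmax(dim=-1)                # index of first crossing
    return t.gather(-1, idx.unsqueeze(-1)).squeeze(-1)
```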