Improving Neural Radiance Fields with Depth-aware Optimization for Novel View Synthesis
- URL: http://arxiv.org/abs/2304.05218v2
- Date: Tue, 20 Feb 2024 02:23:46 GMT
- Title: Improving Neural Radiance Fields with Depth-aware Optimization for Novel View Synthesis
- Authors: Shu Chen, Junyao Li, Yang Zhang, and Beiji Zou
- Abstract summary: We propose SfMNeRF, a method to better synthesize novel views as well as reconstruct the 3D-scene geometry.
SfMNeRF employs the epipolar, photometric consistency, depth smoothness, and position-of-matches constraints to explicitly reconstruct the 3D-scene structure.
Experiments on two public datasets demonstrate that SfMNeRF surpasses state-of-the-art approaches.
- Score: 12.3338393483795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With dense inputs, Neural Radiance Fields (NeRF) can render
photo-realistic novel views under static conditions. Although the synthesis
quality is excellent, existing NeRF-based methods fail to recover even
moderately accurate three-dimensional (3D) structure. With sparse inputs,
novel view synthesis quality drops dramatically because the implicitly
reconstructed 3D-scene structure is inaccurate. We propose SfMNeRF, a method
that both synthesizes better novel views and reconstructs the 3D-scene
geometry. SfMNeRF leverages knowledge from self-supervised depth estimation
to constrain the 3D-scene geometry during view-synthesis training.
Specifically, SfMNeRF employs epipolar, photometric-consistency,
depth-smoothness, and position-of-matches constraints to explicitly
reconstruct the 3D-scene structure. Through these explicit constraints and
the implicit constraint from NeRF, our method simultaneously improves both
the view synthesis and the 3D-scene geometry of NeRF. In addition, SfMNeRF
synthesizes novel sub-pixels, whose ground-truth colors are obtained by image
interpolation; this strategy lets SfMNeRF train on more samples and improves
generalization. Experiments on two public datasets demonstrate that SfMNeRF
surpasses state-of-the-art approaches. Code is available at
https://github.com/XTU-PR-LAB/SfMNeRF
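The abstract's combination of explicit geometric constraints with NeRF's implicit photometric loss, and its sub-pixel supervision via image interpolation, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the loss weights, the residual inputs (epipolar_residual, match_residual), and all function names here are hypothetical stand-ins for quantities that would come from SfM feature correspondences and the rendering pipeline.

```python
import torch
import torch.nn.functional as F

# Hypothetical loss weights; the paper's actual values may differ.
W_PHOTO, W_SMOOTH, W_EPI, W_MATCH = 1.0, 0.1, 0.05, 0.05

def depth_smoothness_loss(depth, image):
    """Edge-aware first-order depth smoothness, in the style of
    self-supervised depth estimation methods.
    depth: (B, H, W); image: (B, 3, H, W)."""
    d_dx = torch.abs(depth[:, :, :-1] - depth[:, :, 1:])
    d_dy = torch.abs(depth[:, :-1, :] - depth[:, 1:, :])
    i_dx = torch.mean(torch.abs(image[:, :, :, :-1] - image[:, :, :, 1:]), dim=1)
    i_dy = torch.mean(torch.abs(image[:, :, :-1, :] - image[:, :, 1:, :]), dim=1)
    # Down-weight the smoothness penalty across strong image edges.
    return (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()

def subpixel_ground_truth(image, uv):
    """Bilinearly interpolate ground-truth colors at sub-pixel
    coordinates, sketching the interpolation-based supervision idea.
    image: (B, 3, H, W); uv: (B, N, 2) in [-1, 1]. Returns (B, N, 3)."""
    grid = uv.unsqueeze(2)                                   # (B, N, 1, 2)
    rgb = F.grid_sample(image, grid, mode="bilinear", align_corners=True)
    return rgb.squeeze(-1).permute(0, 2, 1)

def total_loss(rendered_rgb, gt_rgb, depth, image,
               epipolar_residual, match_residual):
    """Combine NeRF's implicit reconstruction loss with the explicit
    geometric terms named in the abstract. The residual tensors stand in
    for per-correspondence distances derived from SfM matches."""
    l_photo = F.mse_loss(rendered_rgb, gt_rgb)        # photometric consistency
    l_smooth = depth_smoothness_loss(depth, image)    # depth smoothness
    l_epi = epipolar_residual.abs().mean()            # epipolar constraint
    l_match = match_residual.abs().mean()             # position-of-matches
    return (W_PHOTO * l_photo + W_SMOOTH * l_smooth
            + W_EPI * l_epi + W_MATCH * l_match)
```

In training, rays could be cast through the interpolated sub-pixel locations and their rendered colors compared against subpixel_ground_truth, enlarging the effective set of supervised samples as the abstract describes.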
Related papers
- Denoising Diffusion via Image-Based Rendering [54.20828696348574]
We introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes.
First, we introduce a new neural scene representation, IB-planes, that can efficiently and accurately represent large 3D scenes.
Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images.
arXiv Detail & Related papers (2024-02-05T19:00:45Z)
- Re-Nerfing: Improving Novel View Synthesis through Novel View Synthesis [80.3686833921072]
Recent neural rendering and reconstruction techniques, such as NeRFs or Gaussian Splatting, have shown remarkable novel view synthesis capabilities.
With fewer images available, these methods start to fail since they can no longer correctly triangulate the underlying 3D geometry.
We propose Re-Nerfing, a simple and general add-on approach that leverages novel view synthesis itself to tackle this problem.
arXiv Detail & Related papers (2023-12-04T18:56:08Z)
- NeRF-Det: Learning Geometry-Aware Volumetric Representation for Multi-View 3D Object Detection [65.02633277884911]
We present NeRF-Det, a novel method for indoor 3D detection with posed RGB images as input.
Our method makes use of NeRF in an end-to-end manner to explicitly estimate 3D geometry, thereby improving 3D detection performance.
arXiv Detail & Related papers (2023-07-27T04:36:16Z)
- NeRF synthesis with shading guidance [16.115903198836698]
We propose a new task, NeRF synthesis, which utilizes the structural content of a NeRF patch to construct a new, larger radiance field.
We have demonstrated that our method can generate high-quality results with consistent geometry and appearance, even for scenes with complex lighting.
arXiv Detail & Related papers (2023-06-20T14:18:20Z)
- NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes [56.31855837632735]
We propose a compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach.
Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices.
arXiv Detail & Related papers (2023-03-16T16:06:03Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- Semantic 3D-aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field [55.431697263581626]
We propose a Compositional Neural Radiance Field (CNeRF) for semantic 3D-aware portrait synthesis and manipulation.
CNeRF divides the image into semantic regions, learns an independent neural radiance field for each region, and finally fuses them to render the complete image.
Compared to state-of-the-art 3D-aware GAN methods, our approach enables fine-grained semantic region manipulation, while maintaining high-quality 3D-consistent synthesis.
arXiv Detail & Related papers (2023-02-03T07:17:46Z)
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
- StructNeRF: Neural Radiance Fields for Indoor Scenes with Structural Hints [23.15914545835831]
StructNeRF addresses novel view synthesis for indoor scenes with sparse inputs.
Our method improves both the geometry and the view synthesis performance of NeRF without any additional training on external data.
arXiv Detail & Related papers (2022-09-12T14:33:27Z)
- 3D-aware Image Synthesis via Learning Structural and Textural Representations [39.681030539374994]
We propose VolumeGAN, for high-fidelity 3D-aware image synthesis, through explicitly learning a structural representation and a textural representation.
Our approach achieves significantly higher image quality and better 3D control than previous methods.
arXiv Detail & Related papers (2021-12-20T18:59:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.