Improved Neural Radiance Fields Using Pseudo-depth and Fusion
- URL: http://arxiv.org/abs/2308.03772v1
- Date: Thu, 27 Jul 2023 17:01:01 GMT
- Title: Improved Neural Radiance Fields Using Pseudo-depth and Fusion
- Authors: Jingliang Li, Qiang Zhou, Chaohui Yu, Zhengda Lu, Jun Xiao, Zhibin
Wang, Fan Wang
- Abstract summary: We propose constructing multi-scale encoding volumes and providing multi-scale geometry information to NeRF models.
To make the constructed volumes as close as possible to the surfaces of objects in the scene and the rendered depth more accurate, we propose to perform depth prediction and radiance field reconstruction simultaneously.
- Score: 18.088617888326123
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since the advent of Neural Radiance Fields, novel view synthesis has received
tremendous attention. Existing approaches for generalizable radiance field
reconstruction primarily construct an encoding volume from nearby source
images as an additional input. However, these approaches cannot efficiently
encode the geometric information of real scenes that contain objects and
structures at various scales. In this work, we propose constructing multi-scale encoding
volumes and providing multi-scale geometry information to NeRF models. To make
the constructed volumes as close as possible to the surfaces of objects in the
scene and the rendered depth more accurate, we propose to perform depth
prediction and radiance field reconstruction simultaneously. The predicted
depth map is then used to supervise the rendered depth, narrow the depth range,
and guide point sampling. Finally, the geometric information contained in
point volume features may be inaccurate due to occlusion, lighting, etc. To
this end, we propose enhancing the point volume feature via depth-guided
neighbor feature fusion. Experiments demonstrate the superior performance of
our method in both novel view synthesis and dense geometry modeling without
per-scene optimization.
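As a concrete illustration of the abstract above, here is a minimal PyTorch-style sketch of two of the described uses of the predicted depth map: narrowing the per-ray sampling range and supervising the volume-rendered depth. All function and variable names, the relative band width `delta`, and the L1 form of the loss are illustrative assumptions rather than the authors' implementation; the multi-scale encoding volumes and the depth-guided neighbor feature fusion are not shown.

```python
import torch

def sample_points_near_depth(rays_o, rays_d, pred_depth, n_samples=64,
                             delta=0.1, near=0.1, far=10.0):
    """Sample points along each ray inside a band around the predicted depth.

    rays_o, rays_d: (R, 3) ray origins and unit directions.
    pred_depth:     (R,)   depth from the depth-prediction branch (pseudo-depth).
    delta:          half-width of the band, as a fraction of the predicted depth.
    """
    # Narrow the depth range to [d*(1-delta), d*(1+delta)], clamped to [near, far].
    t_near = torch.clamp(pred_depth * (1.0 - delta), min=near)
    t_far = torch.clamp(pred_depth * (1.0 + delta), max=far)

    # Stratified samples inside the narrowed range.
    steps = torch.linspace(0.0, 1.0, n_samples, device=rays_o.device)          # (S,)
    t_vals = t_near[:, None] + (t_far - t_near)[:, None] * steps[None, :]       # (R, S)
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t_vals[..., None]           # (R, S, 3)
    return pts, t_vals


def render_depth(weights, t_vals):
    """Volume-rendered (expected) depth from per-sample compositing weights."""
    return (weights * t_vals).sum(dim=-1)  # (R,)


def depth_supervision_loss(weights, t_vals, pred_depth):
    """L1 loss between the rendered depth and the predicted (pseudo) depth map."""
    rendered = render_depth(weights, t_vals)
    return torch.abs(rendered - pred_depth).mean()
```

In a training loop, such a depth term would typically be added to the photometric rendering loss with a weighting factor; the factor and schedule are design choices not specified in the abstract.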
Related papers
- Refinement of Monocular Depth Maps via Multi-View Differentiable Rendering [4.717325308876748]
We present a novel approach to generate view consistent and detailed depth maps from a number of posed images.
We leverage advances in monocular depth estimation, which generate topologically complete, but metrically inaccurate depth maps.
Our method is able to generate dense, detailed, high-quality depth maps, also in challenging indoor scenarios, and outperforms state-of-the-art depth reconstruction approaches.
arXiv Detail & Related papers (2024-10-04T18:50:28Z) - DeLiRa: Self-Supervised Depth, Light, and Radiance Fields [32.350984950639656]
Differentiable volumetric rendering is a powerful paradigm for 3D reconstruction and novel view synthesis.
Standard volume rendering approaches struggle with degenerate geometries in the case of limited viewpoint diversity.
In this work, we propose to use the multi-view photometric objective as a geometric regularizer for volumetric rendering.
arXiv Detail & Related papers (2023-04-06T00:16:25Z) - Geo-NI: Geometry-aware Neural Interpolation for Light Field Rendering [57.775678643512435]
We present a Geometry-aware Neural Interpolation (Geo-NI) framework for light field rendering.
By combining the strengths of neural interpolation (NI) and depth image-based rendering (DIBR), the proposed Geo-NI is able to render views with large disparity.
arXiv Detail & Related papers (2022-06-20T12:25:34Z) - GeoNeRF: Generalizing NeRF with Geometry Priors [2.578242050187029]
We present GeoNeRF, a generalizable photorealistic novel view synthesis method based on neural radiance fields.
Our approach consists of two main stages: a geometry reasoner and a synthesis stage.
Experiments show that GeoNeRF outperforms state-of-the-art generalizable neural rendering models on various synthetic and real datasets.
arXiv Detail & Related papers (2021-11-26T15:15:37Z) - Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We produce meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z) - Volume Rendering of Neural Implicit Surfaces [57.802056954935495]
This paper aims to improve geometry representation and reconstruction in neural volume rendering.
We achieve that by modeling the volume density as a function of the geometry.
Applying this new density representation to challenging multi-view scene datasets produces high-quality geometry reconstructions.
arXiv Detail & Related papers (2021-06-22T20:23:16Z) - Light Field Reconstruction Using Convolutional Network on EPI and
Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z) - Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z) - DiverseDepth: Affine-invariant Depth Prediction Using Diverse Data [110.29043712400912]
We present a method for depth estimation with monocular images, which can predict high-quality depth on diverse scenes up to an affine transformation (see the alignment sketch after this list).
Experiments show that our method outperforms previous methods on 8 datasets by a large margin under the zero-shot test setting.
arXiv Detail & Related papers (2020-02-03T05:38:33Z)
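Because DiverseDepth (the last entry above) predicts depth only up to an affine transformation, comparing such a prediction against a metric reference usually starts by solving for the best least-squares scale and shift. The sketch below is a generic illustration of this alignment, not code from any of the papers listed above; the function name and the masking convention are assumptions.

```python
import torch

def align_affine_invariant_depth(pred, target, mask=None, eps=1e-6):
    """Least-squares scale s and shift t so that s*pred + t best matches target.

    pred, target: (H, W) depth maps; mask: optional (H, W) bool of valid pixels.
    Returns the aligned prediction s*pred + t.
    """
    if mask is None:
        mask = torch.ones_like(pred, dtype=torch.bool)
    p = pred[mask]
    g = target[mask]

    # Closed-form solution of the 2x2 normal equations for [s, t].
    a00 = (p * p).sum()
    a01 = p.sum()
    a11 = mask.sum().to(p.dtype)
    b0 = (p * g).sum()
    b1 = g.sum()
    det = a00 * a11 - a01 * a01
    s = (a11 * b0 - a01 * b1) / (det + eps)
    t = (a00 * b1 - a01 * b0) / (det + eps)
    return s * pred + t
```

The aligned prediction can then be compared to the reference with an ordinary L1 or RMSE metric.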