Geo-NI: Geometry-aware Neural Interpolation for Light Field Rendering
- URL: http://arxiv.org/abs/2206.09736v1
- Date: Mon, 20 Jun 2022 12:25:34 GMT
- Title: Geo-NI: Geometry-aware Neural Interpolation for Light Field Rendering
- Authors: Gaochang Wu and Yuemei Zhou and Yebin Liu and Lu Fang and Tianyou Chai
- Abstract summary: We present a Geometry-aware Neural Interpolation (Geo-NI) framework for light field rendering.
By combining the strengths of NI and DIBR, the proposed Geo-NI is able to render views with large disparity.
- Score: 57.775678643512435
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we present a Geometry-aware Neural Interpolation (Geo-NI)
framework for light field rendering. Previous learning-based approaches either
rely on the capability of neural networks to perform direct interpolation,
which we dub Neural Interpolation (NI), or exploit scene geometry for novel
view synthesis, also known as Depth Image-Based Rendering (DIBR). Instead, we
combine the ideas behind these two kinds of approaches by embedding NI within a
novel DIBR pipeline. Specifically, the proposed Geo-NI first performs NI on the
input light field sheared by a set of depth hypotheses. DIBR is then realized
by assigning each sheared light field a cost in a novel reconstruction cost
volume, according to its reconstruction quality under the corresponding depth
hypothesis. The reconstruction cost is interpreted as a blending weight, and
the final output light field is rendered by blending the reconstructed light
fields along the dimension of the depth hypothesis. By combining the strengths
of NI and DIBR, the proposed Geo-NI is able to render views with large
disparity with the help of scene geometry, while also reconstructing
non-Lambertian effects where depth is ambiguous. Extensive experiments on
various datasets demonstrate the superior performance of the proposed
geometry-aware light field rendering framework.
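The shear-then-blend pipeline is compact enough to sketch. Below is a minimal NumPy illustration on a single epipolar-plane image (EPI); `ni_network` is a hypothetical stand-in for the learned Neural Interpolation network, and the scalar per-hypothesis cost is a simplification of the paper's per-pixel reconstruction cost volume, so this is a sketch of the idea rather than the authors' implementation.

```python
import numpy as np

def shear_epi(epi, disparity):
    """Shear E[u, x] -> E[u, x + disparity * (u - u_c)] by linear resampling."""
    n_u, n_x = epi.shape
    u_c = (n_u - 1) / 2.0                         # center view
    xs = np.arange(n_x, dtype=float)
    out = np.zeros_like(epi)
    for u in range(n_u):
        src = xs + disparity * (u - u_c)          # sampling positions
        out[u] = np.interp(src, xs, epi[u], left=0.0, right=0.0)
    return out

def geo_ni(epi_sparse, ni_network, depth_hypotheses, upsample=4):
    """Run NI under each depth hypothesis, then blend by reconstruction cost."""
    recons, costs = [], []
    for d in depth_hypotheses:
        sheared = shear_epi(epi_sparse, d)        # align structures at depth d
        dense = ni_network(sheared, upsample)     # interpolate novel views
        recon = shear_epi(dense, -d / upsample)   # undo the shear (centering offsets glossed over)
        # Cost: how well the reconstruction reproduces the given input views.
        costs.append(np.mean((recon[::upsample] - epi_sparse) ** 2))
        recons.append(recon)
    costs = np.array(costs)
    w = np.exp(-costs / (costs.mean() + 1e-8))    # lower cost -> higher weight
    w /= w.sum()
    return sum(wi * ri for wi, ri in zip(w, recons))

# Example with a trivial stand-in NI operator (row repetition):
dummy_ni = lambda epi, s: np.repeat(epi, s, axis=0)
dense = geo_ni(np.random.rand(5, 64), dummy_ni, depth_hypotheses=[-1.0, 0.0, 1.0])
print(dense.shape)  # (20, 64): 4x more views along the angular axis
```

In the actual framework the cost is a volume over pixels as well as hypotheses, so the blending weights can vary spatially; per the abstract, this is what lets the method retain NI's behavior in depth-ambiguous, non-Lambertian regions.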
Related papers
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method substantially improves the quality of SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- Depth Reconstruction with Neural Signed Distance Fields in Structured Light Systems [15.603880588503355]
We introduce a novel depth estimation technique for multi-frame structured light setups using neural implicit representations of 3D space.
Our approach employs a neural signed distance field (SDF), trained through self-supervised differentiable rendering.
arXiv Detail & Related papers (2024-05-20T13:24:35Z)
- Improved Neural Radiance Fields Using Pseudo-depth and Fusion [18.088617888326123]
We propose constructing multi-scale encoding volumes and providing multi-scale geometry information to NeRF models.
To bring the constructed volumes as close as possible to the surfaces of objects in the scene, and to make the rendered depth more accurate, we propose performing depth prediction and radiance field reconstruction simultaneously.
arXiv Detail & Related papers (2023-07-27T17:01:01Z)
- NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering [23.482941494283978]
This paper presents a method, namely NeuS-PIR, for recovering relightable neural surfaces from multi-view images or video.
Unlike methods based on NeRF and discrete meshes, our method utilizes implicit neural surface representation to reconstruct high-quality geometry.
Our method enables advanced applications such as relighting, which can be seamlessly integrated with modern graphics engines.
arXiv Detail & Related papers (2023-06-13T09:02:57Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- NeILF++: Inter-Reflectable Light Fields for Geometry and Material Estimation [36.09503501647977]
We formulate the lighting of a static scene as one neural incident light field (NeILF) and one outgoing neural radiance field (NeRF).
The proposed method is able to achieve state-of-the-art results in terms of geometry reconstruction quality, material estimation accuracy, and the fidelity of novel view rendering.
arXiv Detail & Related papers (2023-03-30T04:59:48Z)
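For context on the NeILF++ entry above: the incident/outgoing split evaluates the surface rendering equation at each shading point, with the neural incident light field taking the place of a fixed environment map. A sketch of the formulation (notation ours; the paper's exact parameterization may differ):

```latex
% Outgoing radiance at surface point x in direction \omega_o, where
% L_i is the neural incident light field, f the estimated BRDF, n the normal:
L_o(x, \omega_o) = \int_{\Omega} f(x, \omega_o, \omega_i)\,
                   L_i(x, \omega_i)\,(\omega_i \cdot n)\,\mathrm{d}\omega_i
```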
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but reconstruction quality degrades for larger and more complex scenes.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
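As a rough illustration of how the monocular cues in the MonoSDF entry above typically enter the objective (the exact terms and weights here are our assumption, not quoted from the paper): the RGB loss is augmented with depth- and normal-consistency terms against monocular predictions \bar{D} and \bar{N},

```latex
% (w, q) align the scale and shift of the (scale-ambiguous) monocular depth;
% \hat{D}, \hat{N} are depth and normals rendered from the implicit surface.
\mathcal{L} = \mathcal{L}_{\mathrm{rgb}}
  + \lambda_d \sum_{\mathbf{r}} \big\| (w\,\hat{D}(\mathbf{r}) + q) - \bar{D}(\mathbf{r}) \big\|^{2}
  + \lambda_n \sum_{\mathbf{r}} \big\| \hat{N}(\mathbf{r}) - \bar{N}(\mathbf{r}) \big\|_{1}
```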
- PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for Single-Image Novel View Synthesis [52.546998369121354]
We present PVSeRF, a learning framework that reconstructs neural radiance fields from single-view RGB images.
We propose to incorporate explicit geometry reasoning and combine it with pixel-aligned features for radiance field prediction.
We show that the introduction of such geometry-aware features helps to achieve a better disentanglement between appearance and geometry.
arXiv Detail & Related papers (2022-02-10T07:39:47Z)
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)
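Both the EPI-based CNN entry above and the main Geo-NI paper operate on epipolar-plane images; below is a minimal sketch of how an EPI is sliced out of a 4D light field (the axis layout is an assumption for illustration, not taken from either paper):

```python
import numpy as np

def horizontal_epi(lf, v_star, y_star):
    """For a light field L[u, v, y, x], fixing the vertical view index v* and
    image row y* yields a 2D EPI in which each scene point traces a line whose
    slope encodes its disparity (and hence depth)."""
    return lf[:, v_star, y_star, :]

lf = np.random.rand(9, 9, 64, 64)   # 9x9 views of 64x64-pixel images
epi = horizontal_epi(lf, v_star=4, y_star=32)
print(epi.shape)                    # (9, 64): angular axis u by spatial axis x
```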