PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for
Single-Image Novel View Synthesis
- URL: http://arxiv.org/abs/2202.04879v1
- Date: Thu, 10 Feb 2022 07:39:47 GMT
- Title: PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for
Single-Image Novel View Synthesis
- Authors: Xianggang Yu, Jiapeng Tang, Yipeng Qin, Chenghong Li, Linchao Bao,
Xiaoguang Han, Shuguang Cui
- Abstract summary: We present PVSeRF, a learning framework that reconstructs neural radiance fields from single-view RGB images.
We propose to incorporate explicit geometry reasoning and combine it with pixel-aligned features for radiance field prediction.
We show that the introduction of such geometry-aware features helps to achieve a better disentanglement between appearance and geometry.
- Score: 52.546998369121354
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present PVSeRF, a learning framework that reconstructs neural radiance
fields from single-view RGB images, for novel view synthesis. Previous
solutions, such as pixelNeRF, rely only on pixel-aligned features and suffer
from feature ambiguity issues. As a result, they struggle with the
disentanglement of geometry and appearance, leading to implausible geometries
and blurry results. To address this challenge, we propose to incorporate
explicit geometry reasoning and combine it with pixel-aligned features for
radiance field prediction. Specifically, in addition to pixel-aligned features,
we further constrain the radiance field learning to be conditioned on i)
voxel-aligned features learned from a coarse volumetric grid and ii) fine
surface-aligned features extracted from a regressed point cloud. We show that
the introduction of such geometry-aware features helps to achieve a better
disentanglement between appearance and geometry, i.e. recovering more accurate
geometries and synthesizing higher quality images of novel views. Extensive
experiments against state-of-the-art methods on ShapeNet benchmarks demonstrate
the superiority of our approach for single-image novel view synthesis.
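
The abstract above describes the core architectural idea: conditioning the radiance-field MLP on three feature streams, each queried at a 3D sample point. As a rough illustration only, the following minimal PyTorch sketch shows one plausible way to combine pixel-aligned (2D projection lookup), voxel-aligned (trilinear grid lookup) and surface-aligned (nearest regressed surface point) features; the module name, feature dimensions and the nearest-neighbour surface pooling are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' code) of a radiance-field MLP
    # conditioned on pixel-, voxel- and surface-aligned features.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointlyAlignedRadianceField(nn.Module):
        def __init__(self, pix_dim=64, vox_dim=32, surf_dim=32, hidden=128):
            super().__init__()
            in_dim = 3 + pix_dim + vox_dim + surf_dim  # xyz + three feature streams
            self.mlp = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 4),  # (r, g, b, sigma)
            )

        def forward(self, x, uv, feat_2d, feat_3d, pc_xyz, pc_feat):
            # x:       (N, 3)  query points, normalized to [-1, 1]^3
            # uv:      (N, 2)  projections of x into the input image, in [-1, 1]
            # feat_2d: (1, C2, H, W)     pixel-aligned feature map (image encoder)
            # feat_3d: (1, C3, D, H, W)  coarse voxel feature grid
            # pc_xyz:  (M, 3)  regressed surface point cloud
            # pc_feat: (M, Cs) per-point surface features
            n = x.shape[0]

            # Pixel-aligned: bilinear lookup at the projected image coordinates.
            f_pix = F.grid_sample(feat_2d, uv.view(1, n, 1, 2),
                                  align_corners=True).view(-1, n).t()

            # Voxel-aligned: trilinear lookup in the coarse volumetric grid.
            f_vox = F.grid_sample(feat_3d, x.view(1, n, 1, 1, 3),
                                  align_corners=True).view(-1, n).t()

            # Surface-aligned: feature of the nearest regressed surface point
            # (a stand-in for however the fine surface features are pooled).
            nn_idx = torch.cdist(x, pc_xyz).argmin(dim=1)
            f_surf = pc_feat[nn_idx]

            out = self.mlp(torch.cat([x, f_pix, f_vox, f_surf], dim=-1))
            rgb, sigma = torch.sigmoid(out[:, :3]), F.relu(out[:, 3])
            return rgb, sigma

    if __name__ == "__main__":
        # Smoke test with random tensors of illustrative shapes.
        model = JointlyAlignedRadianceField()
        rgb, sigma = model(torch.rand(16, 3) * 2 - 1,
                           torch.rand(16, 2) * 2 - 1,
                           torch.randn(1, 64, 32, 32),
                           torch.randn(1, 32, 16, 16, 16),
                           torch.rand(100, 3) * 2 - 1,
                           torch.randn(100, 32))
        print(rgb.shape, sigma.shape)  # torch.Size([16, 3]) torch.Size([16])

On this reading, the voxel and surface streams carry the explicit geometry reasoning, so the MLP no longer has to resolve geometry and appearance from the ambiguous pixel-aligned stream alone.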
Related papers
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method improves on SDF-based methods by a large margin in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- NePF: Neural Photon Field for Single-Stage Inverse Rendering [6.977356702921476]
We present a novel single-stage framework, Neural Photon Field (NePF), to address the ill-posed inverse rendering from multi-view images.
NePF achieves this unification by fully utilizing the physical implication behind the weight function of neural implicit surfaces.
We evaluate our method on both real and synthetic datasets.
arXiv Detail & Related papers (2023-11-20T06:15:46Z)
- NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering [23.482941494283978]
This paper presents a method, namely NeuS-PIR, for recovering relightable neural surfaces from multi-view images or video.
Unlike methods based on NeRF and discrete meshes, our method utilizes implicit neural surface representation to reconstruct high-quality geometry.
Our method enables advanced applications such as relighting, which can be seamlessly integrated with modern graphics engines.
arXiv Detail & Related papers (2023-06-13T09:02:57Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework, Generalizable Model-based Neural Radiance Fields (GM-NeRF), to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- GARF: Geometry-Aware Generalized Neural Radiance Field [47.76524984421343]
We propose Geometry-Aware Generalized Neural Radiance Field (GARF) with a geometry-aware dynamic sampling (GADS) strategy.
Our framework infers unseen scenes at both the pixel scale and the geometry scale from only a few input images.
Experiments on indoor and outdoor datasets show that GARF reduces samples by more than 25%, while improving rendering quality and 3D geometry estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
- Multi-view 3D Reconstruction of a Texture-less Smooth Surface of Unknown Generic Reflectance [86.05191217004415]
Multi-view reconstruction of texture-less objects with unknown surface reflectance is a challenging task.
This paper proposes a simple and robust solution to this problem based on a co-light scanner.
arXiv Detail & Related papers (2021-05-25T01:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.