GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from
Multi-view Images
- URL: http://arxiv.org/abs/2303.13777v1
- Date: Fri, 24 Mar 2023 03:32:02 GMT
- Title: GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from
Multi-view Images
- Authors: Jianchuan Chen, Wentao Yi, Liqian Ma, Xu Jia, Huchuan Lu
- Abstract summary: We introduce an effective framework, Generalizable Model-based Neural Radiance Fields (GM-NeRF), to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
- Score: 79.39247661907397
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we focus on synthesizing high-fidelity novel view images for
arbitrary human performers, given a set of sparse multi-view images. It is a
challenging task due to the large variation in articulated body poses and
heavy self-occlusions. To alleviate this, we introduce an effective framework,
Generalizable Model-based Neural Radiance Fields (GM-NeRF), to synthesize
free-viewpoint images. Specifically, we propose a geometry-guided attention
mechanism that registers appearance codes from multi-view 2D images to a
geometry proxy, which alleviates the misalignment between the inaccurate
geometry prior and pixel space. On top of that, we conduct neural rendering
with partial gradient backpropagation, enabling efficient perceptual
supervision that improves the perceptual quality of the synthesis.
To evaluate our method, we conduct experiments on the synthetic datasets
THuman2.0 and Multi-garment, and the real-world datasets GeneBody and
ZJU-MoCap. The
results demonstrate that our approach outperforms state-of-the-art methods in
terms of novel view synthesis and geometric reconstruction.
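
The geometry-guided attention mechanism is described only at a high level in the abstract. Below is a minimal sketch of one plausible reading, in which features anchored on the geometry proxy (e.g., a posed SMPL mesh) act as attention queries while pixel-aligned features sampled from the source views act as keys and values. The module name, tensor shapes, and residual design are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GeometryGuidedAttention(nn.Module):
    """Hypothetical sketch: register multi-view appearance codes to a
    geometry proxy via cross-attention (not GM-NeRF's exact module)."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, proxy_feats, view_feats):
        # proxy_feats: (B, N_pts, C) features anchored on the geometry proxy.
        # view_feats:  (B, N_views * N_pts, C) pixel-aligned features obtained
        #              by projecting each proxy point into every source view.
        # The proxy queries the multi-view features, so appearance codes get
        # registered to the proxy even when the geometry prior is inaccurate.
        fused, _ = self.attn(query=proxy_feats, key=view_feats, value=view_feats)
        return proxy_feats + fused  # residual keeps the geometric signal intact

# Toy usage with assumed sizes: 2 subjects, 1024 proxy points, 4 source views.
attn = GeometryGuidedAttention(dim=64, heads=4)
proxy = torch.randn(2, 1024, 64)
views = torch.randn(2, 4 * 1024, 64)
out = attn(proxy, views)  # (2, 1024, 64)
```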
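Likewise, "partial gradient backpropagation" is not spelled out in the abstract. A common way to make patch-based perceptual supervision affordable, and one plausible reading of the phrase, is to render every pixel of a patch but carry gradients for only a random subset of rays. The helper below is a hedged sketch under that assumption; render_rays, the keep ratio, and all shapes are hypothetical.

```python
import torch

def render_patch_partial_grad(render_rays, rays, keep_ratio: float = 0.25):
    # rays: (P, D), one ray per patch pixel; render_rays maps (N, D) -> (N, 3).
    # A guess at "partial gradient backpropagation": most rays are rendered
    # under no_grad (no activations stored); a random subset keeps gradients.
    keep = torch.rand(rays.shape[0], device=rays.device) < keep_ratio
    rgb = torch.empty(rays.shape[0], 3, device=rays.device)
    with torch.no_grad():                      # cheap pass, no autograd memory
        rgb[~keep] = render_rays(rays[~keep])
    rgb[keep] = render_rays(rays[keep])        # only these pixels backpropagate
    return rgb  # reshape to (H, W, 3) before a VGG/LPIPS-style perceptual loss
```

The full patch still feeds the perceptual network, so the loss sees coherent image content, while backward-pass memory scales with keep_ratio rather than with the patch size.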
Related papers
- G-NeRF: Geometry-enhanced Novel View Synthesis from Single-View Images [45.66479596827045]
We propose a Geometry-enhanced NeRF (G-NeRF), which seeks to enhance geometry priors through a geometry-guided multi-view synthesis approach.
To tackle the absence of multi-view supervision for single-view images, we design a depth-aware training approach.
arXiv Detail & Related papers (2024-04-11T04:58:18Z)
- Learning Robust Generalizable Radiance Field with Visibility and Feature Augmented Point Representation [7.203073346844801]
This paper introduces a novel paradigm for the generalizable neural radiance field (NeRF).
We propose the first paradigm that constructs a generalizable neural field based on point-based rather than image-based rendering.
Our approach explicitly models visibility using geometric priors and augments it with neural features.
arXiv Detail & Related papers (2024-01-25T17:58:51Z)
- TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis [76.73338151115253]
TriHuman is a novel human-tailored, deformable, and efficient tri-plane representation.
We non-rigidly warp global ray samples into our undeformed tri-plane texture space.
We show how such a tri-plane feature representation can be conditioned on the skeletal motion to account for dynamic appearance and geometry changes.
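(A generic sketch of tri-plane feature sampling appears after this list.)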
arXiv Detail & Related papers (2023-12-08T16:40:38Z)
- HandNeRF: Neural Radiance Fields for Animatable Interacting Hands [122.32855646927013]
We propose a novel framework to reconstruct accurate appearance and geometry with neural radiance fields (NeRF) for interacting hands.
We conduct extensive experiments to verify the merits of our proposed HandNeRF and report a series of state-of-the-art results.
arXiv Detail & Related papers (2023-03-24T06:19:19Z)
- Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis [52.720314035084215]
This work targets using a general deep learning framework to synthesize free-viewpoint images of arbitrary human performers.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show that our method is more robust than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for Single-Image Novel View Synthesis [52.546998369121354]
We present PVSeRF, a learning framework that reconstructs neural radiance fields from single-view RGB images.
We propose to incorporate explicit geometry reasoning and combine it with pixel-aligned features for radiance field prediction.
We show that the introduction of such geometry-aware features helps to achieve a better disentanglement between appearance and geometry.
arXiv Detail & Related papers (2022-02-10T07:39:47Z)
- DD-NeRF: Double-Diffusion Neural Radiance Field as a Generalizable Implicit Body Representation [17.29933848598768]
We present DD-NeRF, a novel generalizable implicit field for representing human body geometry and appearance from arbitrary input views.
Experiments on various datasets show that the proposed approach outperforms previous works in both geometry reconstruction and novel view synthesis quality.
arXiv Detail & Related papers (2021-12-23T07:30:22Z)
- GeoNeRF: Generalizing NeRF with Geometry Priors [2.578242050187029]
We present GeoNeRF, a generalizable photorealistic novel view method based on neural radiance fields.
Our approach consists of two main stages: a geometry reasoner and a renderer.
Experiments show that GeoNeRF outperforms state-of-the-art generalizable neural rendering models on various synthetic and real datasets.
arXiv Detail & Related papers (2021-11-26T15:15:37Z)
- Hierarchy Composition GAN for High-fidelity Image Synthesis [57.32311953820988]
This paper presents an innovative Hierarchical Composition GAN (HIC-GAN).
HIC-GAN incorporates image synthesis in geometry and appearance domains into an end-to-end trainable network.
Experiments on scene text image synthesis, portrait editing and indoor rendering tasks show that the proposed HIC-GAN achieves superior synthesis performance qualitatively and quantitatively.
arXiv Detail & Related papers (2019-05-12T11:11:24Z)
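
The TriHuman entry above describes storing human geometry and appearance on a tri-plane, with ray samples non-rigidly warped into an undeformed texture space before lookup. The sketch below shows only the generic tri-plane lookup itself; the non-rigid warp and skeletal-motion conditioning from that paper are omitted, and the function name and shapes are assumptions, not TriHuman's implementation.

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes, pts):
    # planes: (3, C, H, W) feature maps for the xy, xz, and yz planes.
    # pts:    (N, 3) query points in [-1, 1]^3, assumed already warped into
    #         the undeformed texture space.
    coords = torch.stack([pts[:, [0, 1]],   # project onto the xy plane
                          pts[:, [0, 2]],   # project onto the xz plane
                          pts[:, [1, 2]]])  # project onto the yz plane
    # grid_sample expects sampling grids of shape (B, H_out, W_out, 2).
    feats = F.grid_sample(planes, coords.unsqueeze(2), align_corners=True)
    # feats: (3, C, N, 1) -> sum the three planes' features per point: (N, C)
    return feats.squeeze(-1).sum(dim=0).t()
```

Each 3D point is thus reduced to three bilinear 2D lookups, which is what makes tri-plane representations fast relative to dense 3D feature grids.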