3D Visibility-aware Generalizable Neural Radiance Fields for Interacting
Hands
- URL: http://arxiv.org/abs/2401.00979v1
- Date: Tue, 2 Jan 2024 00:42:06 GMT
- Title: 3D Visibility-aware Generalizable Neural Radiance Fields for Interacting
Hands
- Authors: Xuan Huang, Hanhui Li, Zejun Yang, Zhisheng Wang, Xiaodan Liang
- Abstract summary: Neural radiance fields (NeRFs) are promising 3D representations for scenes, objects, and humans.
This paper proposes a generalizable visibility-aware NeRF framework for interacting hands.
Experiments on the InterHand2.6M dataset demonstrate that our proposed VA-NeRF outperforms conventional NeRFs significantly.
- Score: 51.305421495638434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural radiance fields (NeRFs) are promising 3D representations for scenes,
objects, and humans. However, most existing methods require multi-view inputs
and per-scene training, which limits their real-life applications. Moreover,
current methods focus on single-subject cases, leaving scenes of interacting
hands, which involve severe inter-hand occlusions and challenging view
variations, unsolved. To tackle these issues, this paper proposes a generalizable
visibility-aware NeRF (VA-NeRF) framework for interacting hands. Specifically,
given an image of interacting hands as input, our VA-NeRF first obtains a
mesh-based representation of hands and extracts their corresponding geometric
and textural features. Subsequently, a feature fusion module that exploits the
visibility of query points and mesh vertices is introduced to adaptively merge
features of both hands, enabling the recovery of features in unseen areas.
Additionally, our VA-NeRF is optimized together with a novel discriminator
within an adversarial learning paradigm. In contrast to conventional
discriminators that predict a single real/fake label for the synthesized image,
the proposed discriminator generates a pixel-wise visibility map, providing
fine-grained supervision for unseen areas and encouraging the VA-NeRF to
improve the visual quality of synthesized images. Experiments on the
InterHand2.6M dataset demonstrate that our proposed VA-NeRF outperforms
conventional NeRFs significantly. Project Page:
https://github.com/XuanHuang0/VANeRF
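For illustration, the two mechanisms described above can be sketched concretely. The following is a minimal PyTorch sketch of visibility-weighted feature fusion and the pixel-wise visibility supervision; the module names, shapes, and MLP layout are assumptions for exposition, not the paper's released implementation.

```python
# Illustrative sketch only -- names, shapes, and layers are assumptions,
# not the VA-NeRF reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisibilityAwareFusion(nn.Module):
    """Blend image-aligned and mesh-based features of a query point with a
    predicted visibility score, so occluded points fall back on mesh priors."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Small MLP scoring how visible a query point is in the input view.
        self.visibility_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, pixel_feat: torch.Tensor, vertex_feat: torch.Tensor):
        # pixel_feat:  (N, feat_dim) features sampled from the input image
        # vertex_feat: (N, feat_dim) features from nearby hand-mesh vertices
        v = self.visibility_mlp(torch.cat([pixel_feat, vertex_feat], dim=-1))
        # Visible points trust image features; occluded ones lean on the mesh.
        fused = v * pixel_feat + (1.0 - v) * vertex_feat
        return fused, v

def visibility_map_loss(disc_logits: torch.Tensor, vis_target: torch.Tensor):
    """Pixel-wise supervision: the discriminator outputs a visibility map
    of shape (B, 1, H, W) rather than one real/fake score per image."""
    return F.binary_cross_entropy_with_logits(disc_logits, vis_target)
```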
Related papers
- GMT: Enhancing Generalizable Neural Rendering via Geometry-Driven Multi-Reference Texture Transfer [40.70828307740121]
Novel view synthesis (NVS) aims to generate images at arbitrary viewpoints using multi-view images, and recent insights from neural radiance fields (NeRF) have contributed to remarkable improvements.
G-NeRF still struggles to represent fine details of a specific scene due to the absence of per-scene optimization.
We propose a Geometry-driven Multi-reference Texture transfer network (GMT), a plug-and-play module designed for G-NeRF.
arXiv Detail & Related papers (2024-10-01T13:30:51Z)
- NeRF-DetS: Enhancing Multi-View 3D Object Detection with Sampling-adaptive Network of Continuous NeRF-based Representation [60.47114985993196]
NeRF-Det unifies the tasks of novel view synthesis and 3D perception.
We introduce a novel 3D perception network structure, NeRF-DetS.
NeRF-DetS outperforms the competitive NeRF-Det baseline on the ScanNetV2 dataset.
arXiv Detail & Related papers (2024-04-22T06:59:03Z)
- NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning [63.39461847093663]
We propose NeRF-VPT, an innovative method for novel view synthesis to address these challenges.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
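A minimal sketch of this cascading idea follows; `model.render` and its `prompt` argument are assumed interfaces for illustration, not NeRF-VPT's actual API.

```python
# Hedged sketch of cascading view prompt tuning; the renderer interface
# is an assumption, not NeRF-VPT's released code.
def cascaded_render(model, rays, num_stages: int = 3):
    prompt = None  # the first stage renders without a visual prompt
    rgb = None
    for _ in range(num_stages):
        rgb = model.render(rays, prompt=prompt)  # assumed signature
        prompt = rgb.detach()  # previous RGB output becomes the next prompt
    return rgb
```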
arXiv Detail & Related papers (2024-03-02T22:08:10Z)
- Mask-Based Modeling for Neural Radiance Fields [20.728248301818912]
In this work, we unveil that 3D implicit representation learning can be significantly improved by mask-based modeling.
We propose MRVM-NeRF, a self-supervised pretraining target that predicts complete scene representations from partially masked features along each ray.
With this pretraining target, MRVM-NeRF makes better use of correlations across different points and views as geometry priors.
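As an illustration of the masking step, the sketch below randomly masks per-sample features along each ray so a prediction head can be trained to reconstruct them; the helper name and shapes are assumptions, not the paper's code.

```python
# Illustrative masking step (shapes and the helper name are assumptions).
import torch

def mask_ray_features(ray_feats: torch.Tensor, mask_ratio: float = 0.5):
    """Randomly zero out a fraction of per-sample features along each ray.

    ray_feats: (num_rays, samples_per_ray, feat_dim)
    Returns masked features and the boolean mask (True = masked), so a
    prediction head can be trained to reconstruct the masked entries.
    """
    n_rays, n_samples, _ = ray_feats.shape
    mask = torch.rand(n_rays, n_samples, device=ray_feats.device) < mask_ratio
    masked = ray_feats.masked_fill(mask.unsqueeze(-1), 0.0)
    return masked, mask

# Pretraining objective: reconstruct the original features at masked positions,
# e.g. loss = ((pred - ray_feats) ** 2)[mask].mean()
```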
arXiv Detail & Related papers (2023-04-11T04:12:31Z)
- HandNeRF: Neural Radiance Fields for Animatable Interacting Hands [122.32855646927013]
We propose a novel framework to reconstruct accurate appearance and geometry with neural radiance fields (NeRF) for interacting hands.
We conduct extensive experiments to verify the merits of our proposed HandNeRF and report a series of state-of-the-art results.
arXiv Detail & Related papers (2023-03-24T06:19:19Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
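The distillation loop can be pictured as follows; `render_view`, `refine_view`, and the optimizer are stand-in callables assumed for exposition, not NerfDiff's actual API.

```python
# Hedged sketch of test-time NeRF-guided distillation (interfaces assumed).
import torch
import torch.nn.functional as F

def nerf_guided_distillation(render_view, refine_view, virtual_poses,
                             optimizer, steps: int = 50):
    """Finetune a NeRF on CDM-refined virtual views.

    render_view(pose)  -> differentiable image rendered by the NeRF
    refine_view(image) -> image refined by the 3D-aware diffusion model
    """
    for _ in range(steps):
        loss = 0.0
        for pose in virtual_poses:
            rendered = render_view(pose)
            target = refine_view(rendered).detach()  # refined view as pseudo-GT
            loss = loss + F.mse_loss(rendered, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```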
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
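A worst-case perturbation at one level of the pipeline (here, input coordinates) can be sketched as a PGD-style inner loop; the step sizes and bounds below are illustrative assumptions, not Aug-NeRF's settings.

```python
# Sketch of a worst-case (PGD-style) perturbation on input coordinates;
# hyperparameters are illustrative assumptions.
import torch

def worst_case_perturb(loss_fn, coords: torch.Tensor, eps: float = 1e-2,
                       steps: int = 3, step_size: float = 5e-3):
    """Ascend the training loss within an L-infinity ball of radius eps."""
    delta = torch.zeros_like(coords, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(coords + delta)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()  # move in the loss-increasing direction
            delta.clamp_(-eps, eps)           # keep the perturbation bounded
    return (coords + delta).detach()
```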
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.