GHNeRF: Learning Generalizable Human Features with Efficient Neural Radiance Fields
- URL: http://arxiv.org/abs/2404.06246v1
- Date: Tue, 9 Apr 2024 12:11:25 GMT
- Title: GHNeRF: Learning Generalizable Human Features with Efficient Neural Radiance Fields
- Authors: Arnab Dey, Di Yang, Rohith Agaram, Antitza Dantcheva, Andrew I. Comport, Srinath Sridhar, Jean Martinet
- Abstract summary: We introduce GHNeRF, designed to address these limitations by learning the 2D/3D joint locations of human subjects within a NeRF representation.
GHNeRF uses a pre-trained 2D encoder streamlined to extract essential human features from 2D images, which are then incorporated into the NeRF framework.
Our results show that GHNeRF can achieve state-of-the-art results in near real-time.
- Score: 12.958200963257381
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent advances in Neural Radiance Fields (NeRF) have demonstrated promising results in 3D scene representations, including 3D human representations. However, these representations often lack information on the underlying human pose and structure, which is crucial for AR/VR applications and games. In this paper, we introduce a novel approach, termed GHNeRF, designed to address these limitations by learning 2D/3D joint locations of human subjects with NeRF representation. GHNeRF uses a pre-trained 2D encoder streamlined to extract essential human features from 2D images, which are then incorporated into the NeRF framework in order to encode human biomechanic features. This allows our network to simultaneously learn biomechanic features, such as joint locations, along with human geometry and texture. To assess the effectiveness of our method, we conduct a comprehensive comparison with state-of-the-art human NeRF techniques and joint estimation algorithms. Our results show that GHNeRF can achieve state-of-the-art results in near real-time.
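The pipeline described in the abstract (a pre-trained 2D encoder producing pixel-aligned features that condition a NeRF head which jointly predicts density, color, and joint heatmaps) can be sketched roughly as follows. This is a minimal illustrative mock-up, not the paper's implementation: the encoder is replaced by a random stand-in, the network weights are untrained, and all names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mock_2d_encoder(image):
    """Stand-in for a pre-trained 2D encoder (e.g. a CNN backbone):
    maps an H x W x 3 image to an H x W x F feature map."""
    h, w, _ = image.shape
    return rng.standard_normal((h, w, 16))

def pixel_aligned_feature(feat_map, uv):
    """Sample the feature map at the pixel a 3D point projects to
    (nearest-neighbour lookup for simplicity)."""
    u, v = int(uv[0]), int(uv[1])
    return feat_map[v, u]

class JointNeRFHead:
    """Tiny MLP mapping (3D point, pixel-aligned feature) to density,
    RGB color, and per-joint heatmap values, so geometry, appearance,
    and joint locations are predicted jointly."""
    def __init__(self, n_joints=17, feat_dim=16, hidden=64):
        d_in = 3 + feat_dim
        self.w1 = rng.standard_normal((d_in, hidden)) * 0.1
        self.w_sigma = rng.standard_normal((hidden, 1)) * 0.1
        self.w_rgb = rng.standard_normal((hidden, 3)) * 0.1
        self.w_joint = rng.standard_normal((hidden, n_joints)) * 0.1

    def __call__(self, xyz, feat):
        h = np.maximum(np.concatenate([xyz, feat]) @ self.w1, 0.0)  # ReLU
        sigma = np.log1p(np.exp(h @ self.w_sigma))                  # softplus density
        rgb = 1.0 / (1.0 + np.exp(-(h @ self.w_rgb)))               # sigmoid color
        heat = 1.0 / (1.0 + np.exp(-(h @ self.w_joint)))            # joint heatmaps
        return sigma, rgb, heat

image = rng.random((32, 32, 3))
feat_map = mock_2d_encoder(image)
feat = pixel_aligned_feature(feat_map, uv=(10, 20))
head = JointNeRFHead()
sigma, rgb, heat = head(np.array([0.1, 0.2, 0.3]), feat)
print(sigma.shape, rgb.shape, heat.shape)
```

In the actual method the encoder is a real pre-trained backbone and the head is trained with volume rendering; the sketch only shows how a single query point could be conditioned on image features to yield the three output branches.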
Related papers
- HFNeRF: Learning Human Biomechanic Features with Neural Radiance Fields [11.961164199224351]
We introduce HFNeRF: a novel generalizable human feature NeRF aimed at generating human biomechanic features.
We leverage 2D pre-trained foundation models toward learning human features in 3D using neural rendering.
We evaluate HFNeRF in the skeleton estimation task by predicting heatmaps as features.
arXiv Detail & Related papers (2024-04-09T09:23:04Z) - 3D Visibility-aware Generalizable Neural Radiance Fields for Interacting Hands [51.305421495638434]
Neural radiance fields (NeRFs) are promising 3D representations for scenes, objects, and humans.
This paper proposes a generalizable visibility-aware NeRF framework for interacting hands.
Experiments on the Interhand2.6M dataset demonstrate that our proposed VA-NeRF outperforms conventional NeRFs significantly.
arXiv Detail & Related papers (2024-01-02T00:42:06Z) - Deceptive-Human: Prompt-to-NeRF 3D Human Generation with 3D-Consistent Synthetic Images [67.31920821192323]
Deceptive-Human is a novel framework that capitalizes on state-of-the-art control diffusion models (e.g., ControlNet) to generate a high-quality, controllable 3D human NeRF.
Our method is versatile, readily accommodating a text prompt and additional data such as a 3D mesh, poses, and seed images.
The resulting 3D human NeRF model empowers the synthesis of highly photorealistic views from 360-degree perspectives.
arXiv Detail & Related papers (2023-11-27T15:49:41Z) - BeyondPixels: A Comprehensive Review of the Evolution of Neural Radiance Fields [1.1531932979578041]
NeRF, short for Neural Radiance Fields, is a recent innovation that uses AI algorithms to create 3D objects from 2D images.
This survey reviews recent advances in NeRF and categorizes them according to their architectural designs.
arXiv Detail & Related papers (2023-06-05T16:10:21Z) - SHERF: Generalizable Human NeRF from a Single Image [59.10589479808622]
SHERF is the first generalizable Human NeRF model for recovering animatable 3D humans from a single input image.
We propose a bank of 3D-aware hierarchical features, including global, point-level, and pixel-aligned features, to facilitate informative encoding.
arXiv Detail & Related papers (2023-03-22T17:59:12Z) - FeatureNeRF: Learning Generalizable NeRFs by Distilling Foundation Models [21.523836478458524]
Recent works on generalizable NeRFs have shown promising results on novel view synthesis from single or few images.
We propose a novel framework named FeatureNeRF to learn generalizable NeRFs by distilling pre-trained vision models.
Our experiments demonstrate the effectiveness of FeatureNeRF as a generalizable 3D semantic feature extractor.
arXiv Detail & Related papers (2023-03-22T17:57:01Z) - Semantic 3D-aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field [55.431697263581626]
We propose a Compositional Neural Radiance Field (CNeRF) for semantic 3D-aware portrait synthesis and manipulation.
CNeRF divides the image into semantic regions, learns an independent neural radiance field for each region, and finally fuses them to render the complete image.
Compared to state-of-the-art 3D-aware GAN methods, our approach enables fine-grained semantic region manipulation, while maintaining high-quality 3D-consistent synthesis.
arXiv Detail & Related papers (2023-02-03T07:17:46Z) - HyperNeRFGAN: Hypernetwork approach to 3D NeRF GAN [3.479254848034425]
We propose a generative model called HyperNeRFGAN, which uses the hypernetwork paradigm to produce 3D objects represented by NeRF.
Our architecture produces 2D images, but we use a 3D-aware NeRF representation, which forces the model to produce correct 3D objects.
arXiv Detail & Related papers (2023-01-27T10:21:18Z) - DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z) - 3D-Aware Semantic-Guided Generative Model for Human Synthesis [67.86621343494998]
This paper proposes a 3D-aware Semantic-Guided Generative Model (3D-SGAN) for human image synthesis.
Our experiments on the DeepFashion dataset show that 3D-SGAN significantly outperforms the most recent baselines.
arXiv Detail & Related papers (2021-12-02T17:10:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.