GETAvatar: Generative Textured Meshes for Animatable Human Avatars
- URL: http://arxiv.org/abs/2310.02714v1
- Date: Wed, 4 Oct 2023 10:30:24 GMT
- Title: GETAvatar: Generative Textured Meshes for Animatable Human Avatars
- Authors: Xuanmeng Zhang, Jianfeng Zhang, Rohan Chacko, Hongyi Xu, Guoxian Song,
Yi Yang, Jiashi Feng
- Abstract summary: We study the problem of 3D-aware full-body human generation, aiming at creating animatable human avatars with high-quality geometries and textures.
We propose GETAvatar, a Generative model that directly generates Explicit Textured 3D meshes for animatable human Avatars.
- Score: 69.56959932421057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of 3D-aware full-body human generation, aiming at
creating animatable human avatars with high-quality textures and geometries.
Generally, two challenges remain in this field: i) existing methods struggle to
generate geometries with rich realistic details such as the wrinkles of
garments; ii) they typically utilize volumetric radiance fields and neural
renderers in the synthesis process, making high-resolution rendering
non-trivial. To overcome these problems, we propose GETAvatar, a Generative
model that directly generates Explicit Textured 3D meshes for animatable human
Avatar, with photo-realistic appearance and fine geometric details.
Specifically, we first design an articulated 3D human representation with
explicit surface modeling, and enrich the generated humans with realistic
surface details by learning from the 2D normal maps of 3D scan data. Second,
with the explicit mesh representation, we can use a rasterization-based
renderer to perform surface rendering, allowing us to achieve high-resolution
image generation efficiently. Extensive experiments demonstrate that GETAvatar
achieves state-of-the-art performance on 3D-aware human generation both in
appearance and geometry quality. Notably, GETAvatar can generate images at
512x512 resolution at 17 FPS and at 1024x1024 resolution at 14 FPS, improving
upon previous methods by 2x. Our code and models will be made available.
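To make the rendering claim concrete, below is a minimal sketch of rasterization-based surface rendering of an explicit textured mesh. It is an illustration under stated assumptions, not the GETAvatar implementation: nvdiffrast is one common differentiable rasterizer chosen here for the example, and all tensor names and shapes (verts_clip, faces, uvs, uv_faces, normals, texture) are hypothetical placeholders.
```python
# Illustrative sketch only (not the GETAvatar code release). It shows how a
# rasterization-based renderer can turn an explicit textured mesh into an RGB
# image and a 2D normal map at a target resolution.
import torch
import nvdiffrast.torch as dr

def render_textured_mesh(verts_clip, faces, uvs, uv_faces, normals, texture,
                         resolution=1024):
    """Rasterize a posed textured mesh.

    verts_clip: (1, V, 4) vertex positions in clip space (MVP already applied).
    faces:      (F, 3) int32 triangle indices into the vertex list.
    uvs:        (1, T, 2) texture coordinates in [0, 1].
    uv_faces:   (F, 3) int32 triangle indices into the UV list.
    normals:    (1, V, 3) per-vertex normals.
    texture:    (1, H, W, 3) RGB texture map.
    Returns (rgb, normal_map, mask), each (1, resolution, resolution, C).
    """
    glctx = dr.RasterizeCudaContext()

    # Rasterize once: per-pixel barycentric coordinates and triangle ids.
    rast, rast_db = dr.rasterize(glctx, verts_clip, faces,
                                 resolution=[resolution, resolution])

    # Interpolate UVs across pixels and sample the texture map.
    uv_pix, uv_da = dr.interpolate(uvs, rast, uv_faces, rast_db=rast_db,
                                   diff_attrs='all')
    rgb = dr.texture(texture, uv_pix, uv_da, filter_mode='linear-mipmap-linear')

    # Interpolate per-vertex normals into a 2D normal map; normal maps of this
    # kind are the signal the paper uses to supervise fine surface detail.
    normal_map, _ = dr.interpolate(normals, rast, faces)
    normal_map = torch.nn.functional.normalize(normal_map, dim=-1)

    # Antialias silhouettes so gradients also reach vertex positions, and mask
    # background pixels (the triangle-id channel is 0 where no triangle was hit).
    rgb = dr.antialias(rgb, rast, verts_clip, faces)
    mask = (rast[..., 3:] > 0).float()
    return rgb * mask, normal_map * mask, mask
```
The design point this sketch reflects: shading only touches surface points found by the rasterizer, so per-image cost scales with output resolution rather than with samples along every camera ray, which is why surface rendering stays efficient at 512x512 and 1024x1024 where volumetric radiance fields become expensive.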
Related papers
- UV Gaussians: Joint Learning of Mesh Deformation and Gaussian Textures for Human Avatar Modeling [71.87807614875497]
We propose UV Gaussians, which models the 3D human body by jointly learning mesh deformations and 2D UV-space Gaussian textures.
We collect and process a new dataset of human motion, which includes multi-view images, scanned models, parametric model registration, and corresponding texture maps. Experimental results demonstrate that our method achieves state-of-the-art synthesis of novel views and novel poses.
arXiv Detail & Related papers (2024-03-18T09:03:56Z)
- En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z)
- HumanGen: Generating Human Radiance Fields with Explicit Priors [19.5166920467636]
HumanGen is a novel 3D human generation scheme with detailed geometry and realistic free-view rendering.
It explicitly marries 3D human generation with various priors from the 2D generator and 3D reconstructor of humans through the design of an "anchor image".
arXiv Detail & Related papers (2022-12-10T15:27:48Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- 3D-Aware Semantic-Guided Generative Model for Human Synthesis [67.86621343494998]
This paper proposes a 3D-aware Semantic-Guided Generative Model (3D-SGAN) for human image synthesis.
Our experiments on the DeepFashion dataset show that 3D-SGAN significantly outperforms the most recent baselines.
arXiv Detail & Related papers (2021-12-02T17:10:53Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild" [105.28776215113352]
AvatarMe is the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail.
It outperforms existing methods by a significant margin and reconstructs authentic, 4K by 6K-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2020-03-30T22:17:54Z)