AttriHuman-3D: Editable 3D Human Avatar Generation with Attribute
Decomposition and Indexing
- URL: http://arxiv.org/abs/2312.02209v3
- Date: Tue, 27 Feb 2024 02:47:55 GMT
- Title: AttriHuman-3D: Editable 3D Human Avatar Generation with Attribute
Decomposition and Indexing
- Authors: Fan Yang, Tianyi Chen, Xiaosheng He, Zhongang Cai, Lei Yang, Si Wu,
Guosheng Lin
- Abstract summary: We propose AttriHuman-3D, an editable 3D human generation model.
It generates all attributes in an overall attribute space with six feature planes, which are decomposed and manipulated with different attribute indexes.
Our model provides strong disentanglement between attributes, allows fine-grained image editing, and generates high-quality 3D human avatars.
- Score: 79.38471599977011
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Editable 3D-aware generation, which supports user-interactive editing, has
witnessed rapid development recently. However, existing editable 3D GANs either
fail to achieve high-accuracy local editing or suffer from huge computational
costs. We propose AttriHuman-3D, an editable 3D human generation model that
addresses both problems through attribute decomposition and indexing. The core
idea is to generate all attributes (e.g., human body, hair, and clothes) in an
overall attribute space with six feature
planes, which are then decomposed and manipulated with different attribute
indexes. To precisely extract features of different attributes from the
generated feature planes, we propose a novel attribute indexing method as well
as an orthogonal projection regularization to enhance the disentanglement. We
also introduce a hyper-latent training strategy and an attribute-specific
sampling strategy to avoid style entanglement and misleading penalties from
the discriminator. Our method allows users to interactively edit selected
attributes in the generated 3D human avatars while keeping others fixed. Both
qualitative and quantitative experiments demonstrate that our model provides
strong disentanglement between attributes, allows fine-grained image editing,
and generates high-quality 3D human avatars.
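The abstract names the core mechanism (shared feature planes decomposed by attribute indexes, plus an orthogonal projection regularizer) without implementation details. The PyTorch snippet below is a minimal sketch of what such decomposition, indexing, and an orthogonality penalty could look like; all names (AttributeIndexer, ortho_reg, the dimension constants) are hypothetical stand-ins, not the paper's actual code.

```python
# Toy sketch: attribute decomposition/indexing over shared feature planes.
# Hypothetical reconstruction from the abstract, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_PLANES = 6   # shared feature planes spanning the overall attribute space
NUM_ATTRS = 4    # e.g. body, hair, upper clothes, lower clothes (assumed)
FEAT_DIM = 32    # channels per plane (assumed)
RES = 64         # plane resolution (assumed)

class AttributeIndexer(nn.Module):
    """Decomposes shared feature planes into per-attribute planes.

    Each attribute owns a learned index that soft-selects its slice of the
    shared attribute space (a stand-in for the paper's indexing method)."""
    def __init__(self):
        super().__init__()
        # one learned index vector per (attribute, plane) pair
        self.index = nn.Parameter(torch.randn(NUM_ATTRS, NUM_PLANES, FEAT_DIM))

    def forward(self, planes):
        # planes: (B, NUM_PLANES, FEAT_DIM, RES, RES) from the generator
        w = torch.softmax(self.index, dim=-1)  # soft channel selection
        # weight every channel of every plane by the attribute's index
        # -> per-attribute feature planes of shape (B, A, P, C, H, W)
        return planes.unsqueeze(1) * w[None, :, :, :, None, None]

def ortho_reg(attr_planes):
    """Orthogonal-projection-style regularizer: pushes the flattened feature
    descriptors of different attributes toward mutual orthogonality."""
    b, a = attr_planes.shape[:2]
    desc = F.normalize(attr_planes.reshape(b, a, -1), dim=-1)  # (B, A, D)
    gram = desc @ desc.transpose(1, 2)                          # (B, A, A)
    off_diag = gram - torch.eye(a, device=gram.device)
    return off_diag.pow(2).mean()

planes = torch.randn(2, NUM_PLANES, FEAT_DIM, RES, RES)  # fake generator output
attr_planes = AttributeIndexer()(planes)
loss = ortho_reg(attr_planes)
print(attr_planes.shape, loss.item())
```

A soft, learned index keeps the decomposition differentiable end to end, while the off-diagonal Gram penalty discourages two attributes from claiming the same feature directions, which is one plausible way to encourage the disentanglement the abstract describes.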
Related papers
- Arc2Avatar: Generating Expressive 3D Avatars from a Single Image via ID Guidance [69.9745497000557]
We introduce Arc2Avatar, the first SDS-based method utilizing a human face foundation model as guidance with just a single image as input.
Our avatars maintain a dense correspondence with a human face mesh template, allowing blendshape-based expression generation.
arXiv Detail & Related papers (2025-01-09T17:04:33Z)
- PERSE: Personalized 3D Generative Avatars from A Single Portrait [7.890834685325639]
PERSE is a method for building an animatable personalized generative avatar from a reference portrait.
Our method begins by generating large-scale synthetic 2D video datasets.
We propose a novel pipeline to produce high-quality, photorealistic 2D videos with facial attribute editing.
arXiv Detail & Related papers (2024-12-30T18:59:58Z)
- Generating Editable Head Avatars with 3D Gaussian GANs [57.51487984425395]
Traditional 3D-aware generative adversarial networks (GANs) achieve photorealistic and view-consistent 3D head synthesis.
We propose a novel approach that enhances the editability and animation control of 3D head avatars by incorporating 3D Gaussian Splatting (3DGS) as an explicit 3D representation.
Our approach delivers high-quality 3D-aware synthesis with state-of-the-art controllability.
arXiv Detail & Related papers (2024-12-26T10:10:03Z)
- Efficient 3D-Aware Facial Image Editing via Attribute-Specific Prompt Learning [40.6806832534633]
We propose an efficient, plug-and-play, 3D-aware face editing framework based on attribute-specific prompt learning.
Our proposed framework generates high-quality images with 3D awareness and view consistency while maintaining attribute-specific features.
arXiv Detail & Related papers (2024-06-06T18:01:30Z)
- GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image [89.70322127648349]
We propose a generic avatar editing approach that can be universally applied to various 3DMM-driven volumetric head avatars.
To achieve this goal, we design a novel expression-aware modification generative model that lifts 2D editing from a single image to a consistent 3D modification field.
arXiv Detail & Related papers (2024-04-02T17:58:35Z)
- Exploring Attribute Variations in Style-based GANs using Diffusion Models [48.98081892627042]
We formulate the task of diverse attribute editing by modeling the multidimensional nature of attribute edits.
We capitalize on the disentangled latent spaces of pretrained GANs and train a Denoising Diffusion Probabilistic Model (DDPM) to learn the latent distribution for diverse edits (see the sketch after this list).
arXiv Detail & Related papers (2023-11-27T18:14:03Z)
- Learning Locally Editable Virtual Humans [37.95173373011365]
We propose a novel hybrid representation and end-to-end trainable network architecture to model fully editable neural avatars.
At the core of our work lies a representation that combines the modeling power of neural fields with the ease of use and inherent 3D consistency of skinned meshes.
Our method generates diverse detailed avatars and achieves better model fitting performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-04-28T23:06:17Z)
- DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models [55.71306021041785]
We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars.
We leverage the SMPL model to provide shape and pose guidance for the generation.
We also jointly optimize the losses computed from the full body and from the zoomed-in 3D head to alleviate the common multi-face "Janus" problem.
arXiv Detail & Related papers (2023-04-03T12:11:51Z)
- Enhanced 3DMM Attribute Control via Synthetic Dataset Creation Pipeline [2.4309139330334846]
We develop a novel pipeline for generating paired 3D faces by harnessing the power of GANs.
We then propose an enhanced non-linear 3D conditional attribute controller that increases the precision and diversity of 3D attribute control.
arXiv Detail & Related papers (2020-11-25T15:43:24Z)
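For the "Exploring Attribute Variations in Style-based GANs using Diffusion Models" entry above, the general recipe of fitting a DDPM to a pretrained GAN's latent space can be sketched in a few lines. Everything below (the MLP denoiser, LATENT_DIM, the random stand-in latents) is a hypothetical toy illustrating the standard DDPM training objective, not that paper's pipeline.

```python
# Toy sketch: training a small DDPM over GAN latent codes so that sampling
# from it yields diverse latents (and hence diverse edits). Assumed setup.
import torch
import torch.nn as nn

LATENT_DIM, T = 512, 1000
betas = torch.linspace(1e-4, 0.02, T)          # standard linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(                      # predicts the added noise
    nn.Linear(LATENT_DIM + 1, 512), nn.SiLU(),
    nn.Linear(512, 512), nn.SiLU(),
    nn.Linear(512, LATENT_DIM),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def ddpm_step(w0):
    """One DDPM training step on a batch of latent codes w0: (B, LATENT_DIM)."""
    t = torch.randint(0, T, (w0.size(0),))
    a = alphas_bar[t].unsqueeze(-1)
    eps = torch.randn_like(w0)
    wt = a.sqrt() * w0 + (1 - a).sqrt() * eps  # closed-form forward noising
    t_emb = (t.float() / T).unsqueeze(-1)      # crude timestep conditioning
    loss = (denoiser(torch.cat([wt, t_emb], -1)) - eps).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

w0 = torch.randn(16, LATENT_DIM)               # stand-in for GAN W-space latents
print(ddpm_step(w0))
```

In practice the latents would come from inverting real images into the GAN's latent space, and the denoiser would be conditioned on the attribute being edited; the snippet only shows the core training loop.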
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.