AvatarFusion: Zero-shot Generation of Clothing-Decoupled 3D Avatars Using 2D Diffusion
- URL: http://arxiv.org/abs/2307.06526v2
- Date: Thu, 14 Sep 2023 09:23:18 GMT
- Title: AvatarFusion: Zero-shot Generation of Clothing-Decoupled 3D Avatars Using 2D Diffusion
- Authors: Shuo Huang, Zongxin Yang, Liangting Li, Yi Yang, Jia Jia
- Abstract summary: We present AvatarFusion, a framework for zero-shot text-to-avatar generation.
We use a latent diffusion model to provide pixel-level guidance for generating human-realistic avatars.
We also introduce a novel optimization method, called Pixel-Semantics Difference-Sampling (PS-DS), which semantically separates the generation of body and clothes.
- Score: 34.609403685504944
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale pre-trained vision-language models allow for the zero-shot
text-based generation of 3D avatars. The previous state-of-the-art method
utilized CLIP to supervise neural implicit models that reconstructed a human
body mesh. However, this approach has two limitations. Firstly, the lack of
avatar-specific models can cause facial distortion and unrealistic clothing in
the generated avatars. Secondly, CLIP only provides optimization direction for
the overall appearance, resulting in less impressive results. To address these
limitations, we propose AvatarFusion, the first framework to use a latent
diffusion model to provide pixel-level guidance for generating human-realistic
avatars while simultaneously segmenting clothing from the avatar's body.
AvatarFusion includes the first clothing-decoupled neural implicit avatar model
that employs a novel Dual Volume Rendering strategy to render the decoupled
skin and clothing sub-models in one space. We also introduce a novel
optimization method, called Pixel-Semantics Difference-Sampling (PS-DS), which
semantically separates the generation of body and clothes, and generates a
variety of clothing styles. Moreover, we establish the first benchmark for
zero-shot text-to-avatar generation. Our experimental results demonstrate that
our framework outperforms previous approaches, with significant improvements
observed in all metrics. Additionally, since our model is clothing-decoupled,
we can exchange the clothes of avatars. Code is available on our project page
https://hansenhuang0823.github.io/AvatarFusion.
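The Dual Volume Rendering idea can be pictured as compositing two co-located implicit sub-models (skin and clothing) along the same camera ray. Below is a minimal, illustrative sketch under standard NeRF-style volume-rendering assumptions; the function name, the density-weighted color blend, and all variable names are hypothetical and are not taken from the paper's released implementation.

```python
import numpy as np

def dual_volume_render(sigma_skin, rgb_skin, sigma_cloth, rgb_cloth, deltas):
    """Composite skin and clothing sub-models sampled along one ray.

    sigma_*: (N,) per-sample densities from each sub-model
    rgb_*:   (N, 3) per-sample colors from each sub-model
    deltas:  (N,) distances between adjacent samples along the ray
    """
    # Both sub-models occupy the same space, so their densities add.
    sigma = sigma_skin + sigma_cloth
    # Standard volume-rendering quadrature: opacity, transmittance, weights.
    alpha = 1.0 - np.exp(-sigma * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]
    weights = trans * alpha
    # Blend each sample's color by relative density contribution
    # (an assumption here, not necessarily the paper's exact mixing rule).
    eps = 1e-8
    rgb = (sigma_skin[:, None] * rgb_skin
           + sigma_cloth[:, None] * rgb_cloth) / (sigma[:, None] + eps)
    # Expected color of the ray.
    return (weights[:, None] * rgb).sum(axis=0)
```

Rendering both fields through one shared quadrature like this keeps the body and clothing geometrically consistent while still allowing each sub-model to be optimized or swapped independently.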
Related papers
- Animatable and Relightable Gaussians for High-fidelity Human Avatar Modeling [47.1427140235414]
We introduce a new avatar representation that leverages powerful 2D CNNs and 3D Gaussian splatting to create high-fidelity avatars.
Our method can create lifelike avatars with dynamic, realistic, generalized and relightable appearances.
arXiv Detail & Related papers (2023-11-27T18:59:04Z)
- Learning Disentangled Avatars with Hybrid 3D Representations [102.9632315060652]
We present Disentangled Avatars (DELTA), which models humans with hybrid explicit-implicit 3D representations.
We first consider the disentanglement of the human body and clothing, and second, the disentanglement of the face and hair.
We show how these two applications can be easily combined to model full-body avatars.
arXiv Detail & Related papers (2023-09-12T17:59:36Z)
- AvatarBooth: High-Quality and Customizable 3D Human Avatar Generation [14.062402203105712]
AvatarBooth is a novel method for generating high-quality 3D avatars using text prompts or specific images.
Our key contribution is precise control over avatar generation using dual fine-tuned diffusion models.
We present a multi-resolution rendering strategy that facilitates coarse-to-fine supervision of 3D avatar generation.
arXiv Detail & Related papers (2023-06-16T14:18:51Z)
- DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models [55.71306021041785]
We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars.
We leverage the SMPL model to provide shape and pose guidance for the generation.
We also jointly optimize the losses computed from the full body and from the zoomed-in 3D head to alleviate the common multi-face "Janus" problem.
arXiv Detail & Related papers (2023-04-03T12:11:51Z)
- Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the rendering enables us to optimize SCARF directly from monocular videos.
We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z)
- AvatarGen: a 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is the first method that enables not only non-rigid human generation with diverse appearance but also full control over poses and viewpoints.
To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space.
Our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs.
arXiv Detail & Related papers (2022-08-01T01:27:02Z)
- ICON: Implicit Clothed humans Obtained from Normals [49.5397825300977]
Implicit functions are well suited to the first task, as they can capture details like hair or clothes.
ICON infers detailed clothed-human normals conditioned on the SMPL(-X) normals.
ICON takes a step towards robust 3D clothed human reconstruction from in-the-wild images.
arXiv Detail & Related papers (2021-12-16T18:59:41Z)
- Explicit Clothing Modeling for an Animatable Full-Body Avatar [21.451440299450592]
We build an animatable clothed body avatar with an explicit representation of the clothing on the upper body from multi-view captured videos.
To learn the interaction between the body dynamics and clothing states, we use a temporal convolution network to predict the clothing latent code.
We show photorealistic animation output for three different actors, and demonstrate the advantage of our clothed-body avatars over single-layer avatars.
arXiv Detail & Related papers (2021-06-28T17:58:40Z)
- StylePeople: A Generative Model of Fullbody Human Avatars [59.42166744151461]
We propose a new type of full-body human avatar, which combines a parametric mesh-based body model with a neural texture.
We show that such avatars can successfully model clothing and hair, which usually poses a problem for mesh-based approaches.
We then propose a generative model for such avatars that can be trained from datasets of images and videos of people.
arXiv Detail & Related papers (2021-04-16T20:43:11Z)