StylePeople: A Generative Model of Fullbody Human Avatars
- URL: http://arxiv.org/abs/2104.08363v1
- Date: Fri, 16 Apr 2021 20:43:11 GMT
- Title: StylePeople: A Generative Model of Fullbody Human Avatars
- Authors: Artur Grigorev, Karim Iskakov, Anastasia Ianina, Renat Bashirov, Ilya
Zakharkin, Alexander Vakhitov, Victor Lempitsky
- Abstract summary: We propose a new type of full-body human avatar that combines a parametric mesh-based body model with a neural texture.
We show that such avatars can successfully model clothing and hair, which usually pose a problem for mesh-based approaches.
We then propose a generative model for such avatars that can be trained from datasets of images and videos of people.
- Score: 59.42166744151461
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a new type of full-body human avatar that combines a
parametric mesh-based body model with a neural texture. We show that with the
help of neural textures, such avatars can successfully model clothing and
hair, which usually pose a problem for mesh-based approaches. We also show how
these avatars can be created from multiple frames of a video using
backpropagation. We then propose a generative model for such avatars that can
be trained from datasets of images and videos of people. The generative model
allows us to sample random avatars as well as to create dressed avatars of
people from one or a few images. The code for the project is available at
saic-violet.github.io/style-people.
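To make the mesh-plus-neural-texture idea concrete, here is a minimal PyTorch sketch: a learnable multi-channel texture is attached to the body mesh, sampled into image space from rasterized UV coordinates, and translated to RGB by a small rendering network; fitting an avatar to video frames then amounts to backpropagating an image loss into both the texture and the network. This illustrates the general technique, not the authors' implementation; the channel counts, network shapes, and the placeholder UV map and target frame are all assumptions.

```python
import torch
import torch.nn as nn

class NeuralTextureAvatar(nn.Module):
    def __init__(self, tex_channels=16, tex_size=512):
        super().__init__()
        # Learnable neural texture, optimized per person by backpropagation.
        self.texture = nn.Parameter(
            torch.randn(1, tex_channels, tex_size, tex_size) * 0.01)
        # Rendering network: translates rasterized texture channels to RGB.
        self.renderer = nn.Sequential(
            nn.Conv2d(tex_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, uv_map):
        # uv_map: (1, H, W, 2) texture coordinates in [-1, 1], obtained by
        # rasterizing the posed body mesh (e.g. with a differentiable
        # rasterizer such as PyTorch3D); grid_sample pulls neural-texture
        # values into image space.
        sampled = torch.nn.functional.grid_sample(
            self.texture, uv_map, align_corners=False)  # (1, C, H, W)
        return self.renderer(sampled)

# Fitting from video: compare renders to ground-truth frames and
# backpropagate the loss into the texture and the rendering network.
avatar = NeuralTextureAvatar()
opt = torch.optim.Adam(avatar.parameters(), lr=1e-3)
uv_map = torch.rand(1, 256, 256, 2) * 2 - 1   # placeholder rasterized UVs
target = torch.rand(1, 3, 256, 256)           # placeholder video frame
opt.zero_grad()
loss = torch.nn.functional.l1_loss(avatar(uv_map), target)
loss.backward()
opt.step()
```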
Related papers
- GenCA: A Text-conditioned Generative Model for Realistic and Drivable Codec Avatars [44.8290935585746]
Photo-realistic and controllable 3D avatars are crucial for various applications such as virtual and mixed reality (VR/MR), telepresence, gaming, and film production.
Traditional methods for avatar creation often involve time-consuming scanning and reconstruction processes for each avatar.
We propose a text-conditioned generative model that can generate photo-realistic facial avatars of diverse identities.
arXiv Detail & Related papers (2024-08-24T21:25:22Z)
- WildAvatar: Web-scale In-the-wild Video Dataset for 3D Avatar Creation [55.85887047136534]
WildAvatar is a web-scale in-the-wild human avatar creation dataset extracted from YouTube.
We evaluate several state-of-the-art avatar creation methods on our dataset, highlighting the unexplored challenges in real-world applications on avatar creation.
arXiv Detail & Related papers (2024-07-02T11:17:48Z)
- AvatarStudio: High-fidelity and Animatable 3D Avatar Creation from Text [71.09533176800707]
AvatarStudio is a coarse-to-fine generative model that generates explicit textured 3D meshes for animatable human avatars.
By effectively leveraging the synergy between the articulated mesh representation and the DensePose-conditional diffusion model, AvatarStudio can create high-quality avatars.
arXiv Detail & Related papers (2023-11-29T18:59:32Z)
- Animatable and Relightable Gaussians for High-fidelity Human Avatar Modeling [47.1427140235414]
We introduce a new avatar representation that leverages powerful 2D CNNs and 3D Gaussian splatting to create high-fidelity avatars.
Our method can create lifelike avatars with dynamic, realistic, generalized and relightable appearances.
arXiv Detail & Related papers (2023-11-27T18:59:04Z)
- Tag-Based Annotation for Avatar Face Creation [2.498487539723264]
We train a model to produce avatars from human images using tag-based annotations.
Our contribution is an application of tag-based annotation to train a model for avatar face creation.
arXiv Detail & Related papers (2023-08-24T08:35:12Z)
- AvatarFusion: Zero-shot Generation of Clothing-Decoupled 3D Avatars Using 2D Diffusion [34.609403685504944]
We present AvatarFusion, a framework for zero-shot text-to-avatar generation.
We use a latent diffusion model to provide pixel-level guidance for generating human-realistic avatars.
We also introduce a novel optimization method, called Pixel-Semantics Difference-Sampling (PS-DS), which semantically separates the generation of body and clothes.
arXiv Detail & Related papers (2023-07-13T02:19:56Z)
- AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control [38.959851274747145]
AvatarCraft is a method for creating a 3D human avatar with a specific identity and artistic style that can be easily animated.
We use diffusion models to guide the learning of geometry and texture for a neural avatar based on a single text prompt.
We make the human avatar animatable by deforming the neural implicit field with an explicit warping field.
arXiv Detail & Related papers (2023-03-30T17:59:59Z)
- Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the rendering enables us to optimize SCARF directly from monocular videos.
We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z)
- AvatarGen: a 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is the first method that enables not only non-rigid human generation with diverse appearance but also full control over poses and viewpoints.
To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space (see the sketch below).
Our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs.
arXiv Detail & Related papers (2022-08-01T01:27:02Z)
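The pose-dependent deformation network mentioned in the AvatarGen entry can be illustrated with a short sketch: an MLP that takes a point in canonical space together with the body pose parameters and predicts a displacement for that point. This is a generic illustration of the technique, not AvatarGen's actual architecture; the pose dimensionality (72, as in SMPL), the hidden sizes, and all names are assumptions.

```python
import torch
import torch.nn as nn

class DeformationNet(nn.Module):
    """Sketch of a pose-conditioned deformation network: predicts an offset
    for each canonical-space point given the body pose (dims illustrative)."""
    def __init__(self, pose_dim=72, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts, pose):
        # pts: (N, 3) points in canonical space; pose: (pose_dim,) parameters.
        pose = pose.unsqueeze(0).expand(pts.shape[0], -1)
        offsets = self.mlp(torch.cat([pts, pose], dim=-1))
        return pts + offsets  # pose-dependently deformed points

# Usage: deform 1024 canonical points under a (zeroed) pose.
net = DeformationNet()
deformed = net(torch.randn(1024, 3), torch.zeros(72))
```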