TexVocab: Texture Vocabulary-conditioned Human Avatars
- URL: http://arxiv.org/abs/2404.00524v1
- Date: Sun, 31 Mar 2024 01:58:04 GMT
- Title: TexVocab: Texture Vocabulary-conditioned Human Avatars
- Authors: Yuxiao Liu, Zhe Li, Yebin Liu, Haoqian Wang
- Abstract summary: TexVocab is a novel avatar representation that constructs a texture vocabulary and associates body poses with texture maps for animation.
Our method is able to create animatable human avatars with detailed and dynamic appearances from RGB videos.
- Score: 42.170169762733835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To adequately utilize the available image evidence in multi-view video-based avatar modeling, we propose TexVocab, a novel avatar representation that constructs a texture vocabulary and associates body poses with texture maps for animation. Given multi-view RGB videos, our method initially back-projects all the available images in the training videos to the posed SMPL surface, producing texture maps in the SMPL UV domain. Then we construct pairs of human poses and texture maps to establish a texture vocabulary for encoding dynamic human appearances under various poses. Unlike the commonly used joint-wise manner, we further design a body-part-wise encoding strategy to learn the structural effects of the kinematic chain. Given a driving pose, we query the pose feature hierarchically by decomposing the pose vector into several body parts and interpolating the texture features for synthesizing fine-grained human dynamics. Overall, our method is able to create animatable human avatars with detailed and dynamic appearances from RGB videos, and the experiments show that our method outperforms state-of-the-art approaches. The project page can be found at https://texvocab.github.io/.
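As a concrete illustration of the hierarchical query described above, the following is a minimal sketch (not the authors' code): the pose vector is split into per-body-part sub-vectors, each part retrieves its nearest key poses from the vocabulary, and the associated texture features are blended with inverse-distance weights. The part grouping, function name, and choice of k are all hypothetical.

```python
import numpy as np

# Hypothetical body-part decomposition of a SMPL-style pose vector
# (joint indices per part are illustrative, not the paper's exact grouping).
BODY_PARTS = {
    "torso": [0, 3, 6, 9],
    "left_arm": [13, 16, 18, 20],
    "right_arm": [14, 17, 19, 21],
    "left_leg": [1, 4, 7, 10],
    "right_leg": [2, 5, 8, 11],
}

def query_texture_vocab(pose, key_poses, key_features, k=3):
    """Blend texture features from the k nearest key poses, per body part.

    pose:         (J, 3) axis-angle joint rotations of the driving pose
    key_poses:    (N, J, 3) poses paired with texture maps in the vocabulary
    key_features: (N, C, H, W) texture features extracted from those maps
    Returns a dict of blended (C, H, W) features, one per body part.
    """
    blended = {}
    for part, joints in BODY_PARTS.items():
        q = pose[joints].reshape(-1)                       # part-wise sub-pose
        keys = key_poses[:, joints].reshape(len(key_poses), -1)
        d = np.linalg.norm(keys - q, axis=1)               # distance to each key
        idx = np.argsort(d)[:k]                            # k nearest key poses
        w = 1.0 / (d[idx] + 1e-6)
        w /= w.sum()                                       # inverse-distance weights
        blended[part] = np.tensordot(w, key_features[idx], axes=1)
    return blended
```

In the full method these part-wise features would go on to condition an appearance decoder; only the vocabulary lookup and blending step is sketched here.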
Related papers
- Democratizing the Creation of Animatable Facial Avatars [2.1740466069378597]
We propose a novel pipeline for obtaining geometry and texture without using a light stage or any other high-end hardware.
A key novel idea consists of warping real-world images to align with the geometry of a template avatar.
Not only can our method be used to obtain a neutral expression geometry and de-lit texture, but it can also be used to improve avatars after they have been imported into an animation system.
arXiv Detail & Related papers (2024-01-29T20:14:40Z)
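A rough sketch of the image-warping idea in the entry above: landmark correspondences between a real photo and the template avatar can drive a piecewise-affine warp, here via scikit-image. The function and argument names are hypothetical, and the actual pipeline is more involved.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def align_photo_to_template(photo, photo_landmarks, template_landmarks, shape):
    """Warp a real-world photo so its landmarks align with a template avatar's.
    A hypothetical sketch; names and interfaces are illustrative.

    photo:              (H, W, 3) input image
    photo_landmarks:    (L, 2) detected (x, y) facial landmarks in the photo
    template_landmarks: (L, 2) corresponding (x, y) landmarks on the template
    shape:              (H_out, W_out) of the template's texture space
    """
    tform = PiecewiseAffineTransform()
    # estimate(src, dst) fits tform so that tform(src) ~ dst; warp() then
    # samples the photo at tform(output_coords), i.e. template -> photo.
    tform.estimate(template_landmarks, photo_landmarks)
    return warp(photo, tform, output_shape=shape)
```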
- Synthesizing Moving People with 3D Control [88.68284137105654]
We present a diffusion model-based framework for animating people from a single image for a given target 3D motion sequence.
First, we learn an in-filling diffusion model to hallucinate unseen parts of a person given a single image.
Second, we develop a diffusion-based rendering pipeline, which is controlled by 3D human poses.
arXiv Detail & Related papers (2024-01-19T18:59:11Z)
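Both diffusion stages in the entry above reduce, at inference time, to conditional denoising. Below is a generic DDPM-style ancestral sampling loop conditioned on a pose sequence and image features; `denoiser`, its signature, and the noise schedule are assumptions, not the paper's model.

```python
import torch

@torch.no_grad()
def sample_pose_conditioned(denoiser, pose_seq, image_feats, steps=50, size=(3, 256, 256)):
    """Minimal DDPM-style ancestral sampler, conditioned on a target pose
    sequence and features of the single input image. `denoiser` is a
    hypothetical network predicting the noise residual at step t."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(1, *size)                              # start from pure noise
    for t in reversed(range(steps)):
        eps = denoiser(x, t, pose_seq, image_feats)        # predicted noise
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        x = mean + (torch.sqrt(betas[t]) * torch.randn_like(x) if t > 0 else 0)
    return x
```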
- Novel View Synthesis of Humans using Differentiable Rendering [50.57718384229912]
We present a new approach for synthesizing novel views of people in new poses.
Our synthesis makes use of diffuse Gaussian primitives that represent the underlying skeletal structure of a human.
Rendering these primitives results in a high-dimensional latent image, which is then transformed into an RGB image by a decoder network.
arXiv Detail & Related papers (2023-03-28T10:48:33Z)
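A minimal sketch of the rendering step described above: diffuse 2D Gaussians attached to the skeleton are splatted additively into a latent feature image, which a decoder (not shown) would turn into RGB. All names and shapes here are illustrative, not the paper's.

```python
import torch

def splat_gaussians(means_2d, features, sigmas, size=64):
    """Render diffuse 2D Gaussian primitives into a latent feature image.

    means_2d: (N, 2) projected primitive centres in pixel coordinates
    features: (N, C) latent feature per primitive (e.g. per skeleton part)
    sigmas:   (N,) isotropic spread of each Gaussian in pixels
    Returns a (C, size, size) latent image for a decoder network to consume.
    """
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float()                  # (size, size, 2)
    d2 = ((grid[None] - means_2d[:, None, None]) ** 2).sum(-1)    # (N, size, size)
    w = torch.exp(-d2 / (2 * sigmas[:, None, None] ** 2))         # Gaussian weights
    return torch.einsum("nhw,nc->chw", w, features)               # additive splat
```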
- Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the rendering enables us to optimize SCARF directly from monocular videos.
We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z)
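A sketch of how a mesh-based body and a radiance field can be composited along camera rays, in the spirit of the hybrid model above: clothing is volume-rendered up to the mesh surface, and the remaining transmittance is filled with the rasterized body colour. This is textbook volume rendering with a mesh backdrop, not SCARF's actual implementation.

```python
import torch

def hybrid_render(ray_samples, clothing_nerf, body_rgb, body_depth, ts):
    """Composite a clothing radiance field in front of a rasterized body mesh.

    ray_samples:   (R, S, 3) sample points along R camera rays
    clothing_nerf: callable -> per-sample density (R, S) and rgb (R, S, 3)
    body_rgb:      (R, 3) mesh colour where each ray hits the body surface
    body_depth:    (R,)   depth of that mesh intersection
    ts:            (R, S) sample depths along each ray
    """
    density, rgb = clothing_nerf(ray_samples)
    density = density * (ts < body_depth[:, None])         # drop samples behind the mesh
    delta = ts[:, 1:] - ts[:, :-1]
    delta = torch.cat([delta, torch.full_like(delta[:, :1], 1e10)], dim=1)
    alpha = 1 - torch.exp(-density * delta)                # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1 - alpha[:, :-1]], dim=1), dim=1)
    w = alpha * trans                                      # volume-rendering weights
    clothing = (w[..., None] * rgb).sum(dim=1)             # accumulated clothing colour
    residual = trans[:, -1] * (1 - alpha[:, -1])           # light reaching the mesh
    return clothing + residual[:, None] * body_rgb
```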
- Human Pose Manipulation and Novel View Synthesis using Differentiable Rendering [46.04980667824064]
We present a new approach for synthesizing novel views of people in new poses.
Our synthesis makes use of diffuse Gaussian primitives that represent the underlying skeletal structure of a human.
Rendering these primitives results in a high-dimensional latent image, which is then transformed into an RGB image by a decoder network.
arXiv Detail & Related papers (2021-11-24T19:00:07Z)
- EgoRenderer: Rendering Human Avatars from Egocentric Camera Images [87.96474006263692]
We present EgoRenderer, a system for rendering full-body neural avatars of a person captured by a wearable, egocentric fisheye camera.
Rendering full-body avatars from such egocentric images comes with unique challenges due to the top-down view and large distortions.
We tackle these challenges by decomposing the rendering process into several steps, including texture synthesis, pose construction, and neural image translation.
arXiv Detail & Related papers (2021-11-24T18:33:02Z)
- Creating and Reenacting Controllable 3D Humans with Differentiable Rendering [3.079885946230076]
This paper proposes a new end-to-end neural rendering architecture to transfer appearance and reenact human actors.
Our method leverages a carefully designed graph convolutional network (GCN) to model the human body manifold structure.
By taking advantage of both differentiable rendering and the 3D parametric model, our method is fully controllable.
arXiv Detail & Related papers (2021-10-22T12:40:09Z)
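The GCN mentioned above can be pictured with a single generic graph-convolution layer over the body-mesh graph; this is a textbook formulation, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MeshGCNLayer(nn.Module):
    """One graph-convolution layer over a body-mesh graph: each vertex
    feature is mixed with the mean of its neighbours' features. A generic
    sketch of the GCN idea, not the paper's design."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)
        self.lin_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (V, in_dim) per-vertex features; adj: (V, V) 0/1 float adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)    # vertex degrees
        neigh = (adj @ x) / deg                            # mean over neighbours
        return torch.relu(self.lin_self(x) + self.lin_neigh(neigh))
```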
- Neural Re-Rendering of Humans from a Single Image [80.53438609047896]
We propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint.
Our algorithm represents body pose and shape as a parametric mesh which can be reconstructed from a single image.
arXiv Detail & Related papers (2021-01-11T18:53:47Z)
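Reposing a parametric body mesh, as the entry above relies on, is conventionally done with linear blend skinning, the mechanism behind SMPL-style models. A generic sketch, not the paper's code:

```python
import numpy as np

def linear_blend_skinning(verts, weights, joint_transforms):
    """Repose a rest-pose body mesh with linear blend skinning.

    verts:            (V, 3) rest-pose vertices
    weights:          (V, J) skinning weights binding vertices to joints
    joint_transforms: (J, 4, 4) rigid transform of each joint for the new pose
    """
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (V, 4)
    # Blend the per-joint transforms by each vertex's skinning weights.
    blended = np.einsum("vj,jab->vab", weights, joint_transforms)     # (V, 4, 4)
    posed = np.einsum("vab,vb->va", blended, homo)                    # apply to verts
    return posed[:, :3]
```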
This list is automatically generated from the titles and abstracts of the papers on this site.