Semantic-Preserved Point-based Human Avatar
- URL: http://arxiv.org/abs/2311.11614v1
- Date: Mon, 20 Nov 2023 08:56:51 GMT
- Title: Semantic-Preserved Point-based Human Avatar
- Authors: Lixiang Lin, Jianke Zhu
- Abstract summary: We present the first point-based human avatar model that captures the full expressive range of digital humans.
We propose a novel method to transfer semantic information from the SMPL-X model to the points, which enables a better understanding of human body movements.
- Score: 15.017308063001366
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To enable realistic experiences in AR/VR and digital
entertainment, we present the first point-based human avatar model that
embodies the entire expressive range of digital humans. We employ two MLPs
to model pose-dependent deformation and linear blend skinning (LBS)
weights. Appearance is represented by a decoder together with features
attached to each point. In contrast to alternative implicit approaches,
the oriented point representation not only provides a more intuitive way
to model human avatar animation but also significantly reduces both
training and inference time. Moreover, we propose a novel method to
transfer semantic information from the SMPL-X model to the points, which
enables a better understanding of human body movements. By leveraging the
semantic information of points, we can facilitate virtual try-on and human
avatar composition by exchanging points of the same category across
different subjects. Experimental results demonstrate the efficacy of the
presented method.
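The animation pipeline the abstract describes (one MLP for pose-dependent deformation, one for LBS weights, followed by standard linear blend skinning of the points) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the layer widths, point count, and the use of SMPL-X's 55 joints with an axis-angle pose vector are all assumed, illustrative choices.

```python
# Minimal sketch (not the paper's code) of a point-based avatar:
# two MLPs predict per-point pose-dependent offsets and LBS weights,
# and the points are posed by blending per-joint rigid transforms.
import torch
import torch.nn as nn

NUM_JOINTS = 55  # SMPL-X joint count (body + hands + face); assumed here

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

class PointAvatar(nn.Module):
    def __init__(self, num_points, pose_dim=NUM_JOINTS * 3):
        super().__init__()
        # Point positions in canonical space (optimized during training).
        self.canonical_xyz = nn.Parameter(torch.randn(num_points, 3) * 0.1)
        # MLP 1: pose-dependent, per-point deformation in canonical space.
        self.deform_mlp = mlp(3 + pose_dim, 3)
        # MLP 2: per-point linear blend skinning weights over the joints.
        self.lbs_mlp = mlp(3, NUM_JOINTS)

    def forward(self, pose, joint_transforms):
        """pose: (pose_dim,) axis-angle vector; joint_transforms: (J, 4, 4)."""
        n = self.canonical_xyz.shape[0]
        pose_in = pose.expand(n, -1)
        # Pose-dependent deformation of the canonical points.
        offsets = self.deform_mlp(
            torch.cat([self.canonical_xyz, pose_in], dim=-1))
        deformed = self.canonical_xyz + offsets
        # Skinning weights (softmax keeps them positive, summing to 1).
        w = torch.softmax(self.lbs_mlp(self.canonical_xyz), dim=-1)  # (N, J)
        # Blend per-joint transforms and apply them (standard LBS).
        blended = torch.einsum('nj,jab->nab', w, joint_transforms)   # (N, 4, 4)
        ones = torch.ones(n, 1, device=deformed.device)
        homog = torch.cat([deformed, ones], dim=-1)                  # (N, 4)
        return torch.einsum('nab,nb->na', blended, homog)[:, :3]
```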
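The semantic transfer and try-on can likewise be sketched: each avatar point inherits the part label of its nearest SMPL-X vertex, and try-on then swaps all points of a chosen category between two subjects. The per-vertex labeling `smplx_part_labels` is an assumed input (a part segmentation of the SMPL-X mesh), and the nearest-neighbor assignment is a plausible reading of the transfer step, not a confirmed detail of the paper.

```python
# Hedged sketch of semantic transfer from SMPL-X to avatar points,
# followed by category exchange for virtual try-on.
import torch

def transfer_semantics(points, smplx_vertices, smplx_part_labels):
    """points: (N, 3); smplx_vertices: (V, 3); smplx_part_labels: (V,) int."""
    # Assign every avatar point the label of its nearest SMPL-X vertex.
    dists = torch.cdist(points, smplx_vertices)  # (N, V) pairwise distances
    nearest = dists.argmin(dim=-1)               # (N,) nearest-vertex indices
    return smplx_part_labels[nearest]            # (N,) per-point part labels

def swap_category(points_a, labels_a, points_b, labels_b, category):
    """Compose a new avatar from A, taking one semantic category from B."""
    keep = points_a[labels_a != category]  # A's points outside the category
    take = points_b[labels_b == category]  # B's points inside the category
    return torch.cat([keep, take], dim=0)
```

In practice the labels would be computed once in canonical space and then reused for every pose, since the point-to-body correspondence does not change during animation.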
Related papers
- AvatarGO: Zero-shot 4D Human-Object Interaction Generation and Animation [60.5897687447003]
AvatarGO is a novel framework designed to generate realistic 4D HOI scenes from textual inputs.
Our framework not only generates coherent compositional motions, but also exhibits greater robustness in handling penetration issues.
As the first attempt to synthesize 4D avatars with object interactions, we hope AvatarGO could open new doors for human-centric 4D content creation.
arXiv Detail & Related papers (2024-10-09T17:58:56Z)
- From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations [107.88375243135579]
Given speech audio, we output multiple possibilities of gestural motion for an individual, including face, body, and hands.
We visualize the generated motion using highly photorealistic avatars that can express crucial nuances in gestures.
Experiments show our model generates appropriate and diverse gestures, outperforming both diffusion- and VQ-only methods.
arXiv Detail & Related papers (2024-01-03T18:55:16Z)
- Deformable 3D Gaussian Splatting for Animatable Human Avatars [50.61374254699761]
We propose a fully explicit approach to construct a digital avatar from as little as a single monocular sequence.
ParDy-Human is an explicit model for realistic dynamic human avatars that requires significantly fewer training views and images.
Our avatar learning requires no additional annotations such as Splat masks, handles variable backgrounds during training, and infers full-resolution images efficiently even on consumer hardware.
arXiv Detail & Related papers (2023-12-22T20:56:46Z)
- GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians [51.46168990249278]
We present an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video.
GaussianAvatar is validated on both the public dataset and our collected dataset.
arXiv Detail & Related papers (2023-12-04T18:55:45Z)
- Physics-based Motion Retargeting from Sparse Inputs [73.94570049637717]
Commercial AR/VR products consist only of a headset and controllers, providing very limited sensor data of the user's pose.
We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.
We show that the avatar poses often match the user surprisingly well, despite having no sensor information of the lower body available.
arXiv Detail & Related papers (2023-07-04T21:57:05Z)
- X-Avatar: Expressive Human Avatars [33.24502928725897]
We present X-Avatar, a novel avatar model that captures the full expressiveness of digital humans to bring about life-like experiences in telepresence, AR/VR and beyond.
Our method models bodies, hands, facial expressions and appearance in a holistic fashion and can be learned from either full 3D scans or RGB-D data.
arXiv Detail & Related papers (2023-03-08T18:59:39Z)
- S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling [103.65625425020129]
We represent the pedestrian's shape, pose and skinning weights as neural implicit functions that are directly learned from data.
We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods.
arXiv Detail & Related papers (2021-01-17T02:16:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.