MoRF: Mobile Realistic Fullbody Avatars from a Monocular Video
- URL: http://arxiv.org/abs/2303.10275v2
- Date: Mon, 11 Dec 2023 17:00:36 GMT
- Title: MoRF: Mobile Realistic Fullbody Avatars from a Monocular Video
- Authors: Renat Bashirov, Alexey Larionov, Evgeniya Ustinova, Mikhail Sidorenko,
David Svitov, Ilya Zakharkin, Victor Lempitsky
- Abstract summary: We present a system to create Mobile Realistic Fullbody (MoRF) avatars.
MoRF avatars are rendered in real-time on mobile devices, learned from monocular videos, and have high realism.
- Score: 7.648034937040346
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a system to create Mobile Realistic Fullbody (MoRF) avatars. MoRF
avatars are rendered in real-time on mobile devices, learned from monocular
videos, and have high realism. We use SMPL-X as a proxy geometry and render it
with DNR (a neural texture combined with an image-to-image network). We improve
on prior work by overfitting per-frame warping fields in the neural texture
space, which better aligns the training signal across frames. We also refine
the SMPL-X mesh fitting procedure to improve overall avatar quality. In
comparisons to other monocular video-based avatar systems, MoRF avatars achieve
higher image sharpness and temporal consistency. Participants of our user study
also preferred avatars generated by MoRF.
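The abstract describes a DNR-style pipeline: a learned multi-channel neural texture is sampled through per-pixel UV coordinates rasterized from the SMPL-X mesh, with a per-frame warping field applied in texture space before a translation network maps the sampled descriptors to RGB. A minimal illustrative sketch of the sampling-plus-warp step follows; the resolutions, channel counts, function names, and the bilinear sampler are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def bilinear_sample(texture, u, v):
    """Bilinearly sample a texture of shape (H, W, C) at continuous
    coordinates u, v in [0, 1]."""
    H, W, _ = texture.shape
    x = np.clip(u * (W - 1), 0, W - 1)
    y = np.clip(v * (H - 1), 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    top = texture[y0, x0] * (1 - wx)[..., None] + texture[y0, x1] * wx[..., None]
    bot = texture[y1, x0] * (1 - wx)[..., None] + texture[y1, x1] * wx[..., None]
    return top * (1 - wy)[..., None] + bot * wy[..., None]

def render_neural_descriptors(neural_texture, uv_map, warp_field=None):
    """Sketch of the DNR texture lookup with optional texture-space warping.

    neural_texture: (Ht, Wt, C) learned descriptors in UV space.
    uv_map:         (H, W, 2) per-pixel UV coordinates from the rasterized
                    SMPL-X proxy mesh.
    warp_field:     optional (H, W, 2) per-frame UV offsets, standing in for
                    the paper's per-frame warping fields overfit in neural
                    texture space.
    Returns an (H, W, C) descriptor image that an image-to-image network
    would then translate to RGB.
    """
    uv = uv_map if warp_field is None else uv_map + warp_field
    return bilinear_sample(neural_texture, uv[..., 0], uv[..., 1])
```

In this reading, the warp field shifts where each screen pixel samples the shared texture, so small misalignments between SMPL-X fits of different frames can be absorbed in texture space rather than corrupting the learned texture.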
Related papers
- HAHA: Highly Articulated Gaussian Human Avatars with Textured Mesh Prior [24.094129395653134]
HAHA is a novel approach for animatable human avatar generation from monocular input videos.
We demonstrate its efficiency in animating and rendering full-body human avatars controlled via the SMPL-X parametric model.
arXiv Detail & Related papers (2024-04-01T11:23:38Z)
- One2Avatar: Generative Implicit Head Avatar For Few-shot User Adaptation [31.310769289315648]
This paper introduces a novel approach to create a high-quality head avatar using only a single image or a few images per user.
We learn a generative model for 3D animatable photo-realistic head avatar from a multi-view dataset of expressions from 2407 subjects.
Our method demonstrates compelling results and outperforms existing state-of-the-art methods for few-shot avatar adaptation.
arXiv Detail & Related papers (2024-02-19T07:48:29Z)
- 3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting [32.63571465495127]
We introduce an approach that creates animatable human avatars from monocular videos using 3D Gaussian Splatting (3DGS)
We learn a non-rigid network to reconstruct animatable clothed human avatars that can be trained within 30 minutes and rendered at real-time frame rates (50+ FPS)
Experimental results show that our method achieves comparable or better performance than state-of-the-art approaches to animatable avatar creation from a monocular input.
arXiv Detail & Related papers (2023-12-14T18:54:32Z)
- AvatarStudio: High-fidelity and Animatable 3D Avatar Creation from Text [71.09533176800707]
AvatarStudio is a coarse-to-fine generative model that generates explicit textured 3D meshes for animatable human avatars.
By effectively leveraging the synergy between the articulated mesh representation and the DensePose-conditional diffusion model, AvatarStudio can create high-quality avatars.
arXiv Detail & Related papers (2023-11-29T18:59:32Z)
- DreamWaltz: Make a Scene with Complex 3D Animatable Avatars [68.49935994384047]
We present DreamWaltz, a novel framework for generating and animating complex 3D avatars given text guidance and parametric human body prior.
For animation, our method learns an animatable 3D avatar representation from abundant image priors of diffusion model conditioned on various poses.
arXiv Detail & Related papers (2023-05-21T17:59:39Z)
- AvatarMAV: Fast 3D Head Avatar Reconstruction Using Motion-Aware Neural Voxels [33.085274792188756]
We propose AvatarMAV, a fast 3D head avatar reconstruction method using Motion-Aware Neural Voxels.
AvatarMAV is the first to model both the canonical appearance and the decoupled expression motion with neural voxels for head avatars.
The proposed AvatarMAV can recover photo-realistic head avatars in just 5 minutes, which is significantly faster than state-of-the-art facial reenactment methods.
arXiv Detail & Related papers (2022-11-23T18:49:31Z)
- Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the rendering enables us to optimize SCARF directly from monocular videos.
We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z)
- StylePeople: A Generative Model of Fullbody Human Avatars [59.42166744151461]
We propose a new type of full-body human avatar that combines a parametric mesh-based body model with a neural texture.
We show that such avatars can successfully model clothing and hair, which usually poses a problem for mesh-based approaches.
We then propose a generative model for such avatars that can be trained from datasets of images and videos of people.
arXiv Detail & Related papers (2021-04-16T20:43:11Z)
- Pixel Codec Avatars [99.36561532588831]
Pixel Codec Avatars (PiCA) is a deep generative model of 3D human faces.
On a single Oculus Quest 2 mobile VR headset, five avatars are rendered in real time in the same scene.
arXiv Detail & Related papers (2021-04-09T23:17:36Z)
- Expressive Telepresence via Modular Codec Avatars [148.212743312768]
VR telepresence consists of interacting with another human in a virtual space represented by an avatar.
This paper presents Modular Codec Avatars (MCA), a method to generate hyper-realistic faces driven by the cameras in a VR headset.
MCA extends traditional Codec Avatars (CA) by replacing the holistic models with a learned modular representation.
arXiv Detail & Related papers (2020-08-26T20:16:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.