R3-Avatar: Record and Retrieve Temporal Codebook for Reconstructing Photorealistic Human Avatars
- URL: http://arxiv.org/abs/2503.12751v1
- Date: Mon, 17 Mar 2025 02:38:13 GMT
- Title: R3-Avatar: Record and Retrieve Temporal Codebook for Reconstructing Photorealistic Human Avatars
- Authors: Yifan Zhan, Wangze Xu, Qingtian Zhu, Muyao Niu, Mingze Ma, Yifei Liu, Zhihang Zhong, Xiao Sun, Yinqiang Zheng
- Abstract summary: R3-Avatar overcomes the inability of human avatars to be both animatable and rendered at high fidelity. We adopt a "record-retrieve-reconstruct" strategy that ensures high-fidelity rendering from novel views while mitigating degradation in novel poses.
- Score: 28.13595266126147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present R3-Avatar, incorporating a temporal codebook, to overcome the inability of human avatars to be both animatable and of high-fidelity rendering quality. Existing video-based reconstruction of 3D human avatars either focuses solely on rendering, lacking animation support, or learns a pose-appearance mapping for animating, which degrades under limited training poses or complex clothing. In this paper, we adopt a "record-retrieve-reconstruct" strategy that ensures high-quality rendering from novel views while mitigating degradation in novel poses. Specifically, disambiguating timestamps record temporal appearance variations in a codebook, ensuring high-fidelity novel-view rendering, while novel poses retrieve corresponding timestamps by matching the most similar training poses for augmented appearance. Our R3-Avatar outperforms cutting-edge video-based human avatar reconstruction, particularly in overcoming visual quality degradation in extreme scenarios with limited training human poses and complex clothing.
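The "record-retrieve" step described in the abstract can be pictured as a nearest-neighbour lookup over a codebook of recorded poses. The sketch below is a minimal, hypothetical illustration: the class name, pose representation, and Euclidean distance metric are assumptions for exposition, not the paper's actual codebook or matching scheme.

```python
import numpy as np

class TemporalCodebook:
    """Hypothetical sketch: record training poses by timestamp, then
    retrieve the timestamp of the most similar pose for a novel input."""

    def __init__(self):
        self.timestamps = []  # recorded frame timestamps
        self.poses = []       # pose vectors (e.g. flattened joint rotations)

    def record(self, timestamp, pose):
        """Record a training frame's pose under its timestamp."""
        self.timestamps.append(timestamp)
        self.poses.append(np.asarray(pose, dtype=float))

    def retrieve(self, novel_pose):
        """Return the timestamp whose recorded pose is closest to novel_pose."""
        novel_pose = np.asarray(novel_pose, dtype=float)
        dists = [np.linalg.norm(p - novel_pose) for p in self.poses]
        return self.timestamps[int(np.argmin(dists))]

codebook = TemporalCodebook()
codebook.record(0.0, [0.0, 0.0, 0.0])
codebook.record(1.0, [1.0, 0.0, 0.0])
print(codebook.retrieve([0.9, 0.1, 0.0]))  # nearest recorded pose is at t = 1.0
```

In the paper the retrieved timestamp indexes temporal appearance features used for reconstruction; here it only demonstrates the matching logic.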
Related papers
- NoPo-Avatar: Generalizable and Animatable Avatars from Sparse Inputs without Human Poses [94.67451270421323]
We tackle the task of recovering an animatable 3D human avatar from a single image or a sparse set of images. We show that pose-dependent reconstruction degrades results significantly if pose estimates are noisy. We introduce NoPo-Avatar, which reconstructs avatars solely from images, without any pose input.
arXiv Detail & Related papers (2025-11-20T18:59:54Z)
- AdaHuman: Animatable Detailed 3D Human Generation with Compositional Multiview Diffusion [56.12859795754579]
AdaHuman is a novel framework that generates high-fidelity animatable 3D avatars from a single in-the-wild image. AdaHuman incorporates two key innovations: a pose-conditioned 3D joint diffusion model and a compositional 3DGS refinement module.
arXiv Detail & Related papers (2025-05-30T17:59:54Z)
- FRESA: Feedforward Reconstruction of Personalized Skinned Avatars from Few Images [74.86864398919467]
We present a novel method for reconstructing personalized 3D human avatars with realistic animation from only a few images.
We learn a universal prior from over a thousand clothed humans to achieve instant feedforward generation and zero-shot generalization.
Our method generates more authentic reconstructions and animations than state-of-the-art methods, and generalizes directly to inputs from casually taken phone photos.
arXiv Detail & Related papers (2025-03-24T23:20:47Z)
- Vid2Avatar-Pro: Authentic Avatar from Videos in the Wild via Universal Prior [31.780579293685797]
We present Vid2Avatar-Pro, a method to create photorealistic and animatable 3D human avatars from monocular in-the-wild videos.
arXiv Detail & Related papers (2025-03-03T14:45:35Z)
- Human Gaussian Splatting: Real-time Rendering of Animatable Avatars [8.719797382786464]
This work addresses the problem of real-time rendering of photorealistic human body avatars learned from multi-view videos.
We propose an animatable human model based on 3D Gaussian Splatting, that has recently emerged as a very efficient alternative to neural radiance fields.
Our method achieves a 1.5 dB PSNR improvement over the state of the art on the THuman4 dataset while rendering in real time (80 fps at 512x512 resolution).
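The 1.5 dB figure above is a peak signal-to-noise ratio comparison between rendered and ground-truth images. As a point of reference, PSNR can be computed as in the generic sketch below (this is not the paper's evaluation code, and the test images are synthetic):

```python
import numpy as np

def psnr(rendered, reference, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = np.mean((rendered.astype(float) - reference.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Demo on a synthetic 512x512 "ground truth" plus small Gaussian noise.
rng = np.random.default_rng(0)
gt = rng.random((512, 512, 3))
noisy = np.clip(gt + rng.normal(0.0, 0.01, gt.shape), 0.0, 1.0)
print(round(psnr(noisy, gt), 1))
```

A 1.5 dB gain corresponds to roughly a 30% reduction in mean squared error, since PSNR is logarithmic in MSE.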
arXiv Detail & Related papers (2023-11-28T12:05:41Z)
- Animatable 3D Gaussian: Fast and High-Quality Reconstruction of Multiple Human Avatars [18.55354901614876]
We propose Animatable 3D Gaussian, which learns human avatars from input images and poses.
On both novel view synthesis and novel pose synthesis tasks, our method achieves higher reconstruction quality than InstantAvatar with less training time.
Our method can be easily extended to multi-human scenes and achieve comparable novel view synthesis results on a scene with ten people in only 25 seconds of training.
arXiv Detail & Related papers (2023-11-27T08:17:09Z)
- DreamWaltz: Make a Scene with Complex 3D Animatable Avatars [68.49935994384047]
We present DreamWaltz, a novel framework for generating and animating complex 3D avatars given text guidance and parametric human body prior.
For animation, our method learns an animatable 3D avatar representation from abundant image priors of diffusion model conditioned on various poses.
arXiv Detail & Related papers (2023-05-21T17:59:39Z)
- DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models [55.71306021041785]
We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars.
We leverage the SMPL model to provide shape and pose guidance for the generation.
We also jointly optimize the losses computed from the full body and from the zoomed-in 3D head to alleviate the common multi-face "Janus" problem.
arXiv Detail & Related papers (2023-04-03T12:11:51Z)
- AvatarGen: a 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is the first method that enables not only non-rigid human generation with diverse appearance but also full control over poses and viewpoints.
To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space.
Our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs.
arXiv Detail & Related papers (2022-08-01T01:27:02Z)
- MVP-Human Dataset for 3D Human Avatar Reconstruction from Unconstrained Frames [59.37430649840777]
We present 3D Avatar Reconstruction in the wild (ARwild), which first reconstructs the implicit skinning fields in a multi-level manner.
We contribute a large-scale dataset, MVP-Human, which contains 400 subjects, each of which has 15 scans in different poses.
Overall, benefiting from the specific network architecture and the diverse data, the trained model enables 3D avatar reconstruction from unconstrained frames.
arXiv Detail & Related papers (2022-04-24T03:57:59Z)
- ARCH++: Animation-Ready Clothed Human Reconstruction Revisited [82.83445332309238]
We present ARCH++, an image-based method to reconstruct 3D avatars with arbitrary clothing styles.
Our reconstructed avatars are animation-ready and highly realistic, in both the visible regions from input views and the unseen regions.
arXiv Detail & Related papers (2021-08-17T19:27:12Z)
- ARCH: Animatable Reconstruction of Clothed Humans [27.849315613277724]
ARCH (Animatable Reconstruction of Clothed Humans) is an end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image.
ARCH is a learned pose-aware model that produces detailed 3D rigged full-body human avatars from a single unconstrained RGB image.
arXiv Detail & Related papers (2020-04-08T14:23:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.