XHand: Real-time Expressive Hand Avatar
- URL: http://arxiv.org/abs/2407.21002v1
- Date: Tue, 30 Jul 2024 17:49:21 GMT
- Title: XHand: Real-time Expressive Hand Avatar
- Authors: Qijun Gan, Zijie Zhou, Jianke Zhu
- Abstract summary: We introduce an expressive hand avatar, named XHand, that is designed to generate hand shape, appearance, and deformations in real-time.
XHand is able to recover high-fidelity geometry and texture for hand animations across diverse poses in real-time.
- Score: 9.876680405587745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hand avatars play a pivotal role in a wide array of digital interfaces, enhancing user immersion and facilitating natural interaction within virtual environments. While previous studies have focused on photo-realistic hand rendering, little attention has been paid to reconstructing hand geometry with fine details, which is essential to rendering quality. In the realms of extended reality and gaming, on-the-fly rendering becomes imperative. To this end, we introduce an expressive hand avatar, named XHand, that is designed to comprehensively generate hand shape, appearance, and deformations in real-time. To obtain fine-grained hand meshes, we make use of three feature embedding modules to predict hand deformation displacements, albedo, and linear blending skinning weights, respectively. To achieve photo-realistic hand rendering on fine-grained meshes, our method employs a mesh-based neural renderer by leveraging mesh topological consistency and latent codes from embedding modules. During training, a part-aware Laplace smoothing strategy is proposed that applies distinct levels of regularization to preserve necessary details and eliminate undesired artifacts. The experimental evaluations on the InterHand2.6M and DeepHandMesh datasets demonstrate the efficacy of XHand, which is able to recover high-fidelity geometry and texture for hand animations across diverse poses in real-time. To reproduce our results, we will make the full implementation publicly available at https://github.com/agnJason/XHand.
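The part-aware Laplace smoothing strategy is the most concrete training detail in the abstract. Below is a minimal PyTorch sketch of the general idea, assuming a uniform mesh Laplacian and a hypothetical per-vertex `part_weights` map; the names and shapes are illustrative assumptions, not taken from the XHand implementation.

```python
import torch

def part_aware_laplacian_loss(verts, laplacian, part_weights):
    """Sketch of a part-aware Laplacian smoothing regularizer.

    verts:        (V, 3) deformed mesh vertex positions
    laplacian:    (V, V) sparse uniform Laplacian built once from the
                  template topology (neighbor average minus the vertex)
    part_weights: (V,) hypothetical per-vertex regularization strength,
                  e.g. high on smooth regions like the palm, low where
                  fine detail (nails, knuckle creases) must survive
    """
    # Laplacian coordinates measure how far each vertex deviates from
    # the centroid of its one-ring neighborhood.
    delta = torch.sparse.mm(laplacian, verts)    # (V, 3)
    per_vertex = delta.pow(2).sum(dim=-1)        # (V,)
    # Distinct levels of regularization per hand part: smooth harder
    # where artifacts appear, gentler where detail is wanted.
    return (part_weights * per_vertex).mean()
```

Since the abstract notes that XHand leverages mesh topological consistency, such a Laplacian could be assembled once from the template topology and cached; how the actual regularization levels are assigned per part is specific to the paper.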
Related papers
- Fine-Grained Multi-View Hand Reconstruction Using Inverse Rendering [11.228453237603834]
We present a novel fine-grained multi-view hand mesh reconstruction method that leverages inverse rendering to restore hand poses and intricate details.
We also introduce a novel Hand Albedo and Mesh (HAM) optimization module to refine both the hand mesh and textures.
Our proposed approach outperforms state-of-the-art methods in both reconstruction accuracy and rendering quality.
arXiv Detail & Related papers (2024-07-08T07:28:24Z)
- DICE: End-to-end Deformation Capture of Hand-Face Interactions from a Single Image [98.29284902879652]
We present DICE, the first end-to-end method for Deformation-aware hand-face Interaction reCovEry from a single image.
It features disentangling the regression of local deformation fields and global mesh locations into two network branches.
It achieves state-of-the-art performance on a standard benchmark and in-the-wild data in terms of accuracy and physical plausibility.
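As a rough illustration of that two-branch disentanglement, the hypothetical head below regresses a local deformation field and global vertex locations from a shared feature; the MANO-like vertex count and all names are assumptions, not DICE's actual architecture.

```python
import torch
import torch.nn as nn

class TwoBranchHead(nn.Module):
    """Hypothetical head splitting local deformation from global shape."""

    def __init__(self, feat_dim: int = 256, num_verts: int = 778):
        super().__init__()
        # Branch 1: per-vertex local deformation field.
        self.deform_branch = nn.Linear(feat_dim, num_verts * 3)
        # Branch 2: global (undeformed) mesh vertex locations.
        self.global_branch = nn.Linear(feat_dim, num_verts * 3)

    def forward(self, feat: torch.Tensor):
        b = feat.shape[0]                                  # feat: (B, feat_dim)
        deform = self.deform_branch(feat).view(b, -1, 3)   # (B, V, 3)
        verts = self.global_branch(feat).view(b, -1, 3)    # (B, V, 3)
        return verts + deform, deform                      # deformed mesh, field
```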
arXiv Detail & Related papers (2024-06-26T00:08:29Z)
- 3D Points Splatting for Real-Time Dynamic Hand Reconstruction [13.392046706568275]
3D Points Splatting Hand Reconstruction (3D-PSHR) is a real-time and photo-realistic hand reconstruction approach.
We propose a self-adaptive canonical points up-sampling strategy to achieve high-resolution hand geometry representation.
To model texture, we disentangle the appearance color into the intrinsic albedo and pose-aware shading.
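A toy sketch of that albedo/shading split is given below, assuming simple Lambertian shading; the paper's pose-aware shading is likely learned, not this fixed formula, and the light direction is a hypothetical input.

```python
import torch

def compose_appearance(albedo, normals, light_dir):
    """Toy intrinsic decomposition: color = albedo * shading.

    albedo:    (V, 3) pose-independent intrinsic color in [0, 1]
    normals:   (V, 3) unit vertex normals, which change with pose
    light_dir: (3,)   unit light direction (assumed, for illustration)
    """
    # Pose-aware shading: Lambertian term, clamped so back-facing
    # vertices receive no light.
    shading = (normals @ light_dir).clamp(min=0.0).unsqueeze(-1)  # (V, 1)
    return albedo * shading
```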
arXiv Detail & Related papers (2023-12-21T11:50:49Z)
- HandRefiner: Refining Malformed Hands in Generated Images by Diffusion-based Conditional Inpainting [72.95232302438207]
Diffusion models have achieved remarkable success in generating realistic images.
However, they struggle to generate accurate human hands, often producing incorrect finger counts or irregular shapes.
This paper introduces a lightweight post-processing solution called HandRefiner.
arXiv Detail & Related papers (2023-11-29T08:52:08Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- HandNeRF: Neural Radiance Fields for Animatable Interacting Hands [122.32855646927013]
We propose a novel framework to reconstruct accurate appearance and geometry with neural radiance fields (NeRF) for interacting hands.
We conduct extensive experiments to verify the merits of our proposed HandNeRF and report a series of state-of-the-art results.
arXiv Detail & Related papers (2023-03-24T06:19:19Z)
- HARP: Personalized Hand Reconstruction from a Monocular RGB Video [37.384221764796095]
We present HARP, a personalized hand avatar creation approach that takes a short monocular RGB video of a human hand as input.
In contrast to the major trend of neural implicit representations, HARP models a hand with a mesh-based parametric hand model.
HARP can be directly used in AR/VR applications with real-time rendering capability.
arXiv Detail & Related papers (2022-12-19T15:21:55Z)
- Hand Avatar: Free-Pose Hand Animation and Rendering from Monocular Video [23.148367696192107]
We present HandAvatar, a novel representation for hand animation and rendering.
HandAvatar can generate smoothly compositional geometry and self-occlusion-aware texture.
arXiv Detail & Related papers (2022-11-23T08:50:03Z)
- Towards Accurate Alignment in Real-time 3D Hand-Mesh Reconstruction [57.3636347704271]
3D hand-mesh reconstruction from RGB images facilitates many applications, including augmented reality (AR).
This paper presents a novel pipeline by decoupling the hand-mesh reconstruction task into three stages.
We can promote high-quality finger-level mesh-image alignment and drive the models together to deliver real-time predictions.
arXiv Detail & Related papers (2021-09-03T20:42:01Z)
- DeepHandMesh: A Weakly-supervised Deep Encoder-Decoder Framework for High-fidelity Hand Mesh Modeling [75.69585456580505]
DeepHandMesh is a weakly-supervised deep encoder-decoder framework for high-fidelity hand mesh modeling.
We show that our system can also be applied successfully to the 3D hand mesh estimation from general images.
arXiv Detail & Related papers (2020-08-19T00:59:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.