PHRIT: Parametric Hand Representation with Implicit Template
- URL: http://arxiv.org/abs/2309.14916v1
- Date: Tue, 26 Sep 2023 13:22:33 GMT
- Title: PHRIT: Parametric Hand Representation with Implicit Template
- Authors: Zhisheng Huang, Yujin Chen, Di Kang, Jinlu Zhang, Zhigang Tu
- Abstract summary: PHRIT is a novel approach for parametric hand mesh modeling with an implicit template.
Our method represents deformable hand shapes using signed distance fields (SDFs) with part-based shape priors.
We evaluate PHRIT on multiple downstream tasks, including skeleton-driven hand reconstruction, shapes from point clouds, and single-view 3D reconstruction.
- Score: 24.699079936958892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose PHRIT, a novel approach for parametric hand mesh modeling with an
implicit template that combines the advantages of both parametric meshes and
implicit representations. Our method represents deformable hand shapes using
signed distance fields (SDFs) with part-based shape priors, utilizing a
deformation field to execute the deformation. The model offers efficient
high-fidelity hand reconstruction by deforming the canonical template at
infinite resolution. Additionally, it is fully differentiable and can be easily
used in hand modeling since it can be driven by the skeleton and shape latent
codes. We evaluate PHRIT on multiple downstream tasks, including
skeleton-driven hand reconstruction, shapes from point clouds, and single-view
3D reconstruction, demonstrating that our approach achieves realistic and
immersive hand modeling with state-of-the-art performance.
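To make the idea concrete, below is a minimal, hypothetical Python sketch (not the authors' code) of how a part-based SDF hand representation driven by a skeleton might be queried. Each part contributes a canonical SDF (here a placeholder capsule primitive), a per-part rigid transform derived from the skeleton plays the role of the deformation field, and the per-part distances are fused with a min. All function names, primitives, and parameters are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch only -- not the PHRIT implementation. Assumes the hand is
# split into rigid parts, each with a canonical part SDF (a capsule stand-in for
# a learned, latent-code-conditioned SDF) and a per-part rigid transform coming
# from the skeleton pose. A query point is warped into each part's canonical
# frame, the part SDFs are evaluated there, and the results are fused with a min.

import numpy as np

def capsule_sdf(p, half_length=0.02, radius=0.008):
    """Signed distance to a z-aligned capsule (stand-in for a learned part shape prior)."""
    z = np.clip(p[2], -half_length, half_length)
    return np.linalg.norm(p - np.array([0.0, 0.0, z])) - radius

def hand_sdf(query, part_rotations, part_translations):
    """Composite hand SDF: warp the query into each part's canonical frame, take the min.

    part_rotations:    list of 3x3 rotation matrices (one per bone/part)
    part_translations: list of 3-vectors (one per bone/part)
    In the real model these would be driven by the skeleton and shape latent codes.
    """
    dists = []
    for R, t in zip(part_rotations, part_translations):
        canonical_p = R.T @ (query - t)          # inverse rigid warp ("deformation field")
        dists.append(capsule_sdf(canonical_p))   # per-part canonical shape prior
    return min(dists)                            # union of parts

if __name__ == "__main__":
    # Two toy "phalanx" parts; a real hand model would use all bones.
    rots = [np.eye(3), np.eye(3)]
    trans = [np.zeros(3), np.array([0.0, 0.0, 0.05])]
    print(hand_sdf(np.array([0.0, 0.0, 0.02]), rots, trans))  # negative: inside the hand
    print(hand_sdf(np.array([0.1, 0.0, 0.0]), rots, trans))   # positive: outside
```

Because such an SDF can be evaluated at arbitrary query points, a mesh can be extracted at any desired resolution (e.g., via marching cubes), which is the sense in which the canonical template is deformed "at infinite resolution".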
Related papers
- 3D Points Splatting for Real-Time Dynamic Hand Reconstruction [13.392046706568275]
3D Points Splatting Hand Reconstruction (3D-PSHR) is a real-time and photo-realistic hand reconstruction approach.
We propose a self-adaptive canonical points up-sampling strategy to achieve a high-resolution hand geometry representation.
To model texture, we disentangle the appearance color into the intrinsic albedo and pose-aware shading.
arXiv Detail & Related papers (2023-12-21T11:50:49Z)
- Overcoming the Trade-off Between Accuracy and Plausibility in 3D Hand Shape Reconstruction [62.96478903239799]
Direct mesh fitting for 3D hand shape reconstruction is highly accurate.
However, the reconstructed meshes are prone to artifacts and do not appear as plausible hand shapes.
We introduce a novel weakly-supervised hand shape estimation framework that integrates non-parametric mesh fitting with MANO model in an end-to-end fashion.
arXiv Detail & Related papers (2023-05-01T03:38:01Z)
- AlignSDF: Pose-Aligned Signed Distance Fields for Hand-Object Reconstruction [76.12874759788298]
We propose a joint learning framework that disentangles the pose and the shape.
We show that such aligned SDFs better focus on reconstructing shape details and improve reconstruction accuracy both for hands and objects.
arXiv Detail & Related papers (2022-07-26T13:58:59Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- SPAMs: Structured Implicit Parametric Models [30.19414242608965]
We learn Structured-implicit PArametric Models (SPAMs) as a deformable object representation that structurally decomposes non-rigid object motion into part-based disentangled representations of shape and pose.
Experiments demonstrate that our part-aware shape and pose understanding leads to state-of-the-art performance in reconstruction and tracking of depth sequences of complex deforming object motion.
arXiv Detail & Related papers (2022-01-20T12:33:46Z)
- Deep Implicit Templates for 3D Shape Representation [70.9789507686618]
We propose a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations.
Our key idea is to formulate DIFs as conditional deformations of a template implicit function.
We show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision.
arXiv Detail & Related papers (2020-11-30T06:01:49Z)
- PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
arXiv Detail & Related papers (2020-07-08T02:26:19Z)
- Learning Generative Models of Shape Handles [43.41382075567803]
We present a generative model to synthesize 3D shapes as sets of handles.
Our model can generate handle sets with varying cardinality and different types of handles.
We show that the resulting shape representations are intuitive and achieve higher quality than the previous state-of-the-art.
arXiv Detail & Related papers (2020-04-06T22:35:55Z)
- Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering [53.16864661460889]
Recent regression-based methods successfully estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)