3D Points Splatting for Real-Time Dynamic Hand Reconstruction
- URL: http://arxiv.org/abs/2312.13770v1
- Date: Thu, 21 Dec 2023 11:50:49 GMT
- Title: 3D Points Splatting for Real-Time Dynamic Hand Reconstruction
- Authors: Zheheng Jiang, Hossein Rahmani, Sue Black, Bryan M. Williams
- Abstract summary: 3D Points Splatting Hand Reconstruction (3D-PSHR) is a real-time and photo-realistic hand reconstruction approach.
We propose a self-adaptive canonical points upsampling strategy to achieve high-resolution hand geometry representation.
To model texture, we disentangle the appearance color into the intrinsic albedo and pose-aware shading.
- Score: 13.392046706568275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present 3D Points Splatting Hand Reconstruction (3D-PSHR), a real-time and
photo-realistic hand reconstruction approach. We propose a self-adaptive
canonical points upsampling strategy to achieve high-resolution hand geometry
representation. This is followed by a self-adaptive deformation that deforms
the hand from the canonical space to the target pose, adapting to the dynamic
changes of the canonical points, which, in contrast to the common practice of
subdividing the MANO model, offers greater flexibility and results in improved
geometry fitting. To model texture, we disentangle the appearance color into
the intrinsic albedo and pose-aware shading, which are learned through a
Context-Attention module. Moreover, our approach allows the geometric and the
appearance models to be trained simultaneously in an end-to-end manner. We
demonstrate that our method is capable of producing animatable, photorealistic
and relightable hand reconstructions using multiple datasets, including
monocular videos captured with handheld smartphones and large-scale multi-view
videos featuring various hand poses. We also demonstrate that our approach
achieves real-time rendering speeds while simultaneously maintaining superior
performance compared to existing state-of-the-art methods.
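The abstract's appearance model recombines an intrinsic, pose-independent albedo with a pose-aware shading term per point. A minimal sketch of that multiplicative disentanglement, assuming per-point RGB albedo and scalar shading (names and shapes are illustrative, not the paper's code):

```python
import numpy as np

def compose_color(albedo, shading):
    """Recombine disentangled appearance: per-point color = albedo * shading.

    albedo:  (N, 3) intrinsic RGB in [0, 1], independent of pose
    shading: (N, 1) pose-aware scalar shading factor
    """
    return np.clip(albedo * shading, 0.0, 1.0)

# One point with a skin-like intrinsic color, dimmed by pose-dependent shading.
albedo = np.array([[0.8, 0.6, 0.5]])
shading = np.array([[0.5]])
print(compose_color(albedo, shading))  # [[0.4  0.3  0.25]]
```

Because the two factors are learned separately (via the Context-Attention module in the paper), the albedo can be kept fixed while shading varies with pose, which is what makes the reconstruction relightable.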
Related papers
- EasyHOI: Unleashing the Power of Large Models for Reconstructing Hand-Object Interactions in the Wild [79.71523320368388]
Our work aims to reconstruct hand-object interactions from a single-view image.
We first design a novel pipeline to estimate the underlying hand pose and object shape.
With the initial reconstruction, we employ a prior-guided optimization scheme.
arXiv Detail & Related papers (2024-11-21T16:33:35Z) - GGRt: Towards Pose-free Generalizable 3D Gaussian Splatting in Real-time [112.32349668385635]
GGRt is a novel approach to generalizable novel view synthesis that alleviates the need for real camera poses.
As the first pose-free generalizable 3D-GS framework, GGRt achieves inference at $\ge$ 5 FPS and real-time rendering at $\ge$ 100 FPS.
arXiv Detail & Related papers (2024-03-15T09:47:35Z) - Disjoint Pose and Shape for 3D Face Reconstruction [4.096453902709292]
We propose an end-to-end pipeline that disjointly solves for pose and shape to make the optimization stable and accurate.
The proposed method achieves end-to-end topological consistency, enables an iterative face pose refinement procedure, and shows remarkable improvement in both quantitative and qualitative results.
arXiv Detail & Related papers (2023-08-26T15:18:32Z) - gSDF: Geometry-Driven Signed Distance Functions for 3D Hand-Object Reconstruction [94.46581592405066]
We exploit the hand structure and use it as guidance for SDF-based shape reconstruction.
We predict kinematic chains of pose transformations and align SDFs with highly-articulated hand poses.
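Predicting "kinematic chains of pose transformations" means each joint's world pose is the composition of its ancestors' transforms. A hedged forward-kinematics sketch of that structure (the chain layout and function names are illustrative, not gSDF's implementation):

```python
import numpy as np

def forward_kinematics(local_transforms, parents):
    """Compose a kinematic chain: world[i] = world[parent[i]] @ local[i].

    local_transforms: list of (4, 4) homogeneous matrices, one per joint
    parents:          parents[i] is the index of joint i's parent (-1 = root)
    """
    world = [None] * len(local_transforms)
    for i, T in enumerate(local_transforms):
        world[i] = T if parents[i] == -1 else world[parents[i]] @ T
    return world

def translation(t):
    """Build a 4x4 homogeneous translation matrix."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

# Two-joint chain: root at the origin, child offset by 1 along x.
world = forward_kinematics([translation([0, 0, 0]),
                            translation([1, 0, 0])], parents=[-1, 0])
print(world[1][:3, 3])  # [1. 0. 0.]
```

Aligning an SDF with an articulated pose then amounts to evaluating query points in the local frame of each such world transform.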
arXiv Detail & Related papers (2023-04-24T10:05:48Z) - HandNeRF: Neural Radiance Fields for Animatable Interacting Hands [122.32855646927013]
We propose a novel framework to reconstruct accurate appearance and geometry with neural radiance fields (NeRF) for interacting hands.
We conduct extensive experiments to verify the merits of our proposed HandNeRF and report a series of state-of-the-art results.
arXiv Detail & Related papers (2023-03-24T06:19:19Z) - PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
arXiv Detail & Related papers (2020-07-08T02:26:19Z) - Leveraging Photometric Consistency over Time for Sparsely Supervised Hand-Object Reconstruction [118.21363599332493]
We present a method to leverage photometric consistency across time when annotations are only available for a sparse subset of frames in a video.
Our model is trained end-to-end on color images to jointly reconstruct hands and objects in 3D by inferring their poses.
We achieve state-of-the-art results on 3D hand-object reconstruction benchmarks and demonstrate that our approach allows us to improve the pose estimation accuracy.
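A photometric-consistency loss of the kind this entry describes compares a frame against a neighboring frame warped into its view; agreement in pixel color supervises pose without per-frame annotations. A minimal sketch assuming the warp is already given (the loss form and names are illustrative, not the paper's code):

```python
import numpy as np

def photometric_loss(frame_a, warped_frame_b):
    """Mean absolute color difference between a frame and a neighboring
    frame warped into its viewpoint (L1 photometric consistency)."""
    return float(np.mean(np.abs(frame_a - warped_frame_b)))

# Toy 4x4 RGB frames: a uniform 0.25 intensity gap gives a loss of 0.25.
a = np.full((4, 4, 3), 0.5)
b = np.full((4, 4, 3), 0.75)
print(photometric_loss(a, b))  # 0.25
```

Minimizing this loss over unannotated frames is what lets sparse keyframe labels propagate across the rest of the video.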
arXiv Detail & Related papers (2020-04-28T12:03:14Z) - Learning Generative Models of Shape Handles [43.41382075567803]
We present a generative model to synthesize 3D shapes as sets of handles.
Our model can generate handle sets with varying cardinality and different types of handles.
We show that the resulting shape representations are intuitive and achieve superior quality than previous state-of-the-art.
arXiv Detail & Related papers (2020-04-06T22:35:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.