Accurate 3D Hand Pose Estimation for Whole-Body 3D Human Mesh Estimation
- URL: http://arxiv.org/abs/2011.11534v4
- Date: Tue, 19 Apr 2022 05:59:10 GMT
- Title: Accurate 3D Hand Pose Estimation for Whole-Body 3D Human Mesh Estimation
- Authors: Gyeongsik Moon and Hongsuk Choi and Kyoung Mu Lee
- Abstract summary: Whole-body 3D human mesh estimation aims to reconstruct the 3D human body, hands, and face simultaneously.
We present Hand4Whole, which has two strong points over previous works.
Our Hand4Whole is trained in an end-to-end manner and produces much better 3D hand results than previous whole-body 3D human mesh estimation methods.
- Score: 70.23652933572647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Whole-body 3D human mesh estimation aims to reconstruct the 3D human body,
hands, and face simultaneously. Although several methods have been proposed,
accurate prediction of 3D hands, which consist of the 3D wrist and fingers,
remains challenging for two reasons. First, the human kinematic chain has
not been carefully considered when predicting the 3D wrists. Second, previous
works utilize body features for the 3D fingers, where the body feature barely
contains finger information. To resolve the limitations, we present Hand4Whole,
which has two strong points over previous works. First, we design Pose2Pose, a
module that utilizes joint features for 3D joint rotations. Using Pose2Pose,
Hand4Whole utilizes hand MCP joint features to predict 3D wrists as MCP joints
largely contribute to 3D wrist rotations in the human kinematic chain. Second,
Hand4Whole discards the body feature when predicting 3D finger rotations. Our
Hand4Whole is trained in an end-to-end manner and produces much better 3D hand
results than previous whole-body 3D human mesh estimation methods. The code is
available at https://github.com/mks0601/Hand4Whole_RELEASE.
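The intuition that MCP joint positions largely determine the 3D wrist rotation can be illustrated with a small geometric sketch. This is not the paper's Pose2Pose module (which regresses rotations from learned joint features); it is a hypothetical closed-form analogue: given template MCP offsets in the wrist's local frame and their observed wrist-centered 3D positions, the wrist rotation that best aligns them is recovered with the Kabsch algorithm. All names are illustrative.

```python
import numpy as np

def wrist_rotation_from_mcp(template_offsets, observed_offsets):
    """Recover the wrist rotation aligning template MCP offsets
    (wrist-local frame) to observed wrist-centered MCP offsets,
    via the Kabsch algorithm (SVD of the covariance matrix)."""
    H = template_offsets.T @ observed_offsets        # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R                                         # observed_i ~= R @ template_i

# Toy example: four MCP joints, wrist rotated 90 degrees about the z-axis.
template = np.array([[1.0, 0.0, 0.0],
                     [0.9, 0.2, 0.0],
                     [0.8, 0.4, 0.1],
                     [0.7, 0.6, 0.1]])
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
observed = template @ Rz.T
R = wrist_rotation_from_mcp(template, observed)
print(np.allclose(R, Rz, atol=1e-8))   # the rotation is recovered exactly
```

With noisy MCP detections the same SVD step returns the least-squares-optimal rotation, which is why the MCP joints are such a strong cue for the wrist in the kinematic chain.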
Related papers
- Cloth2Body: Generating 3D Human Body Mesh from 2D Clothing [54.29207348918216]
Cloth2Body needs to address new and emerging challenges raised by the partial observation of the input and the high diversity of the output.
We propose an end-to-end framework that can accurately estimate 3D body mesh parameterized by pose and shape from a 2D clothing image.
As shown by experimental results, the proposed framework achieves state-of-the-art performance and can effectively recover natural and diverse 3D body meshes from 2D images.
arXiv Detail & Related papers (2023-09-28T06:18:38Z) - H3WB: Human3.6M 3D WholeBody Dataset and Benchmark [15.472137969924457]
We present a benchmark for 3D human whole-body pose estimation.
Currently, the lack of a fully annotated and accurate 3D whole-body dataset results in deep networks being trained separately on specific body parts.
We introduce the Human3.6M 3D WholeBody dataset, which provides whole-body annotations for the Human3.6M dataset.
arXiv Detail & Related papers (2022-11-28T19:00:02Z) - Tracking People by Predicting 3D Appearance, Location & Pose [78.97070307547283]
We first lift people to 3D from a single frame in a robust way.
As we track a person, we collect 3D observations over time in a tracklet representation.
We use these models to predict the future state of the tracklet.
arXiv Detail & Related papers (2021-12-08T18:57:15Z) - Learning Temporal 3D Human Pose Estimation with Pseudo-Labels [3.0954251281114513]
We present a simple, yet effective, approach for self-supervised 3D human pose estimation.
We rely on triangulating 2D body pose estimates from a multi-view camera system.
Our method achieves state-of-the-art performance in the Human3.6M and MPI-INF-3DHP benchmarks.
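Triangulating 2D detections from calibrated views into 3D pseudo-labels is a standard step; a minimal direct linear transform (DLT) sketch for a single joint (illustrative, not the paper's implementation) looks like this:

```python
import numpy as np

def triangulate_dlt(projections, points_2d):
    """Triangulate one 3D point from its 2D observations in several
    calibrated views via the direct linear transform (DLT).
    projections: list of 3x4 camera matrices; points_2d: list of (u, v)."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])    # each view contributes two
        rows.append(v * P[2] - P[1])    # linear constraints on X
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                          # null-space (homogeneous) solution
    return X[:3] / X[3]

# Toy example: two cameras with a baseline along x, observing (1, 2, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # translated camera
X_true = np.array([1.0, 2.0, 5.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate_dlt([P1, P2], [project(P1, X_true), project(P2, X_true)])
print(np.allclose(X_est, X_true))   # the 3D point is recovered
```

Applying this per joint across frames yields the 3D pseudo-labels that self-supervised pipelines of this kind train on.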
arXiv Detail & Related papers (2021-10-14T17:40:45Z) - A Skeleton-Driven Neural Occupancy Representation for Articulated Hands [49.956892429789775]
Hand ArticuLated Occupancy (HALO) is a novel representation of articulated hands that bridges the advantages of 3D keypoints and neural implicit surfaces.
We demonstrate the applicability of HALO to the task of conditional generation of hands that grasp 3D objects.
arXiv Detail & Related papers (2021-09-23T14:35:19Z) - We are More than Our Joints: Predicting how 3D Bodies Move [63.34072043909123]
We train a novel variational autoencoder that generates motions from latent frequencies.
Experiments show that our method produces state-of-the-art results and realistic 3D body animations.
arXiv Detail & Related papers (2020-12-01T16:41:04Z) - Body2Hands: Learning to Infer 3D Hands from Conversational Gesture Body Dynamics [87.17505994436308]
We build upon the insight that body motion and hand gestures are strongly correlated in non-verbal communication settings.
We formulate the learning of this prior as a prediction task of 3D hand shape over time given body motion input alone.
Our hand prediction model produces convincing 3D hand gestures given only the 3D motion of the speaker's arms as input.
arXiv Detail & Related papers (2020-07-23T22:58:15Z) - AnimePose: Multi-person 3D pose estimation and animation [9.323689681059504]
3D animation of humans in action is challenging, as it traditionally requires a large capture setup with motion trackers placed across the person's body to record the movement of every limb.
This is time-consuming, and wearing exoskeleton body suits fitted with motion sensors can be uncomfortable.
We present a solution to generate 3D animation of multiple persons from a 2D video using deep learning.
arXiv Detail & Related papers (2020-02-06T11:11:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.