A Dataset of Relighted 3D Interacting Hands
- URL: http://arxiv.org/abs/2310.17768v1
- Date: Thu, 26 Oct 2023 20:26:50 GMT
- Title: A Dataset of Relighted 3D Interacting Hands
- Authors: Gyeongsik Moon, Shunsuke Saito, Weipeng Xu, Rohan Joshi, Julia
Buffalini, Harley Bellan, Nicholas Rosen, Jesse Richardson, Mallorie Mize,
Philippe de Bree, Tomas Simon, Bo Peng, Shubham Garg, Kevyn McPhail, Takaaki
Shiratori
- Abstract summary: Re:InterHand is a dataset of relighted 3D interacting hands.
We employ a state-of-the-art hand relighting network with our accurately tracked two-hand 3D poses.
- Score: 37.31717123107306
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Two-hand interaction is one of the most challenging signals to
analyze due to the self-similarity, complicated articulations, and occlusions
of hands. Although several datasets have been proposed for two-hand interaction
analysis, none of them achieves both 1) diverse and realistic image appearances
and 2) diverse and large-scale ground-truth (GT) 3D poses at the same time. In
this work, we propose Re:InterHand, a dataset of relighted 3D interacting hands
that achieves both goals. To this end, we employ a state-of-the-art hand
relighting network with our accurately tracked two-hand 3D poses. We compare
Re:InterHand with existing 3D interacting hand datasets and show its benefits.
Re:InterHand is available at https://mks0601.github.io/ReInterHand/.
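As a rough illustration of how tracked two-hand 3D poses such as these are typically consumed, the sketch below reconstructs left- and right-hand meshes from MANO parameters with the smplx package. The file name and parameter keys are hypothetical placeholders, not the actual Re:InterHand release format.

```python
import numpy as np
import torch
import smplx  # MANO layer from the smplx package (pip install smplx)

# Hypothetical file layout: one .npz per frame with MANO pose/shape parameters
# for the right and left hand; adjust keys to the actual release.
frame = np.load("reinterhand_frame_000000.npz")

hands = {}
for side, is_right in (("right", True), ("left", False)):
    mano = smplx.create(
        model_path="path/to/mano_models",  # folder holding the MANO model files
        model_type="mano",
        is_rhand=is_right,
        use_pca=False,                     # 45-dim axis-angle finger pose
        batch_size=1,
    )
    out = mano(
        global_orient=torch.from_numpy(frame[f"{side}_root_pose"]).float().view(1, 3),
        hand_pose=torch.from_numpy(frame[f"{side}_hand_pose"]).float().view(1, 45),
        betas=torch.from_numpy(frame[f"{side}_shape"]).float().view(1, 10),
        transl=torch.from_numpy(frame[f"{side}_trans"]).float().view(1, 3),
    )
    hands[side] = out.vertices[0]          # (778, 3) hand mesh vertices

print({k: v.shape for k, v in hands.items()})
```

Running this per frame yields the per-hand geometry that a relighting network can then render under new illumination.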
Related papers
- ARCTIC: A Dataset for Dexterous Bimanual Hand-Object Manipulation [68.80339307258835]
ARCTIC is a dataset of two hands that dexterously manipulate objects.
It contains 2.1M video frames paired with accurate 3D hand meshes and detailed, dynamic contact information.
arXiv Detail & Related papers (2022-04-28T17:23:59Z)
- RGB2Hands: Real-Time Tracking of 3D Hand Interactions from Monocular RGB Video [76.86512780916827]
We present the first real-time method for motion capture of skeletal pose and 3D surface geometry of hands from a single RGB camera.
In order to address the inherent depth ambiguities in RGB data, we propose a novel multi-task CNN.
We experimentally verify the individual components of our RGB two-hand tracking and 3D reconstruction pipeline.
arXiv Detail & Related papers (2021-06-22T12:53:56Z)
- HandsFormer: Keypoint Transformer for Monocular 3D Pose Estimation of Hands and Object in Interaction [33.661745138578596]
We propose a robust and accurate method for estimating the 3D poses of two hands in close interaction from a single color image.
Our method starts by extracting a set of potential 2D locations for the joints of both hands as extrema of a heatmap.
We use appearance and spatial encodings of these locations as input to a transformer, and leverage the attention mechanisms to sort out the correct configuration of the joints.
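This is not the authors' HandsFormer code, but a minimal PyTorch sketch of the two stages the summary describes: picking heatmap extrema as candidate 2D joint locations, then letting a transformer encoder attend over appearance-plus-spatial token encodings of those candidates. Feature dimensions, layer counts, and the 3D regression head are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def heatmap_extrema(heatmaps, threshold=0.1):
    """Pick local maxima of per-joint heatmaps as candidate 2D joint locations.

    heatmaps: (J, H, W) tensor of joint heatmaps for both hands.
    Returns (N, 3) rows of (joint_id, y, x) for peaks above `threshold`.
    """
    pooled = F.max_pool2d(heatmaps.unsqueeze(0), 3, stride=1, padding=1).squeeze(0)
    peaks = (heatmaps == pooled) & (heatmaps > threshold)
    joint_id, ys, xs = peaks.nonzero(as_tuple=True)
    return torch.stack([joint_id, ys, xs], dim=1)

class KeypointTransformer(torch.nn.Module):
    """Toy stand-in: appearance and spatial encodings of candidate keypoints
    are combined into tokens and passed through a transformer encoder, whose
    attention layers sort out which candidates form a consistent configuration."""
    def __init__(self, feat_dim=256, d_model=256, n_joints=42):
        super().__init__()
        self.appearance = torch.nn.Linear(feat_dim, d_model)
        self.spatial = torch.nn.Linear(2, d_model)            # (x, y) encoding
        self.joint_emb = torch.nn.Embedding(n_joints, d_model)
        layer = torch.nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = torch.nn.TransformerEncoder(layer, num_layers=4)
        self.to_3d = torch.nn.Linear(d_model, 3)              # per-token 3D joint estimate

    def forward(self, feats, coords, joint_ids):
        # feats: (1, N, feat_dim) image features sampled at the candidate locations
        # coords: (1, N, 2) normalised (x, y); joint_ids: (1, N) heatmap index
        tokens = self.appearance(feats) + self.spatial(coords) + self.joint_emb(joint_ids)
        return self.to_3d(self.encoder(tokens))               # (1, N, 3)
```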
arXiv Detail & Related papers (2021-04-29T20:19:20Z)
- H2O: Two Hands Manipulating Objects for First Person Interaction Recognition [70.46638409156772]
We present a comprehensive framework for egocentric interaction recognition using markerless 3D annotations of two hands manipulating objects.
Our method produces annotations of the 3D pose of two hands and the 6D pose of the manipulated objects, along with their interaction labels for each frame.
Our dataset, called H2O (2 Hands and Objects), provides synchronized multi-view RGB-D images, interaction labels, object classes, ground-truth 3D poses for left & right hands, 6D object poses, ground-truth camera poses, object meshes and scene point clouds.
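A hypothetical per-frame record, sketched only to make the listed annotation types concrete; the field names and shapes are illustrative and do not reflect the official H2O loader.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class H2OFrame:
    """Illustrative container mirroring the per-frame annotations H2O provides."""
    rgb: np.ndarray                # (V, H, W, 3) synchronized multi-view color images
    depth: np.ndarray              # (V, H, W) aligned depth maps
    left_hand_joints: np.ndarray   # (21, 3) ground-truth 3D joints, left hand
    right_hand_joints: np.ndarray  # (21, 3) ground-truth 3D joints, right hand
    object_class: int              # category id of the manipulated object
    object_pose: np.ndarray        # (4, 4) 6D object pose as a rigid transform
    camera_poses: np.ndarray       # (V, 4, 4) ground-truth camera extrinsics
    interaction_label: int         # interaction/verb id for the frame
```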
arXiv Detail & Related papers (2021-04-22T17:10:42Z)
- MM-Hand: 3D-Aware Multi-Modal Guided Hand Generative Network for 3D Hand Pose Synthesis [81.40640219844197]
Estimating 3D hand pose from a monocular RGB image is important but challenging.
One solution is to train on large-scale RGB hand images with accurate 3D hand keypoint annotations.
We have developed a learning-based approach to synthesize realistic, diverse, and 3D pose-preserving hand images.
arXiv Detail & Related papers (2020-10-02T18:27:34Z) - InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose
Estimation from a Single RGB Image [71.17227941339935]
We propose a large-scale dataset, InterHand2.6M, and a network, InterNet, for 3D interacting hand pose estimation from a single RGB image.
In our experiments, we demonstrate substantial gains in 3D interacting hand pose estimation accuracy when leveraging the interacting hand data in InterHand2.6M.
We also report the accuracy of InterNet on InterHand2.6M, which serves as a strong baseline for this new dataset.
arXiv Detail & Related papers (2020-08-21T05:15:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.