Temporal-Aware Self-Supervised Learning for 3D Hand Pose and Mesh
Estimation in Videos
- URL: http://arxiv.org/abs/2012.03205v1
- Date: Sun, 6 Dec 2020 07:54:18 GMT
- Title: Temporal-Aware Self-Supervised Learning for 3D Hand Pose and Mesh
Estimation in Videos
- Authors: Liangjian Chen, Shih-Yao Lin, Yusheng Xie, Yen-Yu Lin, and Xiaohui Xie
- Abstract summary: Estimating 3D hand pose directly from RGB images is challenging but has gained steady progress recently by training deep models with annotated 3D poses.
We propose a new framework for training 3D pose estimation models from RGB images without using explicit 3D annotations.
- Score: 32.12879364117658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating 3D hand pose directly from RGB images is challenging but has gained
steady progress recently by training deep models with annotated 3D poses.
However, annotating 3D poses is difficult, and as such only a few 3D hand pose
datasets are available, all with limited sample sizes. In this study, we propose
a new framework for training 3D pose estimation models from RGB images without
using explicit 3D annotations, i.e., trained with only 2D information. Our
framework is motivated by two observations: 1) videos provide richer information
for estimating 3D poses than static images do; 2) estimated 3D poses ought
to be consistent whether the video is viewed in forward or reverse order.
We leverage these two observations to develop a self-supervised
learning model called the temporal-aware self-supervised network (TASSN). By
enforcing temporal consistency constraints, TASSN learns 3D hand poses and
meshes from videos with only 2D keypoint position annotations. Experiments show
that our model achieves surprisingly good results, with 3D estimation accuracy
on par with state-of-the-art models trained with 3D annotations,
highlighting the benefit of temporal consistency in constraining 3D
prediction models.
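The two learning signals described in the abstract, 2D keypoint reprojection as the only explicit supervision and forward/reverse temporal consistency on the predicted 3D poses, can be sketched compactly. The PyTorch snippet below is a minimal illustration under an assumed weak-perspective camera model; the function names, tensor shapes, and camera parametrization are assumptions made for this sketch, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the two TASSN-style training signals:
# (1) reprojection of predicted 3D joints onto annotated 2D keypoints, and
# (2) consistency between poses predicted on a clip and on its reversed copy.
import torch

def reprojection_loss(pose3d, kp2d, scale, trans):
    """L1 error between weak-perspective projections of 3D joints
    and annotated 2D keypoints.
    pose3d: (T, J, 3), kp2d: (T, J, 2), scale: (T, 1, 1), trans: (T, 1, 2)."""
    proj = scale * pose3d[..., :2] + trans  # assumed weak-perspective camera
    return (proj - kp2d).abs().mean()

def temporal_consistency_loss(pose3d_fwd, pose3d_rev):
    """Poses estimated on the reversed clip, flipped back to forward
    order, should agree with the forward-order estimates."""
    return ((pose3d_fwd - torch.flip(pose3d_rev, dims=[0])) ** 2).mean()

# Toy usage with random tensors standing in for network outputs.
T, J = 8, 21  # frames, hand joints
pose_fwd = torch.randn(T, J, 3)
pose_rev = torch.flip(pose_fwd, dims=[0]) + 0.01 * torch.randn(T, J, 3)
kp2d = torch.randn(T, J, 2)
scale, trans = torch.ones(T, 1, 1), torch.zeros(T, 1, 2)
total = reprojection_loss(pose_fwd, kp2d, scale, trans) \
        + temporal_consistency_loss(pose_fwd, pose_rev)
print(total.item())
```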
Related papers
- SpaRP: Fast 3D Object Reconstruction and Pose Estimation from Sparse Views [36.02533658048349]
We propose a novel method, SpaRP, to reconstruct a 3D textured mesh and estimate the relative camera poses for sparse-view images.
SpaRP distills knowledge from 2D diffusion models and finetunes them to implicitly deduce the 3D spatial relationships between the sparse views.
It requires only about 20 seconds to produce a textured mesh and camera poses for the input views.
arXiv Detail & Related papers (2024-08-19T17:53:10Z)
- Unsupervised Learning of Category-Level 3D Pose from Object-Centric Videos [15.532504015622159]
Category-level 3D pose estimation is a fundamentally important problem in computer vision and robotics.
We tackle the problem of learning to estimate the category-level 3D pose only from casually taken object-centric videos.
arXiv Detail & Related papers (2024-07-05T09:43:05Z)
- Probing the 3D Awareness of Visual Foundation Models [56.68380136809413]
We analyze the 3D awareness of visual foundation models.
We conduct experiments using task-specific probes and zero-shot inference procedures on frozen features.
arXiv Detail & Related papers (2024-04-12T17:58:04Z)
- Weakly-supervised Pre-training for 3D Human Pose Estimation via Perspective Knowledge [36.65402869749077]
We propose a novel method to extract weak 3D information directly from 2D images without 3D pose supervision.
We propose a weakly-supervised pre-training (WSP) strategy to distinguish the depth relationship between two points in an image.
WSP achieves state-of-the-art results on two widely-used benchmarks.
arXiv Detail & Related papers (2022-11-22T03:35:15Z)
- Learning Temporal 3D Human Pose Estimation with Pseudo-Labels [3.0954251281114513]
We present a simple, yet effective, approach for self-supervised 3D human pose estimation.
We rely on triangulating 2D body pose estimates from a multi-view camera system; a generic two-view triangulation sketch appears after this list.
Our method achieves state-of-the-art performance on the Human3.6M and MPI-INF-3DHP benchmarks.
arXiv Detail & Related papers (2021-10-14T17:40:45Z)
- TriPose: A Weakly-Supervised 3D Human Pose Estimation via Triangulation from Video [23.00696619207748]
Estimating 3D human poses from video is a challenging problem.
The lack of 3D human pose annotations is a major obstacle for supervised training and for generalization to unseen datasets.
We propose a weakly-supervised training scheme that does not require 3D annotations or calibrated cameras.
arXiv Detail & Related papers (2021-05-14T00:46:48Z)
- Model-based 3D Hand Reconstruction via Self-Supervised Learning [72.0817813032385]
Reconstructing a 3D hand from a single-view RGB image is challenging due to various hand configurations and depth ambiguity.
We propose S2HAND, a self-supervised 3D hand reconstruction network that can jointly estimate pose, shape, texture, and the camera viewpoint.
For the first time, we demonstrate the feasibility of training an accurate 3D hand reconstruction network without relying on manual annotations.
arXiv Detail & Related papers (2021-03-22T10:12:43Z)
- MM-Hand: 3D-Aware Multi-Modal Guided Hand Generative Network for 3D Hand Pose Synthesis [81.40640219844197]
Estimating the 3D hand pose from a monocular RGB image is important but challenging.
A solution is training on large-scale RGB hand images with accurate 3D hand keypoint annotations.
We have developed a learning-based approach to synthesize realistic, diverse, and 3D pose-preserving hand images.
arXiv Detail & Related papers (2020-10-02T18:27:34Z)
- From Image Collections to Point Clouds with Self-supervised Shape and Pose Networks [53.71440550507745]
Reconstructing 3D models from 2D images is one of the fundamental problems in computer vision.
We propose a deep learning technique for 3D object reconstruction from a single image.
We learn both 3D point cloud reconstruction and pose estimation networks in a self-supervised manner.
arXiv Detail & Related papers (2020-05-05T04:25:16Z)
- Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis [72.34794624243281]
We propose a self-supervised learning framework to disentangle variations from unlabeled video frames.
Our differentiable formalization, bridging the representation gap between the 3D pose and spatial part maps, allows us to operate on videos with diverse camera movements.
arXiv Detail & Related papers (2020-04-09T07:55:01Z)
- Exemplar Fine-Tuning for 3D Human Model Fitting Towards In-the-Wild 3D Human Pose Estimation [107.07047303858664]
Large-scale human datasets with 3D ground-truth annotations are difficult to obtain in the wild.
We address this problem by augmenting existing 2D datasets with high-quality 3D pose fits.
The resulting annotations are sufficient to train 3D pose regressor networks from scratch that outperform the current state of the art on in-the-wild benchmarks.
arXiv Detail & Related papers (2020-04-07T20:21:18Z)
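Two entries above (the pseudo-label paper and TriPose) supervise 3D pose by triangulating 2D detections across camera views. Below is a minimal sketch of classical two-view DLT triangulation; it is a generic textbook routine, not either paper's pipeline, and the toy camera matrices are assumptions for illustration.

```python
# Generic two-view DLT triangulation sketch; not taken from either paper.
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover a 3D point from its 2D projections in two views.
    P1, P2: (3, 4) camera projection matrices; x1, x2: (2,) pixel coords."""
    # Each view contributes two rows of the homogeneous system A @ X = 0,
    # derived from the cross product x × (P @ X) = 0.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]               # null vector of A (least-squares solution)
    return X[:3] / X[3]      # dehomogenize

# Toy usage: one camera at the origin, one shifted along the x-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point(P1, P2, x1, x2))  # approximately [0.2, -0.1, 5.0]
```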
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.