Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach
- URL: http://arxiv.org/abs/2003.11163v2
- Date: Fri, 10 Apr 2020 05:48:51 GMT
- Title: Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach
- Authors: Zhe Zhang, Chunyu Wang, Wenhu Qin, Wenjun Zeng
- Abstract summary: We propose to estimate 3D human pose from multi-view images and a few IMUs attached to the person's limbs.
The method first detects 2D poses from the two signals and then lifts them to 3D space.
The simple two-step approach reduces the error of the state-of-the-art by a large margin on a public dataset.
- Score: 76.10879433430466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose to estimate 3D human pose from multi-view images and a few IMUs
attached to a person's limbs. The method first detects 2D poses from the two
signals and then lifts them to 3D space. We present a geometric approach to
reinforce the visual features of each pair of joints based on the IMUs. This
notably improves 2D pose estimation accuracy, especially when one joint is
occluded. We call this approach the Orientation Regularized Network (ORN).
Then we lift the multi-view 2D poses to the 3D space by an Orientation
Regularized Pictorial Structure Model (ORPSM) which jointly minimizes the
projection error between the 3D and 2D poses, along with the discrepancy
between the 3D pose and IMU orientations. The simple two-step approach reduces
the error of the state-of-the-art by a large margin on a public dataset. Our
code will be released at https://github.com/CHUNYUWANG/imu-human-pose-pytorch.
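To make the two-step pipeline concrete, the sketch below illustrates the kind of joint cost an ORPSM-style lifting step minimizes: the reprojection error between a candidate 3D pose and the multi-view 2D detections, plus the angular discrepancy between each instrumented limb and its IMU-measured orientation. This is a toy illustration under assumed data layouts (the function name, the `limbs`/`imu_dirs` structures, and the weight `lam` are all hypothetical), not the authors' released code.

```python
import numpy as np

def orpsm_style_cost(pose3d, poses2d, cameras, imu_dirs, limbs, lam=1.0):
    """Toy ORPSM-style objective: reprojection error + IMU orientation term.

    pose3d   : (J, 3) candidate 3D joint positions
    poses2d  : list of (J, 2) detected 2D poses, one per camera view
    cameras  : list of (3, 4) projection matrices, one per camera view
    imu_dirs : dict {limb index: unit 3-vector} IMU-measured limb directions
    limbs    : list of (parent, child) joint index pairs
    """
    # Reprojection term: distance between projected 3D joints and 2D detections.
    proj_err = 0.0
    for P, pose2d in zip(cameras, poses2d):
        homo = np.hstack([pose3d, np.ones((len(pose3d), 1))])  # (J, 4) homogeneous
        uvw = homo @ P.T                                       # (J, 3) image coords
        uv = uvw[:, :2] / uvw[:, 2:3]                          # perspective divide
        proj_err += np.linalg.norm(uv - pose2d, axis=1).sum()

    # Orientation term: angle between each limb vector and its IMU direction.
    ori_err = 0.0
    for k, (parent, child) in enumerate(limbs):
        if k not in imu_dirs:          # only a few limbs carry IMUs
            continue
        bone = pose3d[child] - pose3d[parent]
        bone = bone / (np.linalg.norm(bone) + 1e-8)
        ori_err += np.arccos(np.clip(bone @ imu_dirs[k], -1.0, 1.0))

    return proj_err + lam * ori_err
```

In the paper the minimization runs over a discretized pose space inside a pictorial structure model; the cost above only conveys the two terms being balanced.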
Related papers
- MPL: Lifting 3D Human Pose from Multi-view 2D Poses [75.26416079541723]
We propose combining 2D pose estimation, for which large and rich training datasets exist, with 2D-to-3D pose lifting performed by a transformer-based network.
Our experiments demonstrate decreases of up to 45% in MPJPE error compared to 3D poses obtained by triangulating the 2D poses (a minimal triangulation sketch appears after this list).
arXiv Detail & Related papers (2024-08-20T12:55:14Z)
- Unsupervised Multi-Person 3D Human Pose Estimation From 2D Poses Alone [4.648549457266638]
We present one of the first studies investigating the feasibility of unsupervised multi-person 2D-3D pose estimation.
Our method involves independently lifting each subject's 2D pose to 3D, before combining them in a shared 3D coordinate system.
This by itself enables us to retrieve an accurate 3D reconstruction of their poses.
arXiv Detail & Related papers (2023-09-26T11:42:56Z)
- MPM: A Unified 2D-3D Human Pose Representation via Masked Pose Modeling [59.74064212110042]
MPM can handle multiple tasks, including 3D human pose estimation, 3D pose estimation from occluded 2D poses, and 3D pose completion, in a single framework.
We conduct extensive experiments and ablation studies on several widely used human pose datasets and achieve state-of-the-art performance on MPI-INF-3DHP.
arXiv Detail & Related papers (2023-06-29T10:30:00Z)
- Shape-aware Multi-Person Pose Estimation from Multi-View Images [47.13919147134315]
Our proposed coarse-to-fine pipeline first aggregates noisy 2D observations from multiple camera views into 3D space.
The final pose estimates are attained from a novel optimization scheme which links high-confidence multi-view 2D observations and 3D joint candidates.
arXiv Detail & Related papers (2021-10-05T20:04:21Z)
- VoxelTrack: Multi-Person 3D Human Pose Estimation and Tracking in the Wild [98.69191256693703]
We present VoxelTrack for multi-person 3D pose estimation and tracking from a few cameras which are separated by wide baselines.
It employs a multi-branch network to jointly estimate 3D poses and re-identification (Re-ID) features for all people in the environment.
It outperforms the state-of-the-art methods by a large margin on three public datasets including Shelf, Campus and CMU Panoptic.
arXiv Detail & Related papers (2021-08-05T08:35:44Z)
- SVMA: A GAN-based model for Monocular 3D Human Pose Estimation [0.8379286663107844]
We present an unsupervised GAN-based model to recover 3D human pose from 2D joint locations extracted from a single image.
Using a reprojection constraint, our model estimates the camera so that the estimated 3D pose can be reprojected to the original 2D pose.
Results on Human3.6M show that our method outperforms all state-of-the-art methods, and results on MPI-INF-3DHP show that it outperforms the state of the art by approximately 15.0%.
arXiv Detail & Related papers (2021-06-10T09:43:57Z)
- SMAP: Single-Shot Multi-Person Absolute 3D Pose Estimation [46.85865451812981]
We propose a novel system that first regresses a set of 2.5D representations of body parts and then reconstructs the 3D absolute poses based on these 2.5D representations with a depth-aware part association algorithm.
Such a single-shot bottom-up scheme allows the system to better learn and reason about the inter-person depth relationship, improving both 3D and 2D pose estimation.
arXiv Detail & Related papers (2020-08-26T09:56:07Z)
- Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose [70.23652933572647]
We propose a novel graph convolutional neural network (GraphCNN)-based system that estimates the 3D coordinates of human mesh vertices directly from the 2D human pose.
We show that our Pose2Mesh outperforms the previous 3D human pose and mesh estimation methods on various benchmark datasets.
arXiv Detail & Related papers (2020-08-20T16:01:56Z)
- VoxelPose: Towards Multi-Camera 3D Human Pose Estimation in Wild Environment [80.77351380961264]
We present an approach to estimate 3D poses of multiple people from multiple camera views.
We present an end-to-end solution that operates in 3D space, thereby avoiding incorrect decisions in 2D space.
We propose Pose Regression Network (PRN) to estimate a detailed 3D pose for each proposal.
arXiv Detail & Related papers (2020-04-13T23:50:01Z)
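Several entries above (e.g. MPL and VoxelPose) are framed against the standard baseline of triangulating multi-view 2D poses into 3D. For reference, a minimal direct linear transform (DLT) triangulation of a single joint looks like the sketch below; the function name is illustrative, and this is a generic multi-view geometry baseline rather than any specific paper's implementation.

```python
import numpy as np

def triangulate_joint(points2d, cameras):
    """Linear (DLT) triangulation of one joint from several calibrated views.

    points2d : list of (u, v) 2D detections, one per view
    cameras  : list of (3, 4) projection matrices, one per view
    Returns the least-squares 3D point.
    """
    rows = []
    for (u, v), P in zip(points2d, cameras):
        # Each view gives two linear constraints on the homogeneous point X:
        #   u * (P[2] @ X) = P[0] @ X   and   v * (P[2] @ X) = P[1] @ X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)              # (2 * num_views, 4)
    _, _, vt = np.linalg.svd(A)     # null space of A via SVD
    X = vt[-1]
    return X[:3] / X[3]             # dehomogenize
```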
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.