Multi-View Matching (MVM): Facilitating Multi-Person 3D Pose Estimation
Learning with Action-Frozen People Video
- URL: http://arxiv.org/abs/2004.05275v1
- Date: Sat, 11 Apr 2020 01:09:50 GMT
- Authors: Yeji Shen, C.-C. Jay Kuo
- Abstract summary: MVM method generates reliable 3D human poses from a large-scale video dataset.
We train a neural network that takes a single image as the input for multi-person 3D pose estimation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To tackle the challenging problem of multi-person 3D pose estimation from a
single image, we propose a multi-view matching (MVM) method in this work. The
MVM method generates reliable 3D human poses from a large-scale video dataset,
called the Mannequin dataset, which contains action-frozen people imitating
mannequins. With a large amount of in-the-wild video data labeled by 3D
supervisions automatically generated by MVM, we are able to train a neural
network that takes a single image as the input for multi-person 3D pose
estimation. The core technology of MVM lies in effective alignment of 2D poses
obtained from multiple views of a static scene that has a strong geometric
constraint. Our objective is to maximize mutual consistency of 2D poses
estimated in multiple frames, where geometric constraints as well as appearance
similarities are taken into account simultaneously. To demonstrate the
effectiveness of 3D supervisions provided by the MVM method, we conduct
experiments on the 3DPW and the MSCOCO datasets and show that our proposed
solution offers the state-of-the-art performance.
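The matching step described in the abstract, aligning 2D poses across frames by combining a geometric consistency term with an appearance similarity term, can be illustrated with a simplified sketch. This is not the authors' implementation: the epipolar-distance cost, the cosine appearance cost, the weights, and all function names below are illustrative assumptions, and the paper's full method also handles more than two views.

```python
# Hypothetical sketch of an MVM-style cross-frame matching step.
# Each candidate pair of 2D poses (one per frame) is scored by
# (a) how well frame-B keypoints lie on the epipolar lines induced
#     by the matching frame-A keypoints (geometric consistency), and
# (b) how similar the two persons' appearance embeddings are.
# The assignment is then solved globally with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def point_line_dist(pts_h, lines):
    """Distance of homogeneous 2D points (K,3) to lines (K,3)."""
    num = np.abs(np.sum(pts_h * lines, axis=1))
    den = np.linalg.norm(lines[:, :2], axis=1) + 1e-9
    return num / den

def pair_cost(kps_a, kps_b, feat_a, feat_b, F, w_geo=1.0, w_app=1.0):
    """Cost of hypothesizing that pose kps_a (frame A) and pose kps_b
    (frame B) belong to the same person. F is the fundamental matrix
    relating the two frames (a static scene gives this constraint)."""
    ka = np.hstack([kps_a, np.ones((len(kps_a), 1))])  # homogeneous coords
    kb = np.hstack([kps_b, np.ones((len(kps_b), 1))])
    # Epipolar line in frame B for each frame-A keypoint: l_b = F @ x_a
    geo = point_line_dist(kb, ka @ F.T).mean()
    # Appearance term: 1 - cosine similarity of person embeddings
    app = 1.0 - feat_a @ feat_b / (
        np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-9)
    return w_geo * geo + w_app * app

def match_poses(poses_a, poses_b, feats_a, feats_b, F):
    """Globally consistent one-to-one matching of poses across frames."""
    C = np.array([[pair_cost(pa, pb, fa, fb, F)
                   for pb, fb in zip(poses_b, feats_b)]
                  for pa, fa in zip(poses_a, feats_a)])
    rows, cols = linear_sum_assignment(C)
    return list(zip(rows.tolist(), cols.tolist()))
```

Matched pairs could then be triangulated to produce the pseudo 3D supervision the paper uses for training; the weighting between the geometric and appearance terms is a design choice the sketch leaves as simple constants.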
Related papers
- Self-learning Canonical Space for Multi-view 3D Human Pose Estimation [57.969696744428475]
Multi-view 3D human pose estimation is naturally superior to its single-view counterpart.
Accurate annotations for such data are, however, hard to obtain.
We propose a fully self-supervised framework named cascaded multi-view aggregating network (CMANet).
CMANet is superior to state-of-the-art methods in extensive quantitative and qualitative analysis.
arXiv Detail & Related papers (2024-03-19T04:54:59Z)
- MM-Point: Multi-View Information-Enhanced Multi-Modal Self-Supervised 3D Point Cloud Understanding [4.220064723125481]
Multi-view 2D information can provide superior self-supervised signals for 3D objects.
MM-Point is driven by intra-modal and inter-modal similarity objectives.
It achieves a peak accuracy of 92.4% on the synthetic dataset ModelNet40, and a top accuracy of 87.8% on the real-world dataset ScanObjectNN.
arXiv Detail & Related papers (2024-02-15T15:10:17Z) - Direct Multi-view Multi-person 3D Pose Estimation [138.48139701871213]
We present Multi-view Pose transformer (MvP) for estimating multi-person 3D poses from multi-view images.
MvP directly regresses the multi-person 3D poses in a clean and efficient way, without relying on intermediate tasks.
We show experimentally that our MvP model outperforms the state-of-the-art methods on several benchmarks while being much more efficient.
arXiv Detail & Related papers (2021-11-07T13:09:20Z) - Graph-Based 3D Multi-Person Pose Estimation Using Multi-View Images [79.70127290464514]
We decompose the task into two stages: person localization and pose estimation.
We then propose three task-specific graph neural networks for effective message passing.
Our approach achieves state-of-the-art performance on CMU Panoptic and Shelf datasets.
arXiv Detail & Related papers (2021-09-13T11:44:07Z) - Unsupervised Cross-Modal Alignment for Multi-Person 3D Pose Estimation [52.94078950641959]
We present a deployment-friendly, fast bottom-up framework for multi-person 3D human pose estimation.
We adopt a novel neural representation of multi-person 3D pose which unifies the position of person instances with their corresponding 3D pose representation.
We propose a practical deployment paradigm where paired 2D or 3D pose annotations are unavailable.
arXiv Detail & Related papers (2020-08-04T07:54:25Z) - Weakly-Supervised 3D Human Pose Learning via Multi-view Images in the
Wild [101.70320427145388]
We propose a weakly-supervised approach that does not require 3D annotations and learns to estimate 3D poses from unlabeled multi-view data.
We evaluate our proposed approach on two large-scale datasets.
arXiv Detail & Related papers (2020-03-17T08:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.