Ego-Body Pose Estimation via Ego-Head Pose Estimation
- URL: http://arxiv.org/abs/2212.04636v3
- Date: Mon, 28 Aug 2023 02:51:25 GMT
- Title: Ego-Body Pose Estimation via Ego-Head Pose Estimation
- Authors: Jiaman Li, C. Karen Liu, Jiajun Wu
- Abstract summary: Estimating 3D human motion from an egocentric video sequence plays a critical role in human behavior understanding and has various applications in VR/AR.
We propose a new method, Ego-Body Pose Estimation via Ego-Head Pose Estimation (EgoEgo), which decomposes the problem into two stages, connected by the head motion as an intermediate representation.
This disentanglement of head and body pose eliminates the need for training datasets with paired egocentric videos and 3D human motion.
- Score: 22.08240141115053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating 3D human motion from an egocentric video sequence plays a critical role in human behavior understanding and has various applications in VR/AR. However, naively learning a mapping between egocentric videos and human motions is challenging, because the user's body is often unobserved by the front-facing camera placed on the head of the user. In addition, collecting large-scale, high-quality datasets with paired egocentric videos and 3D human motions requires accurate motion capture devices, which often limit the variety of scenes in the videos to lab-like environments. To eliminate the need for paired egocentric video and human motions, we propose a new method, Ego-Body Pose Estimation via Ego-Head Pose Estimation (EgoEgo), which decomposes the problem into two stages, connected by the head motion as an intermediate representation. EgoEgo first integrates SLAM and a learning approach to estimate accurate head motion. Subsequently, leveraging the estimated head pose as input, EgoEgo utilizes conditional diffusion to generate multiple plausible full-body motions. This disentanglement of head and body pose eliminates the need for training datasets with paired egocentric videos and 3D human motion, enabling us to leverage large-scale egocentric video datasets and motion capture datasets separately. Moreover, for systematic benchmarking, we develop a synthetic dataset, AMASS-Replica-Ego-Syn (ARES), with paired egocentric videos and human motion. On both ARES and real data, our EgoEgo model performs significantly better than the current state-of-the-art methods.
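The abstract describes a two-stage pipeline with head motion as the only interface between the stages. Below is a minimal sketch of that decomposition; the function bodies are random stand-ins (not the authors' released code), and every name and shape here is an assumption for illustration.

```python
# Hypothetical sketch of the EgoEgo two-stage decomposition. The bodies
# are stand-ins: stage 1 really combines SLAM with a learned component,
# and stage 2 is a trained conditional diffusion model.
import numpy as np

def estimate_head_motion(video_frames):
    """Stage 1 stand-in: head trajectory from egocentric video.

    Returns a per-frame head pose (3D translation + 3D axis-angle
    rotation) with a plausible shape; the real system fuses SLAM
    output with a learning-based estimate.
    """
    num_frames = len(video_frames)
    return np.random.randn(num_frames, 6)  # hypothetical (T, 6) sequence

def sample_body_motions(head_poses, num_samples=3):
    """Stage 2 stand-in: conditional diffusion over full-body motion.

    The real model denoises body-motion sequences conditioned on the
    head trajectory; this stub returns `num_samples` random SMPL-like
    joint-angle sequences to reflect the one-to-many output.
    """
    num_frames = head_poses.shape[0]
    return [np.random.randn(num_frames, 24, 3) for _ in range(num_samples)]

# Head motion is the sole interface between the stages, which is why the
# two stages can be trained separately on large-scale egocentric video
# datasets and on mocap datasets, without paired supervision.
frames = [np.zeros((256, 256, 3)) for _ in range(120)]  # dummy 4-second clip
head = estimate_head_motion(frames)
bodies = sample_body_motions(head)
print(head.shape, len(bodies), bodies[0].shape)  # (120, 6) 3 (120, 24, 3)
```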
Related papers
- Estimating Body and Hand Motion in an Ego-sensed World [64.08911275906544]
We present EgoAllo, a system for human motion estimation from a head-mounted device.
Using only egocentric SLAM poses and images, EgoAllo guides sampling from a conditional diffusion model to estimate 3D body pose, height, and hand parameters (a generic conditional sampler of this kind is sketched after this list).
arXiv Detail & Related papers (2024-10-04T17:59:57Z)
- EgoAvatar: Egocentric View-Driven and Photorealistic Full-body Avatars [56.56236652774294]
We propose a person-specific egocentric telepresence approach, which jointly models the photoreal digital avatar while also driving it from a single egocentric video.
Our experiments demonstrate a clear step towards egocentric and photoreal telepresence as our method outperforms baselines as well as competing methods.
arXiv Detail & Related papers (2024-09-22T22:50:27Z)
- EgoGaussian: Dynamic Scene Understanding from Egocentric Video with 3D Gaussian Splatting [95.44545809256473]
EgoGaussian is a method capable of simultaneously reconstructing 3D scenes and dynamically tracking 3D object motion from RGB egocentric input alone.
We show significant improvements in terms of both dynamic object and background reconstruction quality compared to the state-of-the-art.
arXiv Detail & Related papers (2024-06-28T10:39:36Z)
- EMAG: Ego-motion Aware and Generalizable 2D Hand Forecasting from Egocentric Videos [9.340890244344497]
Existing methods for forecasting 2D hand positions rely on visual representations and mainly focus on hand-object interactions.
We propose EMAG, an ego-motion-aware and generalizable 2D hand forecasting method.
Our model outperforms prior methods by 1.7% and 7.0% on intra- and cross-dataset evaluations, respectively.
arXiv Detail & Related papers (2024-05-30T13:15:18Z)
- EgoGen: An Egocentric Synthetic Data Generator [53.32942235801499]
EgoGen is a new synthetic data generator that can produce accurate and rich ground-truth training data for egocentric perception tasks.
At the heart of EgoGen is a novel human motion synthesis model that directly leverages egocentric visual inputs of a virtual human to sense the 3D environment.
We demonstrate EgoGen's efficacy in three tasks: mapping and localization for head-mounted cameras, egocentric camera tracking, and human mesh recovery from egocentric views.
arXiv Detail & Related papers (2024-01-16T18:55:22Z)
- 3D Human Pose Perception from Egocentric Stereo Videos [67.9563319914377]
We propose a new transformer-based framework to improve egocentric stereo 3D human pose estimation.
Our method is able to accurately estimate human poses even in challenging scenarios, such as crouching and sitting.
We will release UnrealEgo2, UnrealEgo-RW, and trained models on our project page.
arXiv Detail & Related papers (2023-12-30T21:21:54Z)
- EgoHumans: An Egocentric 3D Multi-Human Benchmark [37.375846688453514]
We present EgoHumans, a new multi-view multi-human video benchmark to advance the state-of-the-art of egocentric human 3D pose estimation and tracking.
We propose a novel 3D capture setup to construct a comprehensive egocentric multi-human benchmark in the wild.
We leverage consumer-grade, camera-equipped wearable glasses for the egocentric view, which enables us to capture dynamic activities such as tennis, fencing, and volleyball.
arXiv Detail & Related papers (2023-05-25T21:37:36Z)
- UnrealEgo: A New Dataset for Robust Egocentric 3D Human Motion Capture [70.59984501516084]
UnrealEgo is a new large-scale naturalistic dataset for egocentric 3D human pose estimation.
It is based on an advanced concept of eyeglasses equipped with two fisheye cameras that can be used in unconstrained environments.
We propose a new benchmark method with a simple but effective idea of devising a 2D keypoint estimation module for stereo inputs to improve 3D human pose estimation (a toy triangulation sketch appears after this list).
arXiv Detail & Related papers (2022-08-02T17:59:54Z)
- 4D Human Body Capture from Egocentric Video via 3D Scene Grounding [38.3169520384642]
We introduce a novel task of reconstructing a time series of second-person 3D human body meshes from monocular egocentric videos.
The unique viewpoint and rapid embodied camera motion of egocentric videos raise additional technical barriers for human body capture.
arXiv Detail & Related papers (2020-11-26T15:17:16Z)
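Several entries above (EgoEgo itself and EgoAllo) generate full-body motion by sampling a conditional diffusion model given the head trajectory. As a rough illustration, here is a generic DDPM-style ancestral sampling loop with conditioning; the noise predictor is a random stand-in, and every name, schedule, and shape is an assumption, not any paper's released code.

```python
# Generic conditional DDPM ancestral sampling, as a sketch of the
# head-conditioned body-motion generation used by several papers above.
import numpy as np

T_STEPS = 50                               # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T_STEPS)   # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_model(x_t, t, cond):
    """Stand-in noise predictor eps_theta(x_t, t, c). A real model would be
    a trained network that takes the head trajectory `cond` as conditioning."""
    rng = np.random.default_rng(t)
    return 0.1 * rng.standard_normal(x_t.shape)

def sample(cond, motion_shape=(120, 24, 3)):
    """Draw one body-motion sequence conditioned on a head trajectory."""
    x = np.random.randn(*motion_shape)     # start from pure Gaussian noise
    for t in reversed(range(T_STEPS)):
        eps = eps_model(x, t, cond)
        # Posterior mean of x_{t-1} given the predicted noise.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                          # add noise at all but the last step
            x += np.sqrt(betas[t]) * np.random.randn(*motion_shape)
    return x

head_traj = np.random.randn(120, 6)        # hypothetical conditioning signal
motion = sample(head_traj)
print(motion.shape)                        # (120, 24, 3)
```

Because the sampler starts from fresh noise each call, repeated calls with the same head trajectory yield multiple plausible body motions, matching the one-to-many nature of the task.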
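The UnrealEgo entry proposes estimating 2D keypoints separately in two stereo views to improve 3D pose estimation. A classic way to lift paired 2D keypoints to 3D is linear (DLT) triangulation; the toy sketch below assumes pinhole projection matrices for simplicity, whereas the paper uses fisheye cameras and a learned module.

```python
# Toy DLT triangulation of one keypoint seen in two calibrated views.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """DLT triangulation from two 3x4 projection matrices and pixel coords."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)            # null vector = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with a 3x4 matrix; returns pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy stereo rig: identity-pose left camera, right camera 6 cm to its right.
K = np.array([[500.0, 0, 128], [0, 500.0, 128], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.06], [0.0], [0.0]])])

X_true = np.array([0.1, -0.2, 1.5])        # a body joint 1.5 m away
uv1, uv2 = project(P1, X_true), project(P2, X_true)
print(triangulate(P1, P2, uv1, uv2))       # ~ [0.1, -0.2, 1.5]
```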
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.