3D Human Pose Estimation for Free-form Activity Using WiFi Signals
- URL: http://arxiv.org/abs/2110.08314v1
- Date: Fri, 15 Oct 2021 18:47:16 GMT
- Title: 3D Human Pose Estimation for Free-form Activity Using WiFi Signals
- Authors: Yili Ren and Jie Yang
- Abstract summary: Winect is a 3D human pose tracking system for free-form activity using commodity WiFi devices.
Our system tracks free-form activity by estimating a 3D skeleton pose that consists of a set of joints of the human body.
- Score: 5.2245900672091
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: WiFi human sensing has become increasingly attractive in enabling emerging
human-computer interaction applications. The corresponding technique has
gradually evolved from the classification of multiple activity types to more
fine-grained tracking of 3D human poses. However, existing WiFi-based 3D human
pose tracking is limited to a set of predefined activities. In this work, we
present Winect, a 3D human pose tracking system for free-form activity using
commodity WiFi devices. Our system tracks free-form activity by estimating a 3D
skeleton pose that consists of a set of joints of the human body. In
particular, we combine signal separation and joint movement modeling to achieve
free-form activity tracking. Our system first identifies the moving limbs by
leveraging the two-dimensional angle of arrival of the signals reflected off
the human body and separates the entangled signals for each limb. Then, it
tracks each limb and constructs a 3D skeleton of the body by modeling the
inherent relationship between the movements of the limb and the corresponding
joints. Our evaluation results show that Winect is environment-independent and
achieves centimeter-level accuracy for free-form activity tracking under
various challenging environments, including non-line-of-sight (NLoS)
scenarios.
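The first step of the pipeline described in the abstract — identifying where reflected signals arrive from via their two-dimensional angle of arrival — can be illustrated with a standard 2D MUSIC sketch on a simulated uniform rectangular antenna array. This is a minimal, self-contained illustration of 2D AoA estimation in general, not the paper's actual implementation; the array size, spacing, angles, and noise level are all assumptions made for the example.

```python
import numpy as np

# Toy 2D AoA estimation via MUSIC on a 4x4 uniform rectangular array
# with half-wavelength element spacing (illustrative assumptions only).
rng = np.random.default_rng(0)
Mx, My = 4, 4
snapshots = 200

def steering(az, el):
    """URA steering vector for azimuth/elevation in radians (d = lambda/2)."""
    u = np.sin(el) * np.cos(az)   # direction cosines
    v = np.sin(el) * np.sin(az)
    m, n = np.meshgrid(np.arange(Mx), np.arange(My), indexing="ij")
    return np.exp(1j * np.pi * (m * u + n * v)).ravel()

# Simulate one reflection arriving from azimuth 40 deg, elevation 30 deg.
az_true, el_true = np.deg2rad(40.0), np.deg2rad(30.0)
a = steering(az_true, el_true)
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((Mx * My, snapshots))
               + 1j * rng.standard_normal((Mx * My, snapshots)))
X = np.outer(a, s) + noise

# MUSIC: the spectrum peaks where a candidate steering vector is
# (nearly) orthogonal to the noise subspace of the sample covariance.
R = X @ X.conj().T / snapshots
eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues ascending
En = eigvecs[:, :-1]                      # one source -> drop top eigenvector

az_grid = np.deg2rad(np.arange(0, 91))
el_grid = np.deg2rad(np.arange(0, 91))
spectrum = np.zeros((az_grid.size, el_grid.size))
for i, az in enumerate(az_grid):
    for j, el in enumerate(el_grid):
        v = steering(az, el)
        spectrum[i, j] = 1.0 / np.real(v.conj() @ En @ En.conj().T @ v)

i, j = np.unravel_index(np.argmax(spectrum), spectrum.shape)
az_hat, el_hat = np.rad2deg(az_grid[i]), np.rad2deg(el_grid[j])
print(f"estimated AoA: azimuth {az_hat:.0f} deg, elevation {el_hat:.0f} deg")
```

In Winect, per-limb AoA estimates like this are the basis for separating the entangled reflections before the joint-movement model reconstructs the 3D skeleton; the system's actual signal model and array geometry are described in the paper itself.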
Related papers
- Sitcom-Crafter: A Plot-Driven Human Motion Generation System in 3D Scenes [83.55301458112672]
Sitcom-Crafter is a system for human motion generation in 3D space.
Central to its generation modules is our novel 3D scene-aware human-human interaction module.
Augmentation modules encompass plot comprehension for command generation, motion synchronization for seamless integration of different motion types.
arXiv Detail & Related papers (2024-10-14T17:56:19Z)
- Towards Precise 3D Human Pose Estimation with Multi-Perspective Spatial-Temporal Relational Transformers [28.38686299271394]
We propose a framework for 3D sequence-to-sequence (seq2seq) human pose detection.
Firstly, the spatial module represents the human pose feature by intra-image content, while the frame-image relation module extracts temporal relationships.
Our method is evaluated on Human3.6M, a popular 3D human pose detection dataset.
arXiv Detail & Related papers (2024-01-30T03:00:25Z)
- Intelligent Knee Sleeves: A Real-time Multimodal Dataset for 3D Lower Body Motion Estimation Using Smart Textile [2.2008680042670123]
We present a multimodal dataset with benchmarks collected using a novel pair of Intelligent Knee Sleeves for human pose estimation.
Our system utilizes synchronized datasets that comprise time-series data from the Knee Sleeves and the corresponding ground truth labels from the visualized motion capture camera system.
We employ these to generate 3D human models solely based on the wearable data of individuals performing different activities.
arXiv Detail & Related papers (2023-10-02T00:34:21Z)
- GRIP: Generating Interaction Poses Using Spatial Cues and Latent Consistency [57.9920824261925]
Hands are dexterous and highly versatile manipulators that are central to how humans interact with objects and their environment.
Modeling realistic hand-object interactions is critical for applications in computer graphics, computer vision, and mixed reality.
GRIP is a learning-based method that takes as input the 3D motion of the body and the object, and synthesizes realistic motion for both hands before, during, and after object interaction.
arXiv Detail & Related papers (2023-08-22T17:59:51Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- 3D Human Pose Estimation for Free-form and Moving Activities Using WiFi [7.80781386916681]
GoPose is a 3D skeleton-based human pose estimation system that uses WiFi devices at home.
Our system does not require a user to wear or carry any sensors and can reuse the WiFi devices that already exist in a home environment for mass adoption.
arXiv Detail & Related papers (2022-04-16T21:58:24Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
- Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-localization in Large Scenes from Body-Mounted Sensors [71.29186299435423]
We introduce the Human POSEitioning System (HPS), a method to recover the full 3D pose of a human registered with a 3D scan of the surrounding environment.
We show that our optimization-based integration exploits the benefits of the two, resulting in pose accuracy free of drift.
HPS could be used for VR/AR applications where humans interact with the scene without requiring direct line of sight with an external camera.
arXiv Detail & Related papers (2021-03-31T17:58:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.