3D Human Pose Estimation for Free-form and Moving Activities Using WiFi
- URL: http://arxiv.org/abs/2204.07878v1
- Date: Sat, 16 Apr 2022 21:58:24 GMT
- Title: 3D Human Pose Estimation for Free-form and Moving Activities Using WiFi
- Authors: Yili Ren and Jie Yang
- Abstract summary: GoPose is a 3D skeleton-based human pose estimation system that uses WiFi devices at home.
Our system does not require a user to wear or carry any sensors and can reuse the WiFi devices that already exist in a home environment for mass adoption.
- Score: 7.80781386916681
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents GoPose, a 3D skeleton-based human pose estimation system
that uses WiFi devices at home. Our system leverages the WiFi signals reflected
off the human body for 3D pose estimation. In contrast to prior systems that
need specialized hardware or dedicated sensors, our system does not require a
user to wear or carry any sensors and can reuse the WiFi devices that already
exist in a home environment for mass adoption. To realize such a system, we
leverage the 2D AoA spectrum of the signals reflected from the human body
together with deep learning techniques. In particular, the 2D AoA spectrum is
used to locate different parts of the human body as well as to enable
environment-independent pose estimation. Deep learning is incorporated to model
the complex relationship between the 2D AoA spectra and the 3D skeletons of
the human body for pose tracking. Our evaluation results show that GoPose
achieves around 4.7 cm accuracy under various scenarios, including tracking
unseen activities and operating under NLoS conditions.
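The abstract above describes two building blocks: a 2D angle-of-arrival (azimuth-elevation) spectrum computed from WiFi signals reflected off the body, and a deep network that regresses a 3D skeleton from those spectra. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation: the 3x3 planar antenna array, half-wavelength spacing, Bartlett (conventional) beamforming, the eight-frame input stack, the CNN architecture, and the 17-joint skeleton are all assumptions made for this example.

```python
# Minimal sketch of a 2D AoA spectrum + deep-learning regressor pipeline.
# NOT the GoPose implementation; array geometry, beamformer, and network
# architecture below are assumptions made for illustration only.
import numpy as np
import torch
import torch.nn as nn

C = 3e8                       # speed of light (m/s)
FREQ = 5.18e9                 # assumed WiFi channel center frequency (Hz)
LAM = C / FREQ                # wavelength
D = LAM / 2                   # assumed half-wavelength element spacing

# Element positions of an assumed 3x3 uniform planar array in the x-y plane.
ix, iy = np.meshgrid(np.arange(3), np.arange(3), indexing="ij")
POS = np.stack([ix.ravel() * D, iy.ravel() * D], axis=1)      # (9, 2)

def aoa_spectrum_2d(csi, az_grid, el_grid):
    """Bartlett 2D AoA spectrum from one CSI snapshot.

    csi: complex array of shape (9,), one sample per antenna element.
    Returns a real array of shape (len(az_grid), len(el_grid)).
    """
    spectrum = np.empty((len(az_grid), len(el_grid)))
    for i, az in enumerate(az_grid):
        for j, el in enumerate(el_grid):
            # Unit propagation direction projected onto the array plane.
            u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az)])
            steer = np.exp(1j * 2 * np.pi / LAM * POS @ u)     # steering vector
            spectrum[i, j] = np.abs(np.vdot(steer, csi)) ** 2  # |a^H x|^2
    return spectrum

class AoAToSkeleton(nn.Module):
    """Toy CNN mapping a stack of 2D AoA spectra to 17 joints x 3 coordinates."""
    def __init__(self, n_frames=8, n_joints=17):
        super().__init__()
        self.n_joints = n_joints
        self.net = nn.Sequential(
            nn.Conv2d(n_frames, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, n_joints * 3),
        )

    def forward(self, spectra):                  # spectra: (B, n_frames, H, W)
        return self.net(spectra).view(-1, self.n_joints, 3)

# Usage example: random CSI stands in for measurements reflected off the body.
az = np.linspace(-np.pi / 2, np.pi / 2, 60)
el = np.linspace(0, np.pi / 2, 30)
frames = [aoa_spectrum_2d(np.random.randn(9) + 1j * np.random.randn(9), az, el)
          for _ in range(8)]
x = torch.tensor(np.stack(frames), dtype=torch.float32).unsqueeze(0)  # (1, 8, 60, 30)
print(AoAToSkeleton()(x).shape)                  # torch.Size([1, 17, 3])
```

In practice the CSI would come per subcarrier and antenna from commodity WiFi NICs, and the spectra would be stacked over time before being fed to the regressor; the real system may use a different AoA estimator and network design.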
Related papers
- Exploring 3D Human Pose Estimation and Forecasting from the Robot's Perspective: The HARPER Dataset [52.22758311559]
We introduce HARPER, a novel dataset for 3D body pose estimation and forecasting in dyadic interactions between users and Spot.
The key novelty is the focus on the robot's perspective, i.e., on the data captured by the robot's sensors.
The scenario underlying HARPER includes 15 actions, of which 10 involve physical contact between the robot and users.
arXiv Detail & Related papers (2024-03-21T14:53:50Z)
- Cloth2Body: Generating 3D Human Body Mesh from 2D Clothing [54.29207348918216]
Cloth2Body needs to address new and emerging challenges raised by the partial observation of the input and the high diversity of the output.
We propose an end-to-end framework that can accurately estimate 3D body mesh parameterized by pose and shape from a 2D clothing image.
As shown by experimental results, the proposed framework achieves state-of-the-art performance and can effectively recover natural and diverse 3D body meshes from 2D images.
arXiv Detail & Related papers (2023-09-28T06:18:38Z) - DensePose From WiFi [86.61881052177228]
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
arXiv Detail & Related papers (2022-12-31T16:48:43Z) - 3D Human Mesh Construction Leveraging Wi-Fi [6.157977673335047]
Wi-Mesh is a vision-based 3D human mesh construction system.
The system uses WiFi to visualize the shape and deformations of the human body.
arXiv Detail & Related papers (2022-10-20T01:58:27Z) - Robust Person Identification: A WiFi Vision-based Approach [7.80781386916681]
We propose a WiFi vision-based system, 3D-ID, for person Re-ID in 3D space.
Our system leverages the advances of WiFi and deep learning to help WiFi devices see, identify, and recognize people.
arXiv Detail & Related papers (2022-09-30T22:54:30Z) - Semi-Perspective Decoupled Heatmaps for 3D Robot Pose Estimation from
Depth Maps [66.24554680709417]
Knowing the exact 3D location of workers and robots in a collaborative environment enables several real-world applications.
We propose a non-invasive framework based on depth devices and deep neural networks to estimate the 3D pose of robots from an external camera.
arXiv Detail & Related papers (2022-07-06T08:52:12Z) - 3D Human Pose Estimation for Free-form Activity Using WiFi Signals [5.2245900672091]
Winect is a 3D human pose tracking system for free-form activity using commodity WiFi devices.
Our system tracks free-form activity by estimating a 3D skeleton pose that consists of a set of joints of the human body.
arXiv Detail & Related papers (2021-10-15T18:47:16Z) - Human POSEitioning System (HPS): 3D Human Pose Estimation and
Self-localization in Large Scenes from Body-Mounted Sensors [71.29186299435423]
We introduce the Human POSEitioning System (HPS), a method to recover the full 3D pose of a human registered with a 3D scan of the surrounding environment.
We show that our optimization-based integration exploits the benefits of the two, resulting in pose accuracy free of drift.
HPS could be used for VR/AR applications where humans interact with the scene without requiring direct line of sight with an external camera.
arXiv Detail & Related papers (2021-03-31T17:58:31Z) - From Point to Space: 3D Moving Human Pose Estimation Using Commodity
WiFi [21.30069619479767]
We present Wi-Mose, the first 3D moving human pose estimation system using commodity WiFi.
We fuse the amplitude and phase into Channel State Information (CSI) images which can provide both pose and position information.
Experimental results show that Wi-Mose can localize keypoints with 29.7 mm and 37.8 mm Procrustes analysis Mean Per Joint Position Error (P-MPJPE) in the Line of Sight (LoS) and Non-Line of Sight (NLoS) scenarios, respectively (a minimal P-MPJPE sketch follows this list).
arXiv Detail & Related papers (2020-12-28T02:27:26Z) - Perceiving Humans: from Monocular 3D Localization to Social Distancing [93.03056743850141]
We present a new cost-effective vision-based method that perceives humans' locations in 3D and their body orientation from a single image.
We show that it is possible to rethink the concept of "social distancing" as a form of social interaction in contrast to a simple location-based rule.
arXiv Detail & Related papers (2020-09-01T10:12:30Z)
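The Wi-Mose entry above reports errors as P-MPJPE, i.e., the mean per-joint Euclidean error after aligning the predicted skeleton to the ground truth with a Procrustes (similarity) transform. Below is a minimal sketch of that metric as it is commonly defined; the 17-joint skeleton and the toy rotation, scale, and noise values are assumptions for illustration, and the papers' exact evaluation protocols may differ.

```python
# Minimal P-MPJPE sketch: align the prediction to the ground truth with a
# similarity transform (rotation + scale + translation), then average the
# per-joint Euclidean error. Joint count and toy values are assumptions.
import numpy as np

def p_mpjpe(pred, gt):
    """pred, gt: (J, 3) arrays of joint positions in meters."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    X, Y = pred - mu_p, gt - mu_g                 # center both skeletons
    U, S, Vt = np.linalg.svd(X.T @ Y)             # optimal rotation via SVD
    if np.linalg.det(U @ Vt) < 0:                 # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
    R = U @ Vt                                    # rotation applied as X @ R
    scale = S.sum() / (X ** 2).sum()              # optimal isotropic scale
    aligned = scale * X @ R + mu_g                # aligned prediction
    return np.linalg.norm(aligned - gt, axis=1).mean()

# Toy usage: a 17-joint skeleton, rotated, scaled, shifted, and noised.
theta = np.deg2rad(10)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
gt = np.random.rand(17, 3)
pred = 1.1 * gt @ Rz.T + 0.05 + 0.01 * np.random.randn(17, 3)
print(f"P-MPJPE: {p_mpjpe(pred, gt) * 1000:.1f} mm")
```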
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.