3D Human Mesh Construction Leveraging Wi-Fi
- URL: http://arxiv.org/abs/2210.10957v1
- Date: Thu, 20 Oct 2022 01:58:27 GMT
- Title: 3D Human Mesh Construction Leveraging Wi-Fi
- Authors: Yichao Wang and Jie Yang
- Abstract summary: Wi-Mesh is a WiFi vision-based 3D human mesh construction system.
It uses WiFi to visualize the shape and deformations of the human body.
- Score: 6.157977673335047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present Wi-Mesh, a WiFi vision-based 3D human mesh
construction system. Our system leverages the advances of WiFi to visualize the
shape and deformations of the human body for 3D mesh construction. In
particular, it leverages multiple transmitting and receiving antennas on WiFi
devices to estimate the two-dimensional angle of arrival (2D AoA) of the WiFi
signal reflections to enable WiFi devices to see the physical environment as we
humans do. It then extracts only the images of the human body from the physical
environment and leverages deep learning models to digitize the extracted human
body into a 3D mesh representation. Experimental evaluation in various
indoor environments shows that Wi-Mesh achieves an average vertex location
error of 2.81 cm and a joint position error of 2.4 cm, which is comparable to
systems that utilize specialized and dedicated hardware. The proposed system
has the advantage of reusing the WiFi devices that already exist in the
environment for potential mass adoption. It also works in non-line-of-sight
(NLoS) conditions, under poor lighting, and with baggy clothes, where
camera-based systems do not work well.
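The 2D AoA step the abstract describes can be illustrated with a textbook 2D MUSIC search over a simulated antenna array. The array geometry (a 4x4 grid), channel frequency, and signal model below are illustrative assumptions for the sketch, not Wi-Mesh's actual design:

```python
import numpy as np

# Hedged sketch: 2D angle-of-arrival (azimuth/elevation) estimation via
# 2D MUSIC on simulated CSI from a uniform rectangular antenna array.
FREQ = 5.32e9                      # a 5 GHz WiFi channel (assumed)
LAM = 3e8 / FREQ                   # wavelength
D = LAM / 2                        # half-wavelength element spacing
NX, NY = 4, 4                      # antenna grid size (illustrative)

def steering_vector(az, el):
    """Array response for azimuth az and elevation el (radians)."""
    kx = 2 * np.pi * D / LAM * np.cos(el) * np.cos(az)
    ky = 2 * np.pi * D / LAM * np.cos(el) * np.sin(az)
    ax = np.exp(1j * kx * np.arange(NX))
    ay = np.exp(1j * ky * np.arange(NY))
    return np.kron(ax, ay)         # flattened (NX*NY,) response

def music_2d(csi, n_paths, az_grid, el_grid):
    """csi: (n_antennas, n_snapshots) complex matrix; returns pseudospectrum."""
    R = csi @ csi.conj().T / csi.shape[1]     # sample covariance
    _, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = vecs[:, : csi.shape[0] - n_paths]    # noise subspace
    proj = En @ En.conj().T                   # noise-subspace projector
    P = np.empty((az_grid.size, el_grid.size))
    for i, az in enumerate(az_grid):
        for j, el in enumerate(el_grid):
            a = steering_vector(az, el)
            P[i, j] = 1.0 / np.real(a.conj() @ proj @ a)
    return P

# Simulate one reflection arriving from azimuth 30 deg, elevation 10 deg.
rng = np.random.default_rng(0)
a = steering_vector(np.deg2rad(30), np.deg2rad(10))[:, None]
s = rng.standard_normal((1, 200)) + 1j * rng.standard_normal((1, 200))
noise = 0.05 * (rng.standard_normal((16, 200)) + 1j * rng.standard_normal((16, 200)))
csi = a @ s + noise

az_grid = np.deg2rad(np.arange(-90, 91))
el_grid = np.deg2rad(np.arange(0, 61))
P = music_2d(csi, n_paths=1, az_grid=az_grid, el_grid=el_grid)
i, j = np.unravel_index(np.argmax(P), P.shape)
print(np.rad2deg(az_grid[i]), np.rad2deg(el_grid[j]))  # peak near (30, 10)
```

The pseudospectrum peaks at the (azimuth, elevation) pair of each dominant reflection; per the abstract, such angle images are what the downstream deep learning model digitizes into a mesh.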
Related papers
- Vision Reimagined: AI-Powered Breakthroughs in WiFi Indoor Imaging [4.236383297604285]
As an omnipresent signal, WiFi is a promising candidate for passive imaging and for synchronizing up-to-date information across all connected devices.
This is the first research work to consider WiFi indoor imaging as a multi-modal image generation task that converts the measured WiFi power into a high-resolution indoor image.
Our proposed WiFi-GEN network achieves a shape reconstruction accuracy that is 275% of that achieved by physical model-based methods.
arXiv Detail & Related papers (2024-01-09T02:20:30Z)
- Cloth2Body: Generating 3D Human Body Mesh from 2D Clothing [54.29207348918216]
Cloth2Body needs to address new and emerging challenges raised by the partial observation of the input and the high diversity of the output.
We propose an end-to-end framework that can accurately estimate 3D body mesh parameterized by pose and shape from a 2D clothing image.
As shown by experimental results, the proposed framework achieves state-of-the-art performance and can effectively recover natural and diverse 3D body meshes from 2D images.
arXiv Detail & Related papers (2023-09-28T06:18:38Z)
- DensePose From WiFi [86.61881052177228]
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
arXiv Detail & Related papers (2022-12-31T16:48:43Z)
- Robust Person Identification: A WiFi Vision-based Approach [7.80781386916681]
We propose a WiFi vision-based system, 3D-ID, for person Re-ID in 3D space.
Our system leverages the advances of WiFi and deep learning to help WiFi devices see, identify, and recognize people.
arXiv Detail & Related papers (2022-09-30T22:54:30Z)
- WiFi-based Spatiotemporal Human Action Perception [53.41825941088989]
An end-to-end WiFi signal neural network (SNN) is proposed to enable WiFi-only sensing in both line-of-sight and non-line-of-sight scenarios.
In particular, the 3D convolution module is able to explore the temporal continuity of WiFi signals, and the feature self-attention module can explicitly maintain dominant features.
arXiv Detail & Related papers (2022-06-20T16:03:45Z)
- 3D Human Pose Estimation for Free-form and Moving Activities Using WiFi [7.80781386916681]
GoPose is a 3D skeleton-based human pose estimation system that uses WiFi devices at home.
Our system does not require a user to wear or carry any sensors and can reuse the WiFi devices that already exist in a home environment for mass adoption.
arXiv Detail & Related papers (2022-04-16T21:58:24Z)
- Gait Recognition in the Wild with Dense 3D Representations and A Benchmark [86.68648536257588]
Existing studies for gait recognition are dominated by 2D representations like the silhouette or skeleton of the human body in constrained scenes.
This paper aims to explore dense 3D representations for gait recognition in the wild.
We build the first large-scale 3D representation-based gait recognition dataset, named Gait3D.
arXiv Detail & Related papers (2022-04-06T03:54:06Z)
- StereoPIFu: Depth Aware Clothed Human Digitization via Stereo Vision [54.920605385622274]
We propose StereoPIFu, which integrates the geometric constraints of stereo vision with implicit function representation of PIFu, to recover the 3D shape of the clothed human.
Compared with previous works, our StereoPIFu significantly improves the robustness, completeness, and accuracy of the clothed human reconstruction.
arXiv Detail & Related papers (2021-04-12T08:41:54Z)
- From Point to Space: 3D Moving Human Pose Estimation Using Commodity WiFi [21.30069619479767]
We present Wi-Mose, the first 3D moving human pose estimation system using commodity WiFi.
We fuse the amplitude and phase into Channel State Information (CSI) images which can provide both pose and position information.
Experimental results show that Wi-Mose can localize keypoints with 29.7 mm and 37.8 mm Procrustes analysis Mean Per Joint Position Error (P-MPJPE) in the Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) scenarios, respectively.
arXiv Detail & Related papers (2020-12-28T02:27:26Z)
- Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose [70.23652933572647]
We propose a novel graph convolutional neural network (GraphCNN)-based system that estimates the 3D coordinates of human mesh vertices directly from the 2D human pose.
We show that our Pose2Mesh outperforms the previous 3D human pose and mesh estimation methods on various benchmark datasets.
arXiv Detail & Related papers (2020-08-20T16:01:56Z)
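The P-MPJPE metric reported by Wi-Mose above is conventionally computed by aligning the predicted skeleton to the ground truth with an optimal similarity transform before averaging joint errors. A minimal sketch of that convention (standard orthogonal Procrustes via SVD; not necessarily the papers' exact evaluation code):

```python
import numpy as np

# Hedged sketch of Procrustes-aligned Mean Per Joint Position Error (P-MPJPE).
def p_mpjpe(pred, gt):
    """pred, gt: (J, 3) joint positions. Mean per-joint error after the
    optimal scale/rotation/translation alignment of pred onto gt."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    X, Y = pred - mu_p, gt - mu_g              # center both skeletons
    U, S, Vt = np.linalg.svd(X.T @ Y)          # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))         # enforce a proper rotation
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt                             # optimal rotation
    scale = (S * np.diag(D)).sum() / (X ** 2).sum()
    aligned = scale * X @ R + mu_g             # apply similarity transform
    return np.linalg.norm(aligned - gt, axis=1).mean()

# Usage: a prediction differing from ground truth only by a similarity
# transform (scale, rotation, translation) has near-zero P-MPJPE.
rng = np.random.default_rng(1)
gt = rng.standard_normal((17, 3))              # 17 joints (illustrative)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                              # make Q a proper rotation
pred = 0.7 * gt @ Q + np.array([0.1, -0.2, 0.3])
print(p_mpjpe(pred, gt))                       # ~0 up to floating point
```

Because alignment removes global pose, P-MPJPE isolates the skeleton's shape error, which is why it is reported separately from raw MPJPE.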
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.