DensePose From WiFi
- URL: http://arxiv.org/abs/2301.00250v1
- Date: Sat, 31 Dec 2022 16:48:43 GMT
- Title: DensePose From WiFi
- Authors: Jiaqi Geng, Dong Huang, Fernando De la Torre
- Abstract summary: We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
- Score: 86.61881052177228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in computer vision and machine learning techniques have led to
significant development in 2D and 3D human pose estimation from RGB cameras,
LiDAR, and radars. However, human pose estimation from images is adversely
affected by occlusion and lighting, which are common in many scenarios of
interest. Radar and LiDAR technologies, on the other hand, need specialized
hardware that is expensive and power-intensive. Furthermore, placing these
sensors in non-public areas raises significant privacy concerns. To address
these limitations, recent research has explored the use of WiFi antennas (1D
sensors) for body segmentation and key-point body detection. This paper further
expands on the use of the WiFi signal in combination with deep learning
architectures, commonly used in computer vision, to estimate dense human pose
correspondence. We developed a deep neural network that maps the phase and
amplitude of WiFi signals to UV coordinates within 24 human regions. The
results of the study reveal that our model can estimate the dense pose of
multiple subjects, with comparable performance to image-based approaches, by
utilizing WiFi signals as the only input. This paves the way for low-cost,
broadly accessible, and privacy-preserving algorithms for human sensing.
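The summary does not spell out the architecture, so for intuition only, here is a minimal PyTorch sketch of the kind of mapping described: WiFi amplitude and phase tensors in, per-pixel body-part labels and UV coordinates for 24 regions out. Every shape, channel count, and layer choice below is an illustrative assumption, not the authors' network.

```python
import torch
import torch.nn as nn

class WiFiDensePoseSketch(nn.Module):
    """Illustrative encoder-decoder from CSI amplitude/phase to DensePose-style outputs."""
    def __init__(self, n_parts: int = 24):
        super().__init__()
        # Encode stacked amplitude + phase "images" (assumed 3 + 3 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
        )
        # Decode back to a spatial feature map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Heads: part segmentation (24 parts + background) and per-part U, V maps.
        self.part_head = nn.Conv2d(64, n_parts + 1, 1)
        self.uv_head = nn.Conv2d(64, 2 * n_parts, 1)

    def forward(self, csi: torch.Tensor):
        feats = self.decoder(self.encoder(csi))
        return self.part_head(feats), self.uv_head(feats)

# Example: batch of 4 samples, 3 amplitude + 3 phase channels, 64x64 lattice.
model = WiFiDensePoseSketch()
parts, uv = model(torch.randn(4, 6, 64, 64))
print(parts.shape, uv.shape)  # torch.Size([4, 25, 64, 64]) torch.Size([4, 48, 64, 64])
```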
Related papers
- Neuro-Symbolic Fusion of Wi-Fi Sensing Data for Passive Radar with Inter-Modal Knowledge Transfer [10.388561519507471]
This paper introduces DeepProbHAR, a neuro-symbolic architecture for Wi-Fi sensing.
It provides initial evidence that Wi-Fi signals can differentiate between simple movements, such as those of the legs or arms.
DeepProbHAR achieves results comparable to the state-of-the-art in human activity recognition.
arXiv Detail & Related papers (2024-07-01T08:43:27Z)
- Cross Vision-RF Gait Re-identification with Low-cost RGB-D Cameras and mmWave Radars [15.662787088335618]
This work studies the problem of cross-modal human re-identification (ReID).
We propose a first-of-its-kind vision-RF system for simultaneous cross-modal multi-person ReID.
Our proposed system achieves 92.5% top-1 accuracy and 97.5% top-5 accuracy on a pool of 56 volunteers.
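Top-1/top-5 numbers like these are standard retrieval metrics in ReID. A generic sketch of the computation (a hypothetical helper, not the paper's evaluation code):

```python
import torch

def topk_accuracy(sim: torch.Tensor, labels_q: torch.Tensor,
                  labels_g: torch.Tensor, k: int) -> float:
    """sim: (num_queries, num_gallery) similarity matrix. A query counts as a
    hit if any of its k most similar gallery entries shares its identity."""
    topk = sim.topk(k, dim=1).indices                      # (Q, k) gallery indices
    hits = (labels_g[topk] == labels_q[:, None]).any(dim=1)
    return hits.float().mean().item()

# Example: 100 queries against a gallery of 500, 56 identities.
sim = torch.randn(100, 500)
lq, lg = torch.randint(0, 56, (100,)), torch.randint(0, 56, (500,))
print(topk_accuracy(sim, lq, lg, k=5))
```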
arXiv Detail & Related papers (2022-07-16T10:34:25Z)
- Semi-Perspective Decoupled Heatmaps for 3D Robot Pose Estimation from Depth Maps [66.24554680709417]
Knowing the exact 3D location of workers and robots in a collaborative environment enables several real-world applications.
We propose a non-invasive framework based on depth devices and deep neural networks to estimate the 3D pose of robots from an external camera.
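The paper's semi-perspective decoupled formulation is more involved, but heatmap-based pose estimation generally decodes each joint as the peak of a per-joint heatmap. A minimal generic sketch of that decoding step (not the paper's variant):

```python
import torch

def decode_heatmaps(hm: torch.Tensor) -> torch.Tensor:
    """Return the (x, y) pixel location of the peak of each joint heatmap.
    hm: (J, H, W), one heatmap per joint."""
    J, H, W = hm.shape
    idx = hm.view(J, -1).argmax(dim=1)       # flat index of each peak
    xs = idx % W
    ys = idx.div(W, rounding_mode="floor")
    return torch.stack([xs, ys], dim=1)      # (J, 2) as (x, y)

coords = decode_heatmaps(torch.rand(13, 64, 48))  # e.g., 13 robot joints
```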
arXiv Detail & Related papers (2022-07-06T08:52:12Z)
- WiFi-based Spatiotemporal Human Action Perception [53.41825941088989]
An end-to-end WiFi signal neural network (SNN) is proposed to enable WiFi-only sensing in both line-of-sight and non-line-of-sight scenarios.
In particular, the 3D convolution module explores the temporal continuity of WiFi signals, and the feature self-attention module explicitly maintains dominant features.
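A minimal sketch of those two ingredients, a 3D convolution over a temporal stack of CSI frames followed by self-attention over the resulting features. Shapes and hyperparameters are illustrative assumptions, not the paper's:

```python
import torch
import torch.nn as nn

class SpatiotemporalWiFiSketch(nn.Module):
    """3D convolution over stacked CSI frames + self-attention on features."""
    def __init__(self, dim: int = 64):
        super().__init__()
        # The 3D conv mixes the time axis with the antenna/subcarrier axes,
        # exploiting the temporal continuity of consecutive WiFi frames.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((8, 8, 8)),
        )
        # Self-attention lets dominant feature tokens reinforce each other.
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4,
                                          batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, time, antennas, subcarriers)
        f = self.conv3d(x)                     # (B, dim, 8, 8, 8)
        tokens = f.flatten(2).transpose(1, 2)  # (B, 512, dim)
        out, _ = self.attn(tokens, tokens, tokens)
        return out.mean(dim=1)                 # one pooled feature per sample

feat = SpatiotemporalWiFiSketch()(torch.randn(2, 1, 16, 9, 30))
print(feat.shape)  # torch.Size([2, 64])
```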
arXiv Detail & Related papers (2022-06-20T16:03:45Z)
- A Wireless-Vision Dataset for Privacy Preserving Human Activity Recognition [53.41825941088989]
A new WiFi-based and video-based neural network (WiNN) is proposed to improve the robustness of activity recognition.
Our results show that the WiVi dataset satisfies the primary demand and that all three branches of the proposed pipeline maintain more than 80% activity recognition accuracy.
arXiv Detail & Related papers (2022-05-24T10:49:11Z)
- Analyzing General-Purpose Deep-Learning Detection and Segmentation Models with Images from a Lidar as a Camera Sensor [0.06554326244334865]
This work explores the potential of general-purpose DL perception algorithms for processing image-like outputs of advanced lidar sensors.
Rather than processing the three-dimensional point cloud data, this is, to the best of our knowledge, the first work to focus on the sensor's low-resolution images with a 360° field of view.
We show that with adequate preprocessing, general-purpose DL models can process these images, opening the door to their use in a wider range of environmental conditions.
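As one concrete example of "adequate preprocessing", a single-channel lidar range image can be normalized into the 8-bit 3-channel format that RGB-trained detectors expect. This is a hypothetical helper; the paper's exact preprocessing may differ:

```python
import numpy as np

def range_image_to_rgb(rng: np.ndarray) -> np.ndarray:
    """Map a lidar range image (meters) to uint8 RGB for an off-the-shelf
    detector by clipping far outliers and replicating to three channels."""
    r = np.clip(rng, 0, np.percentile(rng, 99))          # suppress far outliers
    r = (255 * (r / max(float(r.max()), 1e-6))).astype(np.uint8)
    return np.stack([r, r, r], axis=-1)                  # (H, W, 3)
```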
arXiv Detail & Related papers (2022-03-08T13:14:43Z)
- Domain and Modality Gaps for LiDAR-based Person Detection on Mobile Robots [91.01747068273666]
This paper studies existing LiDAR-based person detectors with a particular focus on mobile robot scenarios.
Experiments revolve around the domain gap between driving and mobile robot scenarios, as well as the modality gap between 3D and 2D LiDAR sensors.
Results provide practical insights into LiDAR-based person detection and facilitate informed decisions for relevant mobile robot designs and applications.
arXiv Detail & Related papers (2021-06-21T16:35:49Z)
- All-Weather Object Recognition Using Radar and Infrared Sensing [1.7513645771137178]
This thesis explores new sensing developments based on long wave polarised infrared (IR) imagery and imaging radar to recognise objects.
First, we developed a methodology based on Stokes parameters using polarised infrared data to recognise vehicles using deep neural networks.
Second, we explored the potential of using only the power spectrum captured by low-THz radar sensors to perform object recognition in a controlled scenario.
Last, we created a new large-scale dataset in the "wild" covering many different weather scenarios, demonstrating the robustness of radar for detecting vehicles in adverse weather.
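The Stokes parameters for linear polarization are computed from intensity images captured at four polarizer angles via standard polarimetry relations. A short sketch of that preprocessing step (the thesis's exact acquisition pipeline may differ):

```python
import numpy as np

def stokes_from_polarized(i0, i45, i90, i135):
    """Linear Stokes parameter images from four polarizer-angle intensities."""
    s0 = i0 + i90      # total intensity
    s1 = i0 - i90      # horizontal vs. vertical polarization
    s2 = i45 - i135    # diagonal polarization
    # Degree of linear polarization, a common derived feature map.
    dolp = np.sqrt(s1**2 + s2**2) / np.clip(s0, 1e-6, None)
    return s0, s1, s2, dolp
```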
arXiv Detail & Related papers (2020-10-30T14:16:39Z)
- Perceiving Humans: from Monocular 3D Localization to Social Distancing [93.03056743850141]
We present a new cost-effective vision-based method that perceives humans' locations in 3D and their body orientation from a single image.
We show that it is possible to rethink the concept of "social distancing" as a form of social interaction in contrast to a simple location-based rule.
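The "simple location-based rule" being contrasted is easy to make concrete: given the 3D positions the method estimates, flag every pair of people closer than a threshold. A hypothetical baseline sketch, not the authors' code:

```python
import numpy as np

def flag_close_pairs(positions: np.ndarray, threshold_m: float = 2.0):
    """positions: (N, 3) estimated 3D locations; return index pairs closer
    than threshold_m meters (the naive location-based distancing rule)."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    i, j = np.triu_indices(len(positions), k=1)
    return [(a, b) for a, b in zip(i, j) if d[a, b] < threshold_m]
```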
arXiv Detail & Related papers (2020-09-01T10:12:30Z)
- A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection [0.0]
This research aims to enhance current 2D object detection networks by fusing camera data and projected sparse radar data in the network layers.
The proposed CameraRadarFusionNet (CRF-Net) automatically learns at which level the fusion of the sensor data is most beneficial for the detection result.
BlackIn, a training strategy inspired by Dropout, focuses the learning on a specific sensor type.
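The summary suggests BlackIn randomly suppresses one sensor's input during training, Dropout-style, so the network cannot ignore the other modality. A hedged sketch of that idea (the drop probability and per-sample masking are assumptions, not the paper's settings):

```python
import torch

def blackin_batch(camera: torch.Tensor, radar: torch.Tensor, p_drop: float = 0.2):
    """Zero the camera channels for a random fraction of training samples,
    pushing the fusion network to extract more signal from the radar branch."""
    keep = (torch.rand(camera.shape[0], 1, 1, 1) > p_drop).float()
    return camera * keep, radar

cam, rad = blackin_batch(torch.randn(8, 3, 64, 64), torch.randn(8, 2, 64, 64))
```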
arXiv Detail & Related papers (2020-05-15T09:28:01Z)