A Wireless-Vision Dataset for Privacy Preserving Human Activity
Recognition
- URL: http://arxiv.org/abs/2205.11962v1
- Date: Tue, 24 May 2022 10:49:11 GMT
- Title: A Wireless-Vision Dataset for Privacy Preserving Human Activity
Recognition
- Authors: Yanling Hao, Zhiyuan Shi, Yuanwei Liu
- Abstract summary: A new WiFi-based and video-based neural network (WiNN) is proposed to improve the robustness of activity recognition.
Our results show that the WiVi dataset satisfies the primary demand and all three branches of the proposed pipeline keep more than $80\%$ activity recognition accuracy.
- Score: 53.41825941088989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human Activity Recognition (HAR) has recently received remarkable attention
in numerous applications such as assisted living and remote monitoring.
Existing solutions based on sensors and vision technologies have achieved
progress but still suffer from considerable limitations in their environmental
requirements. Wireless signals, such as WiFi-based sensing, have emerged as a
new paradigm since they are convenient and not restricted by the environment.
In this paper, a new WiFi-based and video-based neural network (WiNN) is
proposed to improve the robustness of activity recognition, where the
synchronized video serves as a supplement to the wireless data. Moreover, a
wireless-vision benchmark (WiVi) is collected for the recognition of 9 action
classes under three different visual conditions: scenes without occlusion,
with partial occlusion, and with full occlusion. Both a classical machine
learning method, the support vector machine (SVM), and deep learning methods
are used to verify the accuracy of the dataset. Our results show that the WiVi
dataset satisfies the primary demand and all three branches of the proposed
pipeline keep more than $80\%$ activity recognition accuracy over action
segment lengths from 1 s to 3 s. In particular, WiNN is the most robust method
across all actions and all three segment lengths.
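As a rough illustration of the evaluation setup described above, the sketch below segments a WiFi CSI amplitude stream into fixed-length action windows (as in the paper's 1 s to 3 s segmentation) and classifies each window. The sampling rate, the mean/variance features, and the nearest-centroid classifier are all assumptions standing in for the paper's SVM and deep-learning branches, not the authors' actual pipeline.

```python
# Toy sketch (not the paper's code): window a CSI amplitude stream and
# classify each window. Sampling rate, features, and the nearest-centroid
# classifier are illustrative assumptions.
from statistics import mean, pvariance

SAMPLE_RATE = 100  # assumed CSI packets per second

def segment(stream, seconds):
    """Split a 1-D CSI amplitude stream into non-overlapping windows."""
    size = int(seconds * SAMPLE_RATE)
    return [stream[i:i + size] for i in range(0, len(stream) - size + 1, size)]

def features(window):
    """Toy per-window features: mean amplitude and variance."""
    return (mean(window), pvariance(window))

def nearest_centroid(feat, centroids):
    """Assign the window to the closest class centroid (SVM stand-in)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(feat, c[1]))
    return min(centroids, key=dist)[0]

# Two synthetic "actions": still (low variance) vs. walking (high variance).
still = [1.0, 1.1] * (SAMPLE_RATE // 2)
walk = [0.0, 2.0] * (SAMPLE_RATE // 2)
centroids = [("still", features(still)), ("walk", features(walk))]

stream = still + walk + still
labels = [nearest_centroid(features(w), centroids) for w in segment(stream, 1)]
print(labels)  # → ['still', 'walk', 'still']
```

In the paper's setting, the same windowing would be applied at 1 s, 2 s, and 3 s segment lengths, with the real SVM or neural branches replacing the toy classifier.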
Related papers
- MaskFi: Unsupervised Learning of WiFi and Vision Representations for
Multimodal Human Activity Recognition [32.89577715124546]
We propose a novel unsupervised multimodal HAR solution, MaskFi, that leverages only unlabeled video and WiFi activity data for model training.
Benefiting from our unsupervised learning procedure, the network requires only a small amount of annotated data for fine-tuning and can adapt to new environments with better performance.
arXiv Detail & Related papers (2024-02-29T15:27:55Z) - SAWEC: Sensing-Assisted Wireless Edge Computing [7.115682353265054]
We propose a novel Sensing-Assisted Wireless Edge Computing (SAWEC) paradigm to address this issue.
We leverage wireless sensing techniques to estimate the location of objects in the environment and obtain insights about the environment dynamics.
Experimental results show that SAWEC reduces both the channel occupation and end-to-end latency by more than 90%.
arXiv Detail & Related papers (2024-02-15T15:39:46Z) - Physical-Layer Semantic-Aware Network for Zero-Shot Wireless Sensing [74.12670841657038]
Device-free wireless sensing has recently attracted significant interest due to its potential to support a wide range of immersive human-machine interactive applications.
Data heterogeneity in wireless signals and data privacy regulation of distributed sensing have been considered as the major challenges that hinder the wide applications of wireless sensing in large area networking systems.
We propose a novel zero-shot wireless sensing solution that allows models constructed in one or a limited number of locations to be directly transferred to other locations without any labeled data.
arXiv Detail & Related papers (2023-12-08T13:50:30Z) - Contactless Human Activity Recognition using Deep Learning with Flexible
and Scalable Software Defined Radio [1.3106429146573144]
This study investigates the use of Wi-Fi channel state information (CSI) as a novel method of ambient sensing.
These methods avoid additional costly hardware required for vision-based systems, which are privacy-intrusive.
This study presents a Wi-Fi CSI-based HAR system that assesses and contrasts deep learning approaches.
arXiv Detail & Related papers (2023-04-18T10:20:14Z) - AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z) - WiFi-based Spatiotemporal Human Action Perception [53.41825941088989]
An end-to-end WiFi signal neural network (SNN) is proposed to enable WiFi-only sensing in both line-of-sight and non-line-of-sight scenarios.
In particular, the 3D convolution module can explore the temporal continuity of WiFi signals, and the feature self-attention module can explicitly maintain dominant features.
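The summary above describes 3D convolution over WiFi signals. The sketch below shows one plausible arrangement of CSI packets into a 3-D clip (time x subcarrier x antenna) followed by a trivial uniform temporal convolution; the shapes and the averaging kernel are illustrative assumptions, not the SNN paper's model.

```python
# Hedged sketch: arrange flat CSI packets into a 3-D clip so a temporal
# convolution can exploit continuity over time. Shapes are assumptions.
SUBCARRIERS, ANTENNAS, FRAMES = 4, 2, 6

# Flat stream: FRAMES packets, each with SUBCARRIERS * ANTENNAS amplitudes.
flat = [float(i % 7) for i in range(FRAMES * SUBCARRIERS * ANTENNAS)]

# Reshape into clip[t][s][a].
clip = [[[flat[(t * SUBCARRIERS + s) * ANTENNAS + a]
          for a in range(ANTENNAS)]
         for s in range(SUBCARRIERS)]
        for t in range(FRAMES)]

def temporal_smooth(clip):
    """Uniform temporal convolution: average each value with its
    neighbours at t-1 and t+1, keeping spatial position fixed."""
    T = len(clip)
    return [[[sum(clip[u][s][a] for u in range(max(0, t - 1), min(T, t + 2)))
              / (min(T, t + 2) - max(0, t - 1))
              for a in range(ANTENNAS)]
             for s in range(SUBCARRIERS)]
            for t in range(T)]

out = temporal_smooth(clip)
print(len(out), len(out[0]), len(out[0][0]))  # → 6 4 2
```

A learned 3D kernel would replace the uniform average, but the data layout idea (stacking per-packet CSI into a time-indexed volume) is the same.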
arXiv Detail & Related papers (2022-06-20T16:03:45Z) - GraSens: A Gabor Residual Anti-aliasing Sensing Framework for Action
Recognition using WiFi [52.530330427538885]
WiFi-based human action recognition (HAR) has been regarded as a promising solution in applications such as smart living and remote monitoring.
We propose an end-to-end Gabor residual anti-aliasing sensing network (GraSens) to directly recognize the actions using the WiFi signals from the wireless devices in diverse scenarios.
arXiv Detail & Related papers (2022-05-24T10:20:16Z) - Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified, learning-based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
arXiv Detail & Related papers (2021-04-23T17:59:28Z) - YOLOpeds: Efficient Real-Time Single-Shot Pedestrian Detection for Smart
Camera Applications [2.588973722689844]
This work addresses the challenge of achieving a good trade-off between accuracy and speed for efficient deployment of deep-learning-based pedestrian detection in smart camera applications.
A computationally efficient architecture based on separable convolutions is introduced, integrating dense connections across layers and multi-scale feature fusion.
Overall, YOLOpeds provides real-time sustained operation of over 30 frames per second with detection rates of around 86%, outperforming existing deep learning models.
arXiv Detail & Related papers (2020-07-27T09:50:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.