Self-Supervised WiFi-Based Activity Recognition
- URL: http://arxiv.org/abs/2104.09072v1
- Date: Mon, 19 Apr 2021 06:40:21 GMT
- Title: Self-Supervised WiFi-Based Activity Recognition
- Authors: Hok-Shing Lau, Ryan McConville, Mohammud J. Bocus, Robert J.
Piechocki, Raul Santos-Rodriguez
- Abstract summary: We extract fine-grained physical layer information from WiFi devices for passive activity recognition in indoor environments.
We propose the use of self-supervised contrastive learning to improve activity recognition performance.
We observe a 17.7% increase in macro-averaged F1 score on the task of WiFi-based activity recognition.
- Score: 3.4473723375416188
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional approaches to activity recognition involve the use of wearable
sensors or cameras in order to recognise human activities. In this work, we
extract fine-grained physical layer information from WiFi devices for the
purpose of passive activity recognition in indoor environments. While such data
is ubiquitous, few approaches are designed to utilise large amounts of
unlabelled WiFi data. We propose the use of self-supervised contrastive
learning to improve activity recognition performance when using multiple views
of the transmitted WiFi signal captured by different synchronised receivers. We
conduct experiments where the transmitters and receivers are arranged in
different physical layouts so as to cover both Line-of-Sight (LoS) and non-LoS
(NLoS) conditions. We compare the proposed contrastive learning system with
non-contrastive systems and observe a 17.7% increase in macro-averaged F1 score
on the task of WiFi-based activity recognition, as well as significant
improvements in one- and few-shot learning scenarios.
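The multi-view contrastive idea described in the abstract can be sketched as an InfoNCE-style objective, where the same activity window captured by two synchronised receivers forms a positive pair and all other windows in the batch act as negatives. The following is a minimal NumPy sketch of that generic loss, not the paper's exact training code; the `temperature` value and embedding shapes are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss over two synchronised receiver views.

    z1, z2: (batch, dim) embeddings of the same WiFi windows as seen by two
    different receivers; row i of z1 and row i of z2 form a positive pair.
    """
    # L2-normalise so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature           # (batch, batch) similarity matrix
    # Softmax cross-entropy with the diagonal (matching windows) as targets
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

As a sanity check, embeddings of genuinely aligned views should yield a lower loss than embeddings paired with the wrong windows, since the loss rewards the matching receiver view being the most similar item in the batch.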
Related papers
- Neuro-Symbolic Fusion of Wi-Fi Sensing Data for Passive Radar with Inter-Modal Knowledge Transfer [10.388561519507471]
This paper introduces DeepProbHAR, a neuro-symbolic architecture for Wi-Fi sensing.
It provides initial evidence that Wi-Fi signals can differentiate between simple movements, such as leg or arm movements.
DeepProbHAR achieves results comparable to the state-of-the-art in human activity recognition.
arXiv Detail & Related papers (2024-07-01T08:43:27Z)
- Accurate Passive Radar via an Uncertainty-Aware Fusion of Wi-Fi Sensing Data [12.511211994847173]
Wi-Fi devices can effectively be used as passive radar systems that sense what happens in the surroundings and can even discern human activity.
We propose a principled architecture which employs Variational Auto-Encoders for estimating a latent distribution responsible for generating the data.
We verify that the fused data processed by different antennas of the same Wi-Fi receiver results in increased accuracy of human activity recognition.
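One common way to fuse per-antenna latent Gaussians, such as those a Variational Auto-Encoder estimates, is a precision-weighted product of experts, in which more certain antennas contribute more to the fused estimate. The sketch below illustrates that generic rule only; the paper's exact uncertainty-aware fusion may differ.

```python
import numpy as np

def fuse_gaussians(mus, variances):
    """Precision-weighted product-of-experts fusion of per-antenna latents.

    mus, variances: lists of (dim,) mean and variance vectors, one per
    antenna. Returns the fused mean and variance. Illustrative sketch,
    not the paper's exact fusion rule.
    """
    precisions = [1.0 / v for v in variances]          # inverse variances
    fused_var = 1.0 / np.sum(precisions, axis=0)       # combined uncertainty
    fused_mu = fused_var * np.sum(
        [p * m for p, m in zip(precisions, mus)], axis=0
    )
    return fused_mu, fused_var
```

With two antennas reporting means 0.0 (variance 0.1) and 1.0 (variance 1.0), the fused mean lands near the confident antenna's estimate and the fused variance is smaller than either input, which is the behaviour an uncertainty-aware fusion should exhibit.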
arXiv Detail & Related papers (2024-07-01T08:26:15Z)
- MaskFi: Unsupervised Learning of WiFi and Vision Representations for Multimodal Human Activity Recognition [32.89577715124546]
We propose a novel unsupervised multimodal HAR solution, MaskFi, that leverages only unlabeled video and WiFi activity data for model training.
Benefiting from our unsupervised learning procedure, the network requires only a small amount of annotated data for finetuning and can adapt to the new environment with better performance.
arXiv Detail & Related papers (2024-02-29T15:27:55Z)
- GaitFi: Robust Device-Free Human Identification via WiFi and Vision Multimodal Learning [33.89340087471202]
We propose a novel multimodal gait recognition method, namely GaitFi, which leverages WiFi signals and videos for human identification.
In GaitFi, Channel State Information (CSI) that reflects the multi-path propagation of WiFi is collected to capture human gaits, while videos are captured by cameras.
To learn robust gait information, we propose a Lightweight Residual Convolution Network (LRCN) as the backbone network, and further propose the two-stream GaitFi.
Real-world experiments demonstrate that GaitFi outperforms state-of-the-art gait recognition methods.
arXiv Detail & Related papers (2022-08-30T15:07:43Z)
- A Wireless-Vision Dataset for Privacy Preserving Human Activity Recognition [53.41825941088989]
A new WiFi-based and video-based neural network (WiNN) is proposed to improve the robustness of activity recognition.
Our results show that the WiVi dataset satisfies the primary demand, and all three branches in the proposed pipeline maintain more than 80% activity recognition accuracy.
arXiv Detail & Related papers (2022-05-24T10:49:11Z)
- GraSens: A Gabor Residual Anti-aliasing Sensing Framework for Action Recognition using WiFi [52.530330427538885]
WiFi-based human action recognition (HAR) has been regarded as a promising solution in applications such as smart living and remote monitoring.
We propose an end-to-end Gabor residual anti-aliasing sensing network (GraSens) to directly recognize the actions using the WiFi signals from the wireless devices in diverse scenarios.
arXiv Detail & Related papers (2022-05-24T10:20:16Z)
- Mobile Behavioral Biometrics for Passive Authentication [65.94403066225384]
This work carries out a comparative analysis of unimodal and multimodal behavioral biometric traits.
Experiments are performed over HuMIdb, one of the largest and most comprehensive freely available mobile user interaction databases.
In our experiments, the most discriminative background sensor is the magnetometer, whereas among touch tasks the best results are achieved with keystroke.
arXiv Detail & Related papers (2022-03-14T17:05:59Z)
- Moving Object Classification with a Sub-6 GHz Massive MIMO Array using Real Data [64.48836187884325]
Classification between different activities in an indoor environment using wireless signals is an emerging technology for various applications.
In this paper, we analyze classification of moving objects by employing machine learning on real data from a massive multi-input-multi-output (MIMO) system in an indoor environment.
arXiv Detail & Related papers (2021-02-09T15:48:35Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in vision-sensor modality (videos)
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Vision Meets Wireless Positioning: Effective Person Re-identification with Recurrent Context Propagation [120.18969251405485]
Existing person re-identification methods rely on the visual sensor to capture the pedestrians.
Mobile phone can be sensed by WiFi and cellular networks in the form of a wireless positioning signal.
We propose a novel recurrent context propagation module that enables information to propagate between visual data and wireless positioning data.
arXiv Detail & Related papers (2020-08-10T14:19:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.