AutoSen: Improving Automatic WiFi Human Sensing Through Cross-Modal Autoencoder
- URL: http://arxiv.org/abs/2401.05440v1
- Date: Mon, 8 Jan 2024 19:50:02 GMT
- Title: AutoSen: Improving Automatic WiFi Human Sensing Through Cross-Modal Autoencoder
- Authors: Qian Gao, Yanling Hao, Yuanwei Liu
- Abstract summary: WiFi human sensing is highly regarded for its low-cost and privacy advantages in recognizing human activities.
Traditional cross-modal methods, aimed at enabling self-supervised learning without labeled data, struggle to extract meaningful features from amplitude-phase combinations.
We introduce AutoSen, an innovative automatic WiFi sensing solution that departs from conventional approaches.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: WiFi human sensing is highly regarded for its low-cost and privacy advantages
in recognizing human activities. However, its effectiveness is largely confined
to controlled, single-user, line-of-sight settings, limited by data collection
complexities and the scarcity of labeled datasets. Traditional cross-modal
methods, aimed at mitigating these limitations by enabling self-supervised
learning without labeled data, struggle to extract meaningful features from
amplitude-phase combinations. In response, we introduce AutoSen, an innovative
automatic WiFi sensing solution that departs from conventional approaches.
AutoSen establishes a direct link between amplitude and phase through automated
cross-modal autoencoder learning. This autoencoder efficiently extracts
valuable features from unlabeled CSI data, encompassing amplitude and phase
information while eliminating their respective unique noises. These features
are then leveraged for specific tasks using few-shot learning techniques.
AutoSen's performance is rigorously evaluated on a publicly accessible
benchmark dataset, demonstrating its exceptional capabilities in automatic WiFi
sensing through the extraction of comprehensive cross-modal features.
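The abstract describes learning a direct link between CSI amplitude and phase via a cross-modal autoencoder trained on unlabeled data. The sketch below is a deliberately minimal, linear illustration of that idea, not the paper's actual architecture: an encoder maps toy "amplitude" features into a shared latent space, a decoder reconstructs the corresponding "phase" features, and both are trained with plain gradient descent on the cross-modal reconstruction error. All dimensions, the synthetic data, and the linear layers are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, amp_dim, phase_dim, latent_dim = 256, 16, 16, 4

# Toy unlabeled "CSI": phase is a noisy linear function of amplitude.
# Real CSI amplitude/phase have modality-specific noise; this stand-in
# only serves to exercise the cross-modal reconstruction objective.
amplitude = rng.normal(size=(n_samples, amp_dim))
true_map = rng.normal(size=(amp_dim, phase_dim)) / np.sqrt(amp_dim)
phase = amplitude @ true_map + 0.05 * rng.normal(size=(n_samples, phase_dim))

# Linear encoder (amplitude -> latent) and decoder (latent -> phase).
W_enc = rng.normal(size=(amp_dim, latent_dim)) * 0.5
W_dec = rng.normal(size=(latent_dim, phase_dim)) * 0.5

init_loss = np.mean((amplitude @ W_enc @ W_dec - phase) ** 2)

lr = 0.2
for step in range(1000):
    z = amplitude @ W_enc            # encode amplitude into shared latent
    recon = z @ W_dec                # decode latent into phase estimate
    err = recon - phase
    # Gradients of the mean-squared cross-modal reconstruction loss.
    g_dec = z.T @ err * (2.0 / err.size)
    g_enc = amplitude.T @ (err @ W_dec.T) * (2.0 / err.size)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

final_loss = np.mean((amplitude @ W_enc @ W_dec - phase) ** 2)
print(f"cross-modal reconstruction MSE: {init_loss:.3f} -> {final_loss:.3f}")
```

The bottleneck (`latent_dim` smaller than either modality's dimension) is what forces the latent code to keep only features shared across amplitude and phase, discarding modality-specific noise; in the paper this role is played by a learned nonlinear autoencoder, and the resulting features then feed a few-shot downstream classifier.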
Related papers
- Semantic Communication for Cooperative Perception using HARQ [51.148203799109304]
We leverage an importance map to distill critical semantic information, introducing a cooperative perception semantic communication framework.
To counter the challenges posed by time-varying multipath fading, our approach incorporates orthogonal frequency-division multiplexing (OFDM) along with channel estimation and equalization strategies.
We introduce a novel semantic error detection method that is integrated with our semantic communication framework in the spirit of hybrid automatic repeat request (HARQ).
arXiv Detail & Related papers (2024-08-29T08:53:26Z)
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
- Physical-Layer Semantic-Aware Network for Zero-Shot Wireless Sensing [74.12670841657038]
Device-free wireless sensing has recently attracted significant interest due to its potential to support a wide range of immersive human-machine interactive applications.
Data heterogeneity in wireless signals and data privacy regulation of distributed sensing have been considered as the major challenges that hinder the wide applications of wireless sensing in large area networking systems.
We propose a novel zero-shot wireless sensing solution that allows models constructed in one or a limited number of locations to be directly transferred to other locations without any labeled data.
arXiv Detail & Related papers (2023-12-08T13:50:30Z)
- AutoFed: Heterogeneity-Aware Federated Multimodal Learning for Robust Autonomous Driving [15.486799633600423]
AutoFed is a framework to fully exploit multimodal sensory data on autonomous vehicles.
We propose a novel model leveraging pseudo-labeling to avoid mistakenly treating unlabeled objects as the background.
We also propose an autoencoder-based data imputation method to fill missing data modality.
arXiv Detail & Related papers (2023-02-17T01:31:53Z)
- AutoFi: Towards Automatic WiFi Human Sensing via Geometric Self-Supervised Learning [30.451116905056573]
We propose AutoFi, an automatic WiFi sensing model based on a novel geometric self-supervised learning algorithm.
AutoFi fully utilizes unlabeled, low-quality CSI samples that are captured randomly, and then transfers the knowledge to specific tasks defined by users.
arXiv Detail & Related papers (2022-04-12T04:55:17Z)
- ITSA: An Information-Theoretic Approach to Automatic Shortcut Avoidance and Domain Generalization in Stereo Matching Networks [14.306250516592305]
We show that learning of feature representations in stereo matching networks is heavily influenced by synthetic data artefacts.
We propose an Information-Theoretic Shortcut Avoidance (ITSA) approach to automatically restrict shortcut-related information from being encoded into the feature representations.
We show that using this method, state-of-the-art stereo matching networks that are trained purely on synthetic data can effectively generalize to challenging and previously unseen real data scenarios.
arXiv Detail & Related papers (2022-01-06T22:03:50Z)
- Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
arXiv Detail & Related papers (2022-01-03T16:49:33Z)
- An Automated Machine Learning (AutoML) Method for Driving Distraction Detection Based on Lane-Keeping Performance [2.3951613028271397]
This study proposes a domain-specific automated machine learning (AutoML) method to self-learn the optimal models to detect distraction.
The proposed AutoGBM method is found to be reliable and promising for predicting phone-related driving distractions.
The proposed AutoGBM not only produces better performance with fewer features but also provides data-driven insights about system design.
arXiv Detail & Related papers (2021-03-10T12:37:18Z)
- Universal Embeddings for Spatio-Temporal Tagging of Self-Driving Logs [72.67604044776662]
We tackle the problem of spatio-temporal tagging of self-driving scenes from raw sensor data.
Our approach learns a universal embedding for all tags, enabling efficient tagging of many attributes and faster learning of new attributes with limited data.
arXiv Detail & Related papers (2020-11-12T02:18:16Z)
- Federated Self-Supervised Learning of Multi-Sensor Representations for Embedded Intelligence [8.110949636804772]
Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models.
We propose a self-supervised approach, termed scalogram-signal correspondence learning, based on the wavelet transform to learn useful representations from unlabeled sensor inputs.
We extensively assess the quality of learned features with our multi-view strategy on diverse public datasets, achieving strong performance in all domains.
arXiv Detail & Related papers (2020-07-25T21:59:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.