Context-Aware Predictive Coding: A Representation Learning Framework for WiFi Sensing
- URL: http://arxiv.org/abs/2410.01825v1
- Date: Mon, 16 Sep 2024 17:59:49 GMT
- Title: Context-Aware Predictive Coding: A Representation Learning Framework for WiFi Sensing
- Authors: B. Barahimi, H. Tabassum, M. Omer, O. Waqar
- Abstract summary: WiFi sensing is an emerging technology that utilizes wireless signals for various sensing applications.
In this paper, we introduce a novel SSL framework called Context-Aware Predictive Coding (CAPC).
CAPC effectively learns from unlabelled data and adapts to diverse environments.
Our evaluations demonstrate that CAPC not only outperforms other SSL methods and supervised approaches, but also achieves superior generalization capabilities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: WiFi sensing is an emerging technology that utilizes wireless signals for various sensing applications. However, the reliance on supervised learning, the scarcity of labelled data, and the incomprehensible channel state information (CSI) pose significant challenges. These issues affect deep learning models' performance and generalization across different environments. Consequently, self-supervised learning (SSL) is emerging as a promising strategy to extract meaningful data representations with minimal reliance on labelled samples. In this paper, we introduce a novel SSL framework called Context-Aware Predictive Coding (CAPC), which effectively learns from unlabelled data and adapts to diverse environments. CAPC integrates elements of Contrastive Predictive Coding (CPC) and the augmentation-based SSL method, Barlow Twins, promoting temporal and contextual consistency in data representations. This hybrid approach captures essential temporal information in CSI, crucial for tasks like human activity recognition (HAR), and ensures robustness against data distortions. Additionally, we propose a unique augmentation, employing both uplink and downlink CSI to isolate free space propagation effects and minimize the impact of electronic distortions of the transceiver. Our evaluations demonstrate that CAPC not only outperforms other SSL methods and supervised approaches, but also achieves superior generalization capabilities. Specifically, CAPC requires fewer labelled samples while significantly outperforming supervised learning and surpassing SSL baselines. Furthermore, our transfer learning studies on an unseen dataset with a different HAR task and environment showcase an accuracy improvement of 1.8 percent over other SSL baselines and 24.7 percent over supervised learning, emphasizing its exceptional cross-domain adaptability.
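The abstract describes CAPC as a hybrid of CPC's temporal prediction and Barlow Twins' redundancy reduction, with uplink and downlink CSI as a natural pair of views. Below is a minimal PyTorch sketch of how such an objective could be wired together; the module sizes, the one-step prediction horizon, and the mean-pooling of latents are illustrative assumptions, not the authors' implementation.
```python
# Rough sketch (not the authors' code) of a CPC + Barlow Twins hybrid
# objective along the lines CAPC describes. Shapes and the use of
# uplink/downlink CSI as the two "views" are assumptions from the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a CSI frame (subcarrier amplitudes) to a latent vector z_t."""
    def __init__(self, n_subcarriers=64, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_subcarriers, 256), nn.ReLU(),
                                 nn.Linear(256, dim))
    def forward(self, x):           # x: (batch, time, subcarriers)
        return self.net(x)          # -> (batch, time, dim)

def info_nce(pred, target):
    """CPC-style InfoNCE: each predicted future latent should match its
    own target against the other samples in the batch as negatives."""
    logits = pred @ target.t()                       # (B, B) similarities
    labels = torch.arange(len(pred))
    return F.cross_entropy(logits, labels)

def barlow_twins(z1, z2, lam=5e-3):
    """Push the cross-correlation matrix of two views toward identity."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.t() @ z2) / len(z1)                      # (dim, dim)
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

encoder = Encoder()
context = nn.GRU(128, 128, batch_first=True)         # autoregressive context c_t
predict = nn.Linear(128, 128)                        # c_t -> predicted z_{t+1}

uplink   = torch.randn(8, 32, 64)  # two "views" of the same activity:
downlink = torch.randn(8, 32, 64)  # uplink and downlink CSI windows

z_u = encoder(uplink)
z_d = encoder(downlink)
c, _ = context(z_u)                                  # (B, T, dim)

loss_cpc = info_nce(predict(c[:, -2]), z_u[:, -1])   # predict one step ahead
loss_bt  = barlow_twins(z_u.mean(1), z_d.mean(1))    # contextual consistency
loss = loss_cpc + loss_bt
loss.backward()
```
Under this reading, the InfoNCE term preserves the temporal structure the abstract says HAR depends on, while the Barlow Twins term enforces consistency between the uplink and downlink views so that transceiver-specific distortions are suppressed.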
Related papers
- A Survey of the Self Supervised Learning Mechanisms for Vision Transformers [5.152455218955949]
The application of self-supervised learning (SSL) in vision tasks has gained significant attention.
We develop a comprehensive taxonomy that systematically classifies SSL techniques.
We discuss the motivations behind SSL, review popular pre-training tasks, and highlight the challenges and advancements in this field.
arXiv Detail & Related papers (2024-08-30T07:38:28Z)
- Can We Break Free from Strong Data Augmentations in Self-Supervised Learning? [18.83003310612038]
Self-supervised learning (SSL) has emerged as a promising solution for addressing the challenge of limited labeled data in deep neural networks (DNNs).
We explore SSL behavior across a spectrum of augmentations, revealing their crucial role in shaping SSL model performance and learning mechanisms.
We propose a novel learning approach that integrates prior knowledge, with the aim of curtailing the need for extensive data augmentations.
arXiv Detail & Related papers (2024-04-15T12:53:48Z)
- Boosting Transformer's Robustness and Efficacy in PPG Signal Artifact Detection with Self-Supervised Learning [0.0]
This study addresses the underutilization of abundant unlabeled data by employing self-supervised learning (SSL) to extract latent features from this data.
Our experiments demonstrate that SSL significantly enhances the Transformer model's ability to learn representations.
This approach holds promise for broader applications in PICU environments, where annotated data is often limited.
arXiv Detail & Related papers (2024-01-02T04:00:48Z)
- Self-Supervision for Tackling Unsupervised Anomaly Detection: Pitfalls and Opportunities [50.231837687221685]
Self-supervised learning (SSL) has transformed machine learning and its many real-world applications.
Unsupervised anomaly detection (AD) has also capitalized on SSL by self-generating pseudo-anomalies; a minimal sketch of this idea follows the entry.
arXiv Detail & Related papers (2023-08-28T07:55:01Z)
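As a hedged illustration of the "self-generated pseudo-anomalies" idea named in the entry above: corrupt normal samples and train a classifier to separate them from the originals. The specific corruption here (shuffling a random slice) is a stand-in for illustration, not the paper's method.
```python
# Illustrative sketch: self-generated pseudo-anomalies for unsupervised AD.
import torch
import torch.nn as nn

def make_pseudo_anomalies(x):
    """Corrupt normal samples so a classifier can learn a decision boundary."""
    x = x.clone()
    B, D = x.shape
    for i in range(B):
        lo = torch.randint(0, D // 2, (1,)).item()
        hi = lo + D // 4
        x[i, lo:hi] = x[i, torch.randperm(hi - lo) + lo]  # shuffle a slice
    return x

normal = torch.randn(16, 32)              # unlabeled "normal" data
pseudo = make_pseudo_anomalies(normal)    # self-generated anomalies

clf = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
inputs = torch.cat([normal, pseudo])
labels = torch.cat([torch.zeros(16), torch.ones(16)])
loss = nn.functional.binary_cross_entropy_with_logits(
    clf(inputs).squeeze(1), labels)
loss.backward()
# At test time, the classifier's logit serves as the anomaly score.
```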
- Augmentation-aware Self-supervised Learning with Conditioned Projector [6.720605329045581]
Self-supervised learning (SSL) is a powerful technique for learning from unlabeled data.
We propose to foster sensitivity to augmentation characteristics in the representation space by conditioning the projector network on augmentation information; a sketch of this idea follows the entry.
Our approach, coined Conditional Augmentation-aware Self-supervised Learning (CASSLE), is directly applicable to typical joint-embedding SSL methods.
arXiv Detail & Related papers (2023-05-31T12:24:06Z)
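A minimal sketch of the conditioned-projector idea from the entry above: the projector sees an embedding of the augmentation parameters alongside the representation, so the backbone need not discard augmentation information. The encoding of augmentation parameters and all layer sizes are assumptions, not CASSLE's exact design.
```python
import torch
import torch.nn as nn

class ConditionedProjector(nn.Module):
    """Projector conditioned on the augmentation applied to its input view."""
    def __init__(self, rep_dim=128, aug_dim=8, out_dim=64):
        super().__init__()
        self.aug_embed = nn.Linear(aug_dim, 32)   # encode augmentation params
        self.proj = nn.Sequential(nn.Linear(rep_dim + 32, 128), nn.ReLU(),
                                  nn.Linear(128, out_dim))
    def forward(self, h, aug_params):
        a = self.aug_embed(aug_params)
        return self.proj(torch.cat([h, a], dim=1))  # condition on augmentation

backbone = nn.Linear(256, 128)     # stand-in encoder
projector = ConditionedProjector()

x = torch.randn(4, 256)            # an augmented view
aug = torch.rand(4, 8)             # e.g. crop box, jitter strengths, flip flag
z = projector(backbone(x), aug)    # z feeds the usual joint-embedding SSL loss
```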
- Does Decentralized Learning with Non-IID Unlabeled Data Benefit from Self Supervision? [51.00034621304361]
We study decentralized learning with unlabeled data through the lens of self-supervised learning (SSL).
We study the effectiveness of contrastive learning algorithms under decentralized learning settings.
arXiv Detail & Related papers (2022-10-20T01:32:41Z)
- On Higher Adversarial Susceptibility of Contrastive Self-Supervised Learning [104.00264962878956]
Contrastive self-supervised learning (CSL) has managed to match or surpass the performance of supervised learning in image and video classification.
It is still largely unknown whether the representations induced by the two learning paradigms are similar in nature.
We identify the uniform distribution of data representations over a unit hypersphere in the CSL representation space as the key contributor to this higher adversarial susceptibility; a sketch of measuring this uniformity follows the entry.
We devise strategies that are simple, yet effective in improving model robustness with CSL training.
arXiv Detail & Related papers (2022-07-22T03:49:50Z)
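The entry above ties adversarial susceptibility to features spread uniformly over the unit hypersphere. One quick way to probe this on a trained encoder's features, assuming the standard uniformity measure of Wang & Isola (2020) rather than the paper's exact metric:
```python
import torch

def uniformity(z, t=2.0):
    """Log of the mean pairwise Gaussian potential; lower = more uniform."""
    z = torch.nn.functional.normalize(z, dim=1)   # project onto hypersphere
    sq_dists = torch.cdist(z, z).pow(2)
    off = ~torch.eye(len(z), dtype=torch.bool)    # drop self-distances
    return torch.log(torch.exp(-t * sq_dists[off]).mean())

spread  = torch.randn(512, 128)                   # roughly uniform directions
clumped = torch.randn(512, 128) * 0.05 + 1.0      # collapsed to one region
print(uniformity(spread).item(), uniformity(clumped).item())
# The first value is markedly lower (more uniform) than the second.
```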
- Collaborative Intelligence Orchestration: Inconsistency-Based Fusion of Semi-Supervised Learning and Active Learning [60.26659373318915]
Active learning (AL) and semi-supervised learning (SSL) are two effective, but often isolated, means of alleviating data scarcity.
We propose an innovative inconsistency-based virtual adversarial algorithm to further investigate the potential superiority of combining SSL and AL.
Two real-world case studies demonstrate the practical industrial value of applying and deploying the proposed data sampling algorithm.
arXiv Detail & Related papers (2022-06-07T13:28:43Z)
- CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [56.20123080771364]
We develop a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF) for reinforcement learning.
CCLF fully exploits sample importance and improves learning efficiency in a self-supervised manner.
We evaluate this approach on the DeepMind Control Suite, Atari, and MiniGrid benchmarks.
arXiv Detail & Related papers (2022-05-02T14:42:05Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)