maxVSTAR: Maximally Adaptive Vision-Guided CSI Sensing with Closed-Loop Edge Model Adaptation for Robust Human Activity Recognition
- URL: http://arxiv.org/abs/2510.26146v1
- Date: Thu, 30 Oct 2025 04:59:28 GMT
- Title: maxVSTAR: Maximally Adaptive Vision-Guided CSI Sensing with Closed-Loop Edge Model Adaptation for Robust Human Activity Recognition
- Authors: Kexing Liu
- Abstract summary: maxVSTAR is a vision-guided model adaptation framework that mitigates domain shift for edge-deployed CSI sensing systems. The proposed system integrates a cross-modal teacher-student architecture, where a high-accuracy YOLO-based vision model serves as a dynamic supervisory signal. Extensive experiments demonstrate the effectiveness of maxVSTAR.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: WiFi Channel State Information (CSI)-based human activity recognition (HAR) provides a privacy-preserving, device-free sensing solution for smart environments. However, its deployment on edge devices is severely constrained by domain shift, where recognition performance deteriorates under varying environmental and hardware conditions. This study presents maxVSTAR (maximally adaptive Vision-guided Sensing Technology for Activity Recognition), a closed-loop, vision-guided model adaptation framework that autonomously mitigates domain shift for edge-deployed CSI sensing systems. The proposed system integrates a cross-modal teacher-student architecture, where a high-accuracy YOLO-based vision model serves as a dynamic supervisory signal, delivering real-time activity labels for the CSI data stream. These labels enable autonomous, online fine-tuning of a lightweight CSI-based HAR model, termed Sensing Technology for Activity Recognition (STAR), directly at the edge. This closed-loop retraining mechanism allows STAR to continuously adapt to environmental changes without manual intervention. Extensive experiments demonstrate the effectiveness of maxVSTAR. When deployed on uncalibrated hardware, the baseline STAR model's recognition accuracy declined from 93.52% to 49.14%. Following a single vision-guided adaptation cycle, maxVSTAR restored the accuracy to 81.51%. These results confirm the system's capacity for dynamic, self-supervised model adaptation in privacy-conscious IoT environments, establishing a scalable and practical paradigm for long-term autonomous HAR using CSI sensing at the network edge.
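The abstract describes a closed loop in which a vision model labels incoming CSI windows so a lightweight CSI classifier can fine-tune itself online. A minimal sketch of that idea follows; all names (`teacher_label`, `STARStudent`) and the toy linear model are hypothetical stand-ins, not the authors' implementation, and the "teacher" here is a trivial placeholder rule rather than a YOLO model.

```python
# Sketch of maxVSTAR-style closed-loop adaptation (hypothetical API):
# a vision "teacher" supplies activity labels for each CSI window, and a
# lightweight "student" classifier is fine-tuned online from those labels.
import numpy as np

rng = np.random.default_rng(0)
N_CLASSES, N_FEATS = 3, 8

def teacher_label(frame):
    """Stand-in for the YOLO-based vision teacher: returns an activity id.
    (Placeholder rule; the real teacher classifies a camera frame.)"""
    return int(frame.sum() > 0)

class STARStudent:
    """Toy linear softmax classifier standing in for the CSI HAR model."""
    def __init__(self):
        self.W = np.zeros((N_FEATS, N_CLASSES))

    def predict(self, x):
        return int(np.argmax(x @ self.W))

    def sgd_step(self, x, y, lr=0.1):
        logits = x @ self.W
        p = np.exp(logits - logits.max())
        p /= p.sum()
        p[y] -= 1.0                       # softmax cross-entropy gradient
        self.W -= lr * np.outer(x, p)

student = STARStudent()
# One adaptation cycle: teacher labels each CSI window, student updates online.
for _ in range(200):
    csi_window = rng.normal(size=N_FEATS)
    frame = csi_window   # in a real system, a synchronized camera frame
    student.sgd_step(csi_window, teacher_label(frame))

# After adaptation, the student largely agrees with the teacher on new windows.
test_x = rng.normal(size=(100, N_FEATS))
acc = np.mean([student.predict(x) == teacher_label(x) for x in test_x])
print(f"post-adaptation agreement with teacher: {acc:.2f}")
```

The loop mirrors the paper's mechanism at a high level only: in the deployed system the teacher and student observe different modalities (video vs. CSI), and adaptation is triggered by detected domain shift rather than run continuously.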
Related papers
- Adversary-Aware Private Inference over Wireless Channels [51.93574339176914]
AI-based sensing at wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications. Because sensitive personal data can be reconstructed by an adversary, transformations of the extracted features are required to reduce the risk of privacy violations. We propose a novel framework for privacy-preserving AI-based sensing, where devices apply transformations to extracted features before transmission to a model server.
arXiv Detail & Related papers (2025-10-23T13:02:14Z) - AI/ML Life Cycle Management for Interoperable AI Native RAN [50.61227317567369]
Artificial intelligence (AI) and machine learning (ML) models are rapidly permeating the 5G Radio Access Network (RAN). These developments lay the foundation for AI-native transceivers as a key enabler for 6G.
arXiv Detail & Related papers (2025-07-24T16:04:59Z) - RoHOI: Robustness Benchmark for Human-Object Interaction Detection [84.78366452133514]
Human-Object Interaction (HOI) detection is crucial for robot-human assistance, enabling context-aware support. We introduce the first robustness benchmark for HOI detection, evaluating model resilience under diverse challenges. Our benchmark, RoHOI, includes 20 corruption types based on the HICO-DET and V-COCO datasets and a new robustness-focused metric.
arXiv Detail & Related papers (2025-07-12T01:58:04Z) - Automating Traffic Monitoring with SHM Sensor Networks via Vision-Supervised Deep Learning [0.18148496440453665]
Bridges, as critical components of civil infrastructure, are increasingly affected by deterioration. Recent advances in deep learning have enabled progress toward continuous, automated monitoring. We propose a fully automated deep-learning pipeline for continuous traffic monitoring using structural health monitoring (SHM) sensor networks.
arXiv Detail & Related papers (2025-06-23T18:27:14Z) - CSI-Bench: A Large-Scale In-the-Wild Dataset for Multi-task WiFi Sensing [13.709208651007167]
WiFi sensing has emerged as a compelling contactless modality for human activity monitoring by capturing fine-grained variations in Channel State Information (CSI). Existing WiFi sensing systems struggle to generalize in real-world settings due to datasets collected in controlled environments with homogeneous hardware and fragmented, session-based recordings that fail to reflect daily activity. We present CSI-Bench, a large-scale, in-the-wild benchmark dataset collected using commercial WiFi edge devices across 26 diverse indoor environments with 35 real users.
arXiv Detail & Related papers (2025-05-28T01:29:29Z) - WiFi based Human Fall and Activity Recognition using Transformer based Encoder Decoder and Graph Neural Networks [8.427757893042374]
We present TED Net, a Transformer-based Encoder-Decoder network designed for estimating human skeleton poses from WiFi signals. TED Net integrates convolutional encoders with transformer-based attention mechanisms to capture skeleton features from CSI signals.
arXiv Detail & Related papers (2025-04-23T12:22:24Z) - Online hand gesture recognition using Continual Graph Transformers [1.3927943269211591]
We propose a novel online recognition system designed for real-time skeleton sequence streaming. Our approach achieves state-of-the-art accuracy and significantly reduces false positive rates, making it a compelling solution for real-time applications. The proposed system can be seamlessly integrated into various domains, including human-robot collaboration and assistive technologies.
arXiv Detail & Related papers (2025-02-20T17:27:55Z) - CODA: A COst-efficient Test-time Domain Adaptation Mechanism for HAR [25.606795179822885]
We propose CODA, a COst-efficient Domain Adaptation mechanism for mobile sensing.
CODA addresses real-time drifts from the data distribution perspective with active learning theory.
We demonstrate the feasibility and potential of online adaptation with CODA.
arXiv Detail & Related papers (2024-03-22T02:50:42Z) - Physical-Layer Semantic-Aware Network for Zero-Shot Wireless Sensing [74.12670841657038]
Device-free wireless sensing has recently attracted significant interest due to its potential to support a wide range of immersive human-machine interactive applications.
Data heterogeneity in wireless signals and data privacy regulation of distributed sensing have been considered as the major challenges that hinder the wide applications of wireless sensing in large area networking systems.
We propose a novel zero-shot wireless sensing solution that allows models constructed in one or a limited number of locations to be directly transferred to other locations without any labeled data.
arXiv Detail & Related papers (2023-12-08T13:50:30Z) - A Wireless-Vision Dataset for Privacy Preserving Human Activity Recognition [53.41825941088989]
A new WiFi-based and video-based neural network (WiNN) is proposed to improve the robustness of activity recognition.
Our results show that the WiVi dataset satisfies the primary demand, and all three branches in the proposed pipeline retain more than 80% activity recognition accuracy.
arXiv Detail & Related papers (2022-05-24T10:49:11Z) - Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
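The SAKDN entry above, like maxVSTAR itself, rests on cross-modal knowledge distillation: a teacher modality's class distribution supervises a student in another modality. A minimal, hedged sketch of the standard temperature-scaled distillation loss follows; the function names are hypothetical and this is the generic Hinton-style formulation, not the paper's specific loss.

```python
# Generic sketch of cross-modal knowledge distillation (hypothetical names):
# a teacher's softened class distribution supervises a student via KL divergence.
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) with temperature T, scaled by T^2 so gradient
    magnitudes stay comparable across temperatures (standard soft-target loss)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T * T)

# Identical logits give zero loss; divergent logits give a positive loss.
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # → 0.0
```

In a sensor-to-vision setting such as SAKDN's, `teacher_logits` would come from wearable-sensor networks and `student_logits` from the video model, typically combined with an ordinary cross-entropy term on ground-truth labels.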
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.