Private Data Leakage in Federated Human Activity Recognition for Wearable Healthcare Devices
- URL: http://arxiv.org/abs/2405.10979v2
- Date: Thu, 20 Jun 2024 08:51:05 GMT
- Title: Private Data Leakage in Federated Human Activity Recognition for Wearable Healthcare Devices
- Authors: Kongyang Chen, Dongping Zhang, Sijia Guan, Bing Mi, Jiaxing Shen, Guoqing Wang
- Abstract summary: We investigate privacy leakage issues within federated user behavior recognition modeling across multiple wearable devices.
Our proposed system entails a federated learning architecture comprising $N$ wearable device users and a parameter server.
Experimentation conducted on five publicly available HAR datasets demonstrates an accuracy rate of 92% for malicious server-based membership inference.
- Score: 6.422056036165425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wearable data serves various health monitoring purposes, such as determining activity states based on user behavior and providing tailored exercise recommendations. However, the individual data perception and computational capabilities of wearable devices are limited, often necessitating the joint training of models across multiple devices. Federated Human Activity Recognition (HAR) presents a viable research avenue, allowing for global model training without the need to upload users' local activity data. Nonetheless, recent studies have revealed significant privacy concerns persisting within federated learning frameworks. To address this gap, we investigate privacy leakage issues within federated user behavior recognition modeling across multiple wearable devices. Our proposed system entails a federated learning architecture comprising $N$ wearable device users and a parameter server, which may be curious to extract sensitive user information from model parameters. Consequently, we consider a membership inference attack mounted by a malicious server, leveraging differences in model generalization across client data. Experimentation conducted on five publicly available HAR datasets demonstrates an accuracy rate of 92% for malicious server-based membership inference. Our study provides preliminary evidence of substantial privacy risks associated with federated training across multiple wearable devices, offering a novel research perspective within this domain.
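The abstract describes the attack only at a high level. As a rough illustration, a curious server holding a client's updated parameters could score candidate records by their loss and threshold the result; the sketch below (PyTorch) assumes a simple loss-threshold attack, and the function names and threshold calibration are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def membership_scores(model, samples, labels):
    """Per-sample cross-entropy loss under a client's updated model."""
    model.eval()
    logits = model(samples)
    return F.cross_entropy(logits, labels, reduction="none")

def infer_membership(model, samples, labels, threshold):
    """Predict membership: members tend to incur lower loss than
    non-members because the local model overfits its training data.
    In practice `threshold` would be calibrated on auxiliary (shadow)
    data with known membership."""
    losses = membership_scores(model, samples, labels)
    return losses < threshold  # True -> predicted training-set member
```

In this reading, the attack works because a client's local model fits its own training records more tightly than unseen data, so the per-sample loss separates members from non-members.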
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on centralized training over directly collected data.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Warmup and Transfer Knowledge-Based Federated Learning Approach for IoT Continuous Authentication [34.6454670154373]
We propose a novel Federated Learning (FL) approach that protects the anonymity of user data and maintains its security.
Our experiments show a significant increase in user authentication accuracy while maintaining user privacy and data security.
arXiv Detail & Related papers (2022-11-10T15:51:04Z)
- On the Privacy Effect of Data Enhancement via the Lens of Memorization [20.63044895680223]
We propose to investigate privacy from a new perspective called memorization.
Through the lens of memorization, we find that previously deployed MIAs produce misleading results as they are less likely to identify samples with higher privacy risks.
We demonstrate that the generalization gap and privacy leakage are less correlated than prior results suggest.
arXiv Detail & Related papers (2022-08-17T13:02:17Z)
- Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings [56.93025161787725]
Federated learning (FL) is a distributed machine learning paradigm that coordinates clients to train a model collaboratively without sharing local data.
We propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters.
We show that the attribute inference attack is achievable for SER systems trained using FL.
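As a hedged sketch of the general idea (not the paper's actual framework), an attacker could train an auxiliary classifier that maps flattened parameter updates to a sensitive attribute, using shadow clients whose attributes are known; the dimensions and names below are illustrative.

```python
import torch
import torch.nn as nn

def flatten_update(old_params, new_params):
    """Stack the per-layer parameter deltas into one feature vector."""
    return torch.cat([(n - o).flatten() for o, n in zip(old_params, new_params)])

# Maps a flattened client update to a sensitive attribute (e.g., a
# binary demographic label); the 10_000-dim input is illustrative.
attack_net = nn.Sequential(
    nn.Linear(10_000, 256),
    nn.ReLU(),
    nn.Linear(256, 2),
)

def fit_attack(updates, attributes, epochs=20):
    """Train on shadow-client updates whose attributes are known."""
    opt = torch.optim.Adam(attack_net.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(attack_net(updates), attributes).backward()
        opt.step()
```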
arXiv Detail & Related papers (2021-12-26T16:50:42Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- Opportunistic Federated Learning: An Exploration of Egocentric Collaboration for Pervasive Computing Applications [20.61034787249924]
We define a new approach, opportunistic federated learning, in which individual devices belonging to different users seek to learn robust models.
In this paper, we explore the feasibility and limits of such an approach, culminating in a framework that supports encounter-based pairwise collaborative learning.
arXiv Detail & Related papers (2021-03-24T15:30:21Z)
- Reliability Check via Weight Similarity in Privacy-Preserving Multi-Party Machine Learning [7.552100672006174]
We focus on addressing the concerns of data privacy, model privacy, and data quality associated with multi-party machine learning.
We present a scheme for privacy-preserving collaborative learning that checks the participants' data quality while guaranteeing data and model privacy.
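One plausible instantiation of a weight-similarity reliability check (the paper's actual scheme additionally guarantees data and model privacy, which this sketch omits) is to compare each client's update direction against the consensus; the names and tolerance value below are illustrative.

```python
import torch
import torch.nn.functional as F

def flag_unreliable_clients(updates, tolerance=0.5):
    """updates: (num_clients, num_params), one flattened weight update
    per client. Clients whose update direction deviates strongly from
    the consensus are flagged as having potentially low-quality data."""
    consensus = updates.mean(dim=0, keepdim=True)
    similarity = F.cosine_similarity(updates, consensus, dim=1)
    return similarity < tolerance  # True -> flag for a reliability check
```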
arXiv Detail & Related papers (2021-01-14T08:55:42Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
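The two ingredients named here, gradient clipping and noise addition, can be illustrated with a generic DP-SGD-style aggregation step; this sketch is a textbook version under assumed shapes, not the paper's exact training setup.

```python
import torch

def private_aggregate(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1):
    """per_sample_grads: (batch, num_params). Clip each per-sample
    gradient to clip_norm, average, then add Gaussian noise calibrated
    to the clipping norm, as in DP-SGD."""
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    clipped = per_sample_grads * (clip_norm / norms).clamp(max=1.0)
    mean_grad = clipped.mean(dim=0)
    sigma = noise_multiplier * clip_norm / per_sample_grads.shape[0]
    return mean_grad + torch.normal(0.0, sigma, size=mean_grad.shape)
```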
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Federated Learning with Heterogeneous Labels and Models for Mobile Activity Monitoring [0.7106986689736827]
On-device Federated Learning proves to be an effective approach for distributed and collaborative machine learning.
We propose a framework for federated label-based aggregation, which leverages overlapping information gain across activities.
Empirical evaluation with the Heterogeneity Human Activity Recognition (HHAR) dataset on Raspberry Pi 2 indicates an average deterministic accuracy increase of at least 11.01%.
arXiv Detail & Related papers (2020-12-04T11:44:17Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
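As a loose sketch of the WAFFLe idea described above (with the Indian Buffet Process sampling replaced by fixed gates for illustration), a client-specific layer can be composed from a shared dictionary of weight factors:

```python
import torch

def compose_client_layer(factor_dict, gates):
    """factor_dict: (K, out_dim, in_dim) dictionary of weight factors
    shared across clients; gates: (K,) binary vector selecting which
    factors a client uses. In WAFFLe the gates are governed by an
    Indian Buffet Process prior; here they are fixed for illustration."""
    return (gates.view(-1, 1, 1) * factor_dict).sum(dim=0)

# Example: a client's layer built from 3 of 8 shared factors.
factors = torch.randn(8, 16, 32)
gates = torch.tensor([1., 0., 1., 0., 0., 1., 0., 0.])
layer_weight = compose_client_layer(factors, gates)  # (16, 32)
```

In this reading, clients differ only in their gate patterns over the shared dictionary, which is one way a client's full effective weights can avoid being exposed directly.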
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.