Private Federated Learning In Real World Application -- A Case Study
- URL: http://arxiv.org/abs/2502.04565v2
- Date: Mon, 10 Feb 2025 18:28:24 GMT
- Title: Private Federated Learning In Real World Application -- A Case Study
- Authors: An Ji, Bortik Bandyopadhyay, Congzheng Song, Natarajan Krishnaswami, Prabal Vashisht, Rigel Smiroldo, Isabel Litton, Sayantan Mahinder, Mona Chitnis, Andrew W Hill
- Abstract summary: This paper presents an implementation of machine learning model training using private federated learning (PFL) on edge devices.
We introduce a novel framework that uses PFL to address the challenge of training a model using users' private data.
The framework ensures that user data remain on individual devices, with only essential model updates transmitted to a central server for aggregation with privacy guarantees.
- Score: 15.877427073033184
- License:
- Abstract: This paper presents an implementation of machine learning model training using private federated learning (PFL) on edge devices. We introduce a novel framework that uses PFL to address the challenge of training a model using users' private data. The framework ensures that user data remain on individual devices, with only essential model updates transmitted to a central server for aggregation with privacy guarantees. We detail the architecture of our app selection model, which incorporates a neural network with attention mechanisms and ambiguity handling through uncertainty management. Experiments conducted through offline simulations and on-device training demonstrate the feasibility of our approach in real-world scenarios. Our results show the potential of PFL to improve the accuracy of an app selection model by adapting to changes in user behavior over time, while adhering to privacy standards. The insights gained from this study are important for industries looking to implement PFL, offering a robust strategy for training a predictive model directly on edge devices while ensuring user data privacy.
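The abstract describes the core PFL loop at a high level: each device trains on its own data and sends only a sanitized model update to the server, which aggregates the updates under privacy guarantees. The paper does not include code, so the sketch below is a minimal, hypothetical illustration of that loop using federated averaging with per-update clipping and Gaussian noise; the toy logistic-regression "app selection" model, the simulated clients, the clip norm, and the noise multiplier are all assumptions for illustration, not the authors' implementation, and formal privacy accounting and secure aggregation are omitted.

```python
# Minimal, hypothetical sketch of the PFL loop described in the abstract:
# raw user data never leaves the device; only clipped, noised weight updates
# are sent to the server for averaging.  All model and hyperparameter choices
# here are illustrative assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, features, labels, lr=0.1, epochs=1):
    """Train a toy logistic-regression 'app selection' model on one device
    and return only the weight delta, never the raw data."""
    w = global_w.copy()
    for _ in range(epochs):
        probs = 1.0 / (1.0 + np.exp(-(features @ w)))
        grad = features.T @ (probs - labels) / len(labels)
        w -= lr * grad
    return w - global_w

def sanitize(update, clip_norm=1.0, noise_multiplier=1.0):
    """Clip the update's L2 norm and add Gaussian noise before transmission.
    (A real system would also track the resulting privacy budget.)"""
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return update * scale + noise

n_features, n_clients, n_rounds = 8, 20, 5
global_w = np.zeros(n_features)

for rnd in range(n_rounds):
    updates = []
    for _ in range(n_clients):
        # Synthetic on-device data standing in for private user behavior.
        X = rng.normal(size=(32, n_features))
        y = (X[:, 0] > 0).astype(float)
        updates.append(sanitize(local_update(global_w, X, y)))
    # The server only ever sees the sanitized updates.
    global_w += np.mean(updates, axis=0)
    print(f"round {rnd + 1}: |global_w| = {np.linalg.norm(global_w):.3f}")
```

A production deployment, as the abstract notes, would pair this update path with stronger guarantees (e.g. secure aggregation and a differential-privacy accountant) and with the attention-based model architecture and uncertainty handling described in the paper.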
Related papers
- Optimal Strategies for Federated Learning Maintaining Client Privacy [8.518748080337838]
This paper studies the tradeoff between model performance and communication cost in Federated Learning systems.
We show that training for one local epoch per global round of training gives optimal performance while preserving the same privacy budget.
arXiv Detail & Related papers (2025-01-24T12:34:38Z) - Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z) - PRIOR: Personalized Prior for Reactivating the Information Overlooked in
Federated Learning [16.344719695572586]
We propose a novel scheme to inject personalized prior knowledge into a global model in each client.
At the heart of our proposed approach is a framework, PFL with Bregman Divergence (pFedBreD).
Our method achieves state-of-the-art performance on 5 datasets and outperforms other methods by up to 3.5% across 8 benchmarks.
arXiv Detail & Related papers (2023-10-13T15:21:25Z) - Privacy Side Channels in Machine Learning Systems [87.53240071195168]
We introduce privacy side channels: attacks that exploit system-level components to extract private information.
For example, we show that deduplicating training data before applying differentially-private training creates a side-channel that completely invalidates any provable privacy guarantees.
We further show that systems which block language models from regenerating training data can be exploited to exfiltrate private keys contained in the training set.
arXiv Detail & Related papers (2023-09-11T16:49:05Z) - Can Public Large Language Models Help Private Cross-device Federated Learning? [58.05449579773249]
We study (differentially) private federated learning (FL) of language models.
Public data has been used to improve privacy-utility trade-offs for both large and small language models.
We propose a novel distribution matching algorithm with theoretical grounding to sample public data close to private data distribution.
arXiv Detail & Related papers (2023-05-20T07:55:58Z) - Federated Privacy-preserving Collaborative Filtering for On-Device Next
App Prediction [52.16923290335873]
We propose a novel SeqMF model to solve the problem of predicting the next app launch during mobile device usage.
We modify the structure of the classical matrix factorization model and update the training procedure to sequential learning.
Another ingredient of the proposed approach is a new privacy mechanism that protects the data sent from users to the remote server.
arXiv Detail & Related papers (2023-02-05T10:29:57Z) - Split Federated Learning on Micro-controllers: A Keyword Spotting
Showcase [1.4794135558227681]
Federated Learning is proposed as a private learning scheme in which users train the model locally, so that their raw data are never collected by servers.
In this work, we implement a simple SFL framework on the Arduino board and verify its correctness on the Chinese digits audio dataset for a keyword spotting application with over 90% accuracy.
On the English digits audio dataset, our SFL implementation achieves 13.89% higher accuracy compared to a state-of-the-art FL implementation.
arXiv Detail & Related papers (2022-10-04T23:42:45Z) - Federated Learning with Noisy User Feedback [26.798303045807508]
Federated learning (FL) has emerged as a method for training ML models on edge devices using sensitive user data.
We propose a strategy for training FL models using positive and negative user feedback.
We show that our method improves substantially over a self-training baseline, achieving performance closer to models trained with full supervision.
arXiv Detail & Related papers (2022-05-06T09:14:24Z) - Personalization Improves Privacy-Accuracy Tradeoffs in Federated
Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Training Production Language Models without Memorizing User Data [7.004279935788177]
This paper presents the first consumer-scale next-word prediction (NWP) model trained with Federated Learning (FL).
We demonstrate the deployment of a differentially private mechanism for the training of a production neural network in FL.
arXiv Detail & Related papers (2020-09-21T17:12:33Z)