Local Differential Privacy for Federated Learning with Fixed Memory Usage and Per-Client Privacy
- URL: http://arxiv.org/abs/2510.12908v1
- Date: Tue, 14 Oct 2025 18:32:08 GMT
- Title: Local Differential Privacy for Federated Learning with Fixed Memory Usage and Per-Client Privacy
- Authors: Rouzbeh Behnia, Jeremiah Birrell, Arman Riasi, Reza Ebrahimi, Kaushik Dutta, Thang Hoang
- Abstract summary: Local differential privacy (LDP) offers strong protection by letting each participant privatize updates before transmission. These issues undermine model generalizability, fairness, and compliance with regulations such as HIPAA and GDPR. We propose L-RDP, a DP method designed for LDP that ensures constant, lower memory usage to reduce dropouts and provides rigorous per-client privacy guarantees.
- Score: 10.651246049060166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) enables organizations to collaboratively train models without sharing their datasets. Despite this advantage, recent studies show that both client updates and the global model can leak private information, limiting adoption in sensitive domains such as healthcare. Local differential privacy (LDP) offers strong protection by letting each participant privatize updates before transmission. However, existing LDP methods were designed for centralized training and introduce challenges in FL, including high resource demands that can cause client dropouts and the lack of reliable privacy guarantees under asynchronous participation. These issues undermine model generalizability, fairness, and compliance with regulations such as HIPAA and GDPR. To address them, we propose L-RDP, a DP method designed for LDP that ensures constant, lower memory usage to reduce dropouts and provides rigorous per-client privacy guarantees by accounting for intermittent participation.
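The local privatization step that LDP relies on can be sketched as follows. This is a minimal illustration of the generic clip-and-noise (Gaussian mechanism) pattern, not the paper's L-RDP algorithm; the function name and the `clip_norm` and `noise_multiplier` parameters are illustrative choices, not the authors' API.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, seed=None):
    """Privatize a client update locally, before transmission.

    Clips the update to a fixed L2 norm (bounding sensitivity), then adds
    Gaussian noise calibrated to that bound. The server only ever sees the
    noisy result.
    """
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Each client runs this on its own update; only the output leaves the device.
client_update = np.array([3.0, 4.0])  # raw update, L2 norm 5
private_update = privatize_update(client_update, seed=0)
```

Note that, unlike central DP, the noise here is added on the client side, so the privacy guarantee holds even against an untrusted server.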
Related papers
- First Provable Guarantees for Practical Private FL: Beyond Restrictive Assumptions [52.82254388526969]
Fed-$$-NormEC is the first differentially private FL framework providing provable convergence and DP guarantees under standard assumptions. Fed-$$-NormEC integrates local updates, separate server and client stepsizes, and, crucially, partial client participation.
arXiv Detail & Related papers (2025-12-25T06:05:15Z) - Can Federated Learning Safeguard Private Data in LLM Training? Vulnerabilities, Attacks, and Defense Evaluation [20.37072541084284]
Federated learning (FL) enables clients to retain local data while sharing only model parameters for collaborative training. We show that attackers can still extract training data from the global model, even using straightforward generation methods. We introduce an enhanced attack strategy tailored to FL, which tracks global model updates during training to intensify privacy leakage.
arXiv Detail & Related papers (2025-09-25T02:28:08Z) - Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models [22.023648710005734]
Federated learning enables collaborative learning among clients via a coordinating server while avoiding direct data sharing. Recent studies on Membership Inference Attacks (MIAs) have challenged this notion, showing high success rates against unprotected training data. We derive theoretical lower bounds for the success rates of low-polynomial time MIAs that exploit vulnerabilities in fully connected or self-attention layers.
arXiv Detail & Related papers (2025-06-16T21:48:11Z) - DP-RTFL: Differentially Private Resilient Temporal Federated Learning for Trustworthy AI in Regulated Industries [0.0]
This paper introduces Differentially Private Resilient Temporal Federated Learning (DP-RTFL). It is designed to ensure training continuity, precise state recovery, and strong data privacy. The framework is particularly suited for critical applications like credit risk assessment using sensitive financial data.
arXiv Detail & Related papers (2025-05-27T16:30:25Z) - FedRE: Robust and Effective Federated Learning with Privacy Preference [20.969342596181246]
Federated Learning (FL) employs gradient aggregation at the server for distributed training to prevent the privacy leakage of raw data. Private information can still be divulged through the analysis of uploaded gradients from clients. Existing methods overlook practical considerations by merely perturbing every sample with the same mechanism.
arXiv Detail & Related papers (2025-05-08T01:50:27Z) - Privacy-Preserving Federated Embedding Learning for Localized Retrieval-Augmented Generation [60.81109086640437]
We propose a novel framework called Federated Retrieval-Augmented Generation (FedE4RAG). FedE4RAG facilitates collaborative training of client-side RAG retrieval models. We apply homomorphic encryption within federated learning to safeguard model parameters.
arXiv Detail & Related papers (2025-04-27T04:26:02Z) - Privacy-Preserving Customer Support: A Framework for Secure and Scalable Interactions [0.0]
This paper introduces the Privacy-Preserving Zero-Shot Learning (PP-ZSL) framework, a novel approach leveraging large language models (LLMs) in a zero-shot learning mode. Unlike conventional machine learning methods, PP-ZSL eliminates the need for local training on sensitive data by utilizing pre-trained LLMs to generate responses directly. The framework incorporates real-time data anonymization to redact or mask sensitive information, retrieval-augmented generation (RAG) for domain-specific query resolution, and robust post-processing to ensure compliance with regulatory standards.
arXiv Detail & Related papers (2024-12-10T17:20:47Z) - Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z) - Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
differential privacy (DP) is a classical approach to capture and ensure the reliability of private protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z) - PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named as PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z) - FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z) - Privacy-Preserving Joint Edge Association and Power Optimization for the Internet of Vehicles via Federated Multi-Agent Reinforcement Learning [74.53077322713548]
We investigate the privacy-preserving joint edge association and power allocation problem.
The proposed solution strikes a compelling trade-off while preserving a higher privacy level than state-of-the-art solutions.
arXiv Detail & Related papers (2023-01-26T10:09:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.