Tackling Privacy Heterogeneity in Differentially Private Federated Learning
- URL: http://arxiv.org/abs/2602.22633v1
- Date: Thu, 26 Feb 2026 05:20:37 GMT
- Title: Tackling Privacy Heterogeneity in Differentially Private Federated Learning
- Authors: Ruichen Xu, Ying-Jun Angela Zhang, Jianwei Huang
- Abstract summary: We present the first systematic study of privacy-aware client selection in differentially private federated learning (DP-FL). We propose a privacy-aware client selection strategy, formulated as a convex optimization problem, that adaptively adjusts selection probabilities to minimize training error. Our approach achieves up to a 10% improvement in test accuracy on benchmark datasets.
- Score: 33.2985262258717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentially private federated learning (DP-FL) enables clients to collaboratively train machine learning models while preserving the privacy of their local data. However, most existing DP-FL approaches assume that all clients share a uniform privacy budget, an assumption that does not hold in real-world scenarios where privacy requirements vary widely. This privacy heterogeneity poses a significant challenge: conventional client selection strategies, which typically rely on data quantity, cannot distinguish between clients providing high-quality updates and those introducing substantial noise due to strict privacy constraints. To address this gap, we present the first systematic study of privacy-aware client selection in DP-FL. We establish a theoretical foundation by deriving a convergence analysis that quantifies the impact of privacy heterogeneity on training error. Building on this analysis, we propose a privacy-aware client selection strategy, formulated as a convex optimization problem, that adaptively adjusts selection probabilities to minimize training error. Extensive experiments on benchmark datasets demonstrate that our approach achieves up to a 10% improvement in test accuracy on CIFAR-10 compared to existing baselines under heterogeneous privacy budgets. These results highlight the importance of incorporating privacy heterogeneity into client selection for practical and effective federated learning.
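The abstract does not spell the optimization out, so here is a minimal sketch, assuming a Gaussian mechanism calibrated to each client's (eps_i, delta_i) and a variance-style error surrogate weighted by per-client noise; the objective, constraints, and constants are illustrative stand-ins for the paper's actual formulation, solved here with cvxpy.

```python
# A minimal sketch of privacy-aware client selection, assuming a Gaussian
# mechanism per client and a variance-style error surrogate; the objective,
# constraints, and constants are illustrative, not the paper's formulation.
import numpy as np
import cvxpy as cp

def gaussian_noise_std(eps, delta, clip_norm):
    """Classic Gaussian-mechanism noise scale for a clipped update (assumed)."""
    return clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def selection_probabilities(eps, delta, n_samples, clip_norm=1.0, expected_clients=10):
    """Solve a convex program for per-client selection probabilities p_i."""
    n = len(eps)
    sigma2 = np.array([gaussian_noise_std(e, d, clip_norm) ** 2
                       for e, d in zip(eps, delta)])
    w = np.asarray(n_samples, dtype=float)
    w = w / w.sum()                         # aggregation weights w_i

    p = cp.Variable(n)
    # Error proxy: sum_i w_i^2 (sigma_i^2 + clip_norm^2) / p_i is convex on
    # p > 0 and penalizes rarely selecting clients whose updates are noisy.
    err = cp.sum(cp.multiply(w ** 2 * (sigma2 + clip_norm ** 2), cp.inv_pos(p)))
    constraints = [p >= 1e-3, p <= 1.0, cp.sum(p) == expected_clients]
    cp.Problem(cp.Minimize(err), constraints).solve()
    return p.value

# Example: 20 clients with heterogeneous budgets eps in [0.5, 8].
rng = np.random.default_rng(0)
eps = rng.uniform(0.5, 8.0, size=20)
probs = selection_probabilities(eps, delta=np.full(20, 1e-5),
                                n_samples=rng.integers(100, 1000, size=20))
print(np.round(probs, 3))   # strict-budget (low eps) clients get lower probability
```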
Related papers
- Federated Learning with Enhanced Privacy via Model Splitting and Random Client Participation [7.780051713043537]
Federated Learning (FL) often adopts differential privacy (DP) to protect client data, but the added noise required for privacy guarantees can substantially degrade model accuracy. We propose model-splitting privacy-amplified federated learning (MS-PAFL). In this framework, each client's model is partitioned into a private submodel, retained locally, and a public submodel, shared for global aggregation.
arXiv Detail & Related papers (2025-09-30T07:51:06Z)
- Optimal Client Sampling in Federated Learning with Client-Level Heterogeneous Differential Privacy [8.683908900328237]
We propose GDPFed, which partitions clients into groups based on their privacy budgets and achieves client-level DP within each group to reduce privacy budget waste. We also introduce GDPFed+, which integrates model sparsification to eliminate unnecessary noise and optimize per-group client sampling ratios.
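A minimal sketch of the grouping idea, assuming a quantile-based split of clients by privacy budget; GDPFed's actual grouping rule and per-group noise calibration are not reproduced here.

```python
# Illustrative grouping of clients by privacy budget. The quantile-based split
# is an assumption; GDPFed's actual grouping rule may differ.
import numpy as np

def group_by_budget(eps, n_groups=3):
    """Return one array of client indices per privacy-budget group."""
    eps = np.asarray(eps, dtype=float)
    edges = np.quantile(eps, np.linspace(0.0, 1.0, n_groups + 1))
    group_id = np.clip(np.searchsorted(edges, eps, side="right") - 1, 0, n_groups - 1)
    return [np.where(group_id == g)[0] for g in range(n_groups)]

eps = np.array([0.5, 0.7, 1.0, 2.0, 2.5, 4.0, 6.0, 8.0])
for g, members in enumerate(group_by_budget(eps)):
    # Noise inside each group is calibrated to that group's strictest budget
    # rather than the global minimum, which is where the waste reduction comes from.
    print(f"group {g}: clients {members.tolist()}, eps_group = {eps[members].min()}")
```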
arXiv Detail & Related papers (2025-05-19T18:55:34Z)
- Multi-Objective Optimization for Privacy-Utility Balance in Differentially Private Federated Learning [12.278668095136098]
Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data. We propose an adaptive clipping mechanism that dynamically adjusts the clipping norm using a multi-objective optimization framework. Our results show that adaptive clipping consistently outperforms fixed-clipping baselines, achieving improved accuracy under the same privacy constraints.
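As a rough illustration of adjusting the clipping norm across rounds, the sketch below uses a geometric update toward a target quantile of client update norms, a common heuristic that stands in for, and is simpler than, the paper's multi-objective formulation.

```python
# A minimal adaptive-clipping sketch: a geometric update toward a target
# quantile of client update norms (a common heuristic). The paper's
# multi-objective optimization of the clipping norm is not reproduced here.
import numpy as np

def update_clip_norm(clip_norm, update_norms, target_quantile=0.5, lr=0.2):
    """Grow the clip norm when too many updates are clipped, shrink it otherwise."""
    frac_below = float(np.mean(np.asarray(update_norms) <= clip_norm))
    return float(clip_norm * np.exp(-lr * (frac_below - target_quantile)))

clip = 1.0
for rnd in range(5):
    # Simulated per-client update norms for one round.
    norms = np.random.default_rng(rnd).lognormal(mean=0.0, sigma=0.5, size=32)
    clip = update_clip_norm(clip, norms)
    print(f"round {rnd}: clip_norm = {clip:.3f}")
```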
arXiv Detail & Related papers (2025-03-27T04:57:05Z)
- Personalized Language Models via Privacy-Preserving Evolutionary Model Merging [53.97323896430374]
Personalization in language models aims to tailor model behavior to individual users or user groups. We propose Privacy-Preserving Model Merging via Evolutionary Algorithms (PriME). PriME employs gradient-free methods to directly optimize utility while reducing privacy risks. Experiments on the LaMP benchmark show that PriME consistently outperforms a range of baselines, achieving up to a 45% improvement in task performance.
arXiv Detail & Related papers (2025-03-23T09:46:07Z)
- Federated Learning With Individualized Privacy Through Client Sampling [2.0432201743624456]
We propose an adapted method for enabling Individualized Differential Privacy (IDP) in Federated Learning (FL). We calculate client-specific sampling rates based on their heterogeneous privacy budgets and integrate them into a modified IDP-FedAvg algorithm. The experimental results demonstrate that our approach achieves clear improvements over uniform DP baselines, reducing the trade-off between privacy and utility.
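A toy sketch of turning heterogeneous budgets into client-specific sampling rates; the linear amplification rule (effective per-round epsilon roughly proportional to the sampling rate q) is a crude approximation used only to show the mechanics, not the paper's calibration.

```python
# Toy mapping from per-client budgets to sampling rates. The linear
# amplification rule (effective per-round eps ~ q * eps_base) is a crude
# approximation, used only to show the mechanics of IDP-style sampling.
import numpy as np

def sampling_rates(eps, rounds, eps_per_round_at_full_participation=1.0):
    """Clients with stricter budgets get lower per-round participation rates."""
    per_round_budget = np.asarray(eps, dtype=float) / rounds
    return np.clip(per_round_budget / eps_per_round_at_full_participation, 0.0, 1.0)

eps = [1.0, 3.0, 10.0, 30.0, 100.0]
q = sampling_rates(eps, rounds=100)
print(dict(zip(eps, np.round(q, 3))))   # e.g. eps=1.0 -> q=0.01, eps=100 -> q=1.0
```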
arXiv Detail & Related papers (2025-01-29T13:11:21Z)
- Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning [54.20871516148981]
We introduce the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM). CEPAM achieves communication efficiency and privacy protection simultaneously. We theoretically analyze the privacy guarantee of CEPAM and investigate the trade-off between user privacy and accuracy.
arXiv Detail & Related papers (2025-01-21T11:16:05Z)
- Cross-silo Federated Learning with Record-level Personalized Differential Privacy [10.921330805109845]
Federated learning (FL) has emerged as a popular approach to better safeguard the privacy of client-side data by protecting clients' contributions during the training process.
Existing solutions typically assume a uniform privacy budget for all records and provide one-size-fits-all solutions that may not be adequate to meet each record's privacy requirement.
We devise a novel framework named rPDP-FL, employing a two-stage hybrid sampling scheme with both uniform client-level sampling and non-uniform record-level sampling to accommodate varying privacy requirements.
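A hedged sketch of one two-stage sampling round, with uniform client-level sampling followed by per-record Poisson sampling; the record-level rates below are a toy function of each record's budget, not rPDP-FL's calibration.

```python
# A sketch of one two-stage hybrid sampling round: uniform client-level
# sampling, then non-uniform (per-record) Poisson sampling. Record rates are
# a toy function of each record's budget, not rPDP-FL's calibration.
import numpy as np

def sample_round(record_eps_per_client, client_rate=0.5, seed=0):
    rng = np.random.default_rng(seed)
    selected = {}
    for cid, record_eps in enumerate(record_eps_per_client):
        if rng.random() > client_rate:                 # stage 1: uniform client sampling
            continue
        record_eps = np.asarray(record_eps, dtype=float)
        q = np.clip(record_eps / record_eps.max(), 0.05, 1.0)  # stage 2: per-record rates
        mask = rng.random(record_eps.size) < q         # Poisson sampling of records
        selected[cid] = np.where(mask)[0]
    return selected

clients = [np.array([0.5, 0.5, 4.0, 8.0]),
           np.array([1.0, 1.0, 1.0]),
           np.array([0.2, 2.0, 2.0, 6.0, 6.0])]
print(sample_round(clients))   # maps sampled client id -> indices of sampled records
```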
arXiv Detail & Related papers (2024-01-29T16:01:46Z)
- Shuffled Differentially Private Federated Learning for Time Series Data Analytics [10.198481976376717]
We develop a privacy-preserving federated learning algorithm for time series data.
Specifically, we employ local differential privacy to extend the privacy protection trust boundary to the clients.
We also incorporate shuffling techniques to achieve privacy amplification, mitigating the accuracy decline caused by local differential privacy.
arXiv Detail & Related papers (2023-07-30T10:30:38Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
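To make the role of clipping concrete, a minimal sketch of one clipped DP-FedAvg aggregation step follows; the noise calibration and constants are illustrative, not the paper's exact algorithm.

```python
# A minimal sketch of one clipped DP-FedAvg aggregation step. Clipping each
# client update to norm C before adding Gaussian noise is where the clipping
# bias discussed above enters; the noise calibration here is illustrative.
import numpy as np

def clip_update(update, clip_norm):
    """Scale the update down so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_fedavg_step(global_model, client_updates, clip_norm=1.0,
                   noise_multiplier=1.0, seed=0):
    rng = np.random.default_rng(seed)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    agg = np.mean(clipped, axis=0)
    # Client-level Gaussian mechanism: the averaged update has per-client
    # sensitivity clip_norm / n, so the noise std is scaled accordingly.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped), size=agg.shape)
    return global_model + agg + noise

model = np.zeros(3)
updates = [np.array([0.5, 0.1, -0.2]), np.array([3.0, -2.0, 1.0])]  # second gets clipped
print(dp_fedavg_step(model, updates))
```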
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)