JSAM: Privacy Straggler-Resilient Joint Client Selection and Incentive Mechanism Design in Differentially Private Federated Learning
- URL: http://arxiv.org/abs/2602.21844v1
- Date: Wed, 25 Feb 2026 12:22:48 GMT
- Title: JSAM: Privacy Straggler-Resilient Joint Client Selection and Incentive Mechanism Design in Differentially Private Federated Learning
- Authors: Ruichen Xu, Ying-Jun Angela Zhang, Jianwei Huang
- Abstract summary: Existing incentive mechanisms rely on unbiased client selection, forcing servers to compensate even the most privacy-sensitive clients. We introduce JSAM, a Bayesian-optimal framework that simultaneously optimizes client selection probabilities and privacy compensation. We prove that servers should preferentially select privacy-tolerant clients while excluding high-sensitivity participants, and uncover the counter-intuitive insight that clients with minimal privacy sensitivity may incur the highest cumulative costs due to frequent participation.
- Score: 33.2985262258717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentially private federated learning faces a fundamental tension: privacy protection mechanisms that safeguard client data simultaneously create quantifiable privacy costs that discourage participation, undermining the collaborative training process. Existing incentive mechanisms rely on unbiased client selection, forcing servers to compensate even the most privacy-sensitive clients ("privacy stragglers"), leading to systemic inefficiency and suboptimal resource allocation. We introduce JSAM (Joint client Selection and privacy compensAtion Mechanism), a Bayesian-optimal framework that simultaneously optimizes client selection probabilities and privacy compensation to maximize training effectiveness under budget constraints. Our approach transforms a complex 2N-dimensional optimization problem into an efficient three-dimensional formulation through novel theoretical characterization of optimal selection strategies. We prove that servers should preferentially select privacy-tolerant clients while excluding high-sensitivity participants, and uncover the counter-intuitive insight that clients with minimal privacy sensitivity may incur the highest cumulative costs due to frequent participation. Extensive evaluations on MNIST and CIFAR-10 demonstrate that JSAM achieves up to 15% improvement in test accuracy compared to existing unbiased selection mechanisms while maintaining cost efficiency across varying data heterogeneity levels.
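The core trade-off described in the abstract, preferring privacy-tolerant (cheap-to-compensate) clients under a compensation budget while excluding the most privacy-sensitive ones, can be illustrated with a minimal sketch. This is a hypothetical greedy allocation for intuition only; all names are invented and it is not JSAM's Bayesian-optimal solution, which jointly optimizes selection probabilities and compensation in a reduced three-dimensional formulation.

```python
# Hypothetical sketch of privacy-aware client selection under a budget.
# Greedily favors privacy-tolerant (cheap) clients and excludes the most
# privacy-sensitive ones once the budget is exhausted. Illustrative only;
# not the paper's actual mechanism.

def select_clients(sensitivity, budget):
    """sensitivity[i]: compensation client i demands per round of participation.
    Returns per-client selection probabilities in [0, 1]."""
    n = len(sensitivity)
    probs = [0.0] * n
    # Cheapest (most privacy-tolerant) clients first.
    for i in sorted(range(n), key=lambda i: sensitivity[i]):
        if sensitivity[i] <= 0:
            probs[i] = 1.0          # costless participation: always select
            continue
        p = min(1.0, budget / sensitivity[i])
        probs[i] = p
        budget -= p * sensitivity[i]
        if budget <= 1e-12:
            break                   # remaining high-sensitivity clients excluded
    return probs

probs = select_clients([0.5, 2.0, 1.0, 8.0], budget=2.0)
```

Note how this toy version already exhibits the counter-intuitive insight from the abstract: the least sensitive client is selected every round, so over many rounds its cumulative compensation can exceed that of clients who demand more per round but rarely participate.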
Related papers
- Tackling Privacy Heterogeneity in Differentially Private Federated Learning [33.2985262258717]
We present the first systematic study of privacy-aware client selection in differentially private federated learning (DP-FL). We propose a privacy-aware client selection strategy, formulated as a convex optimization problem, that adaptively adjusts selection probabilities to minimize training error. Our approach achieves up to a 10% improvement in test accuracy on benchmark datasets.
arXiv Detail & Related papers (2026-02-26T05:20:37Z) - CoSIFL: Collaborative Secure and Incentivized Federated Learning with Differential Privacy [1.1266158555540042]
CoSIFL is a framework that integrates proactive alarming for robust security and local differential privacy. A Tullock contest-inspired incentive module rewards honest clients for both data contributions and reliable alarm triggers. We prove that the server-client game admits a unique equilibrium, and analyze how clients' multi-dimensional attributes, such as non-IID degrees and privacy budgets, jointly affect system efficiency.
arXiv Detail & Related papers (2025-09-27T08:45:40Z) - Optimal Client Sampling in Federated Learning with Client-Level Heterogeneous Differential Privacy [8.683908900328237]
We propose GDPFed, which partitions clients into groups based on their privacy budgets and achieves client-level DP within each group to reduce privacy budget waste. We also introduce GDPFed$+$, which integrates model sparsification to eliminate unnecessary noise and optimizes per-group client sampling ratios.
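The grouping step described above can be sketched in a few lines: clients with similar privacy budgets (epsilon values) are bucketed together so that DP noise can be calibrated per group rather than to the strictest client. The boundaries below are arbitrary placeholders; GDPFed optimizes the partition rather than fixing it.

```python
# Illustrative sketch of partitioning clients by privacy budget (epsilon).
# Group boundaries are arbitrary here; the paper optimizes the grouping.

def group_by_budget(epsilons, boundaries=(1.0, 4.0)):
    """Partition client indices into len(boundaries)+1 groups by epsilon,
    from strictest budgets (group 0) to most permissive."""
    groups = [[] for _ in range(len(boundaries) + 1)]
    for i, eps in enumerate(epsilons):
        g = sum(eps > b for b in boundaries)  # count boundaries exceeded
        groups[g].append(i)
    return groups

groups = group_by_budget([0.5, 2.0, 8.0, 3.5, 0.9])
# groups[0] holds the strictest-budget clients (eps <= 1.0)
```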
arXiv Detail & Related papers (2025-05-19T18:55:34Z) - Personalized Language Models via Privacy-Preserving Evolutionary Model Merging [53.97323896430374]
Personalization in language models aims to tailor model behavior to individual users or user groups. We propose Privacy-Preserving Model Merging via Evolutionary Algorithms (PriME). PriME employs gradient-free methods to directly optimize utility while reducing privacy risks. Experiments on the LaMP benchmark show that PriME consistently outperforms a range of baselines, achieving up to a 45% improvement in task performance.
arXiv Detail & Related papers (2025-03-23T09:46:07Z) - Revisiting Locally Differentially Private Protocols: Towards Better Trade-offs in Privacy, Utility, and Attack Resistance [4.5282933786221395]
Local Differential Privacy (LDP) offers strong privacy protection, especially in settings in which the server collecting the data is untrusted. We introduce a general multi-objective optimization framework for refining LDP protocols. Our framework enables modular and context-aware deployment of LDP mechanisms with tunable privacy-utility trade-offs.
arXiv Detail & Related papers (2025-03-03T12:41:01Z) - Adaptive Client Selection in Federated Learning: A Network Anomaly Detection Use Case [0.30723404270319693]
This paper introduces a client selection framework for Federated Learning (FL) that incorporates differential privacy and fault tolerance. Results demonstrate up to a 7% improvement in accuracy and a 25% reduction in training time compared to the FedL2P approach.
arXiv Detail & Related papers (2025-01-25T02:50:46Z) - Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning [54.20871516148981]
We introduce the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), which achieves communication efficiency and privacy protection simultaneously. We theoretically analyze the privacy guarantee of CEPAM and investigate the trade-offs between user privacy and accuracy.
arXiv Detail & Related papers (2025-01-21T11:16:05Z) - Emulating Full Participation: An Effective and Fair Client Selection Strategy for Federated Learning [50.060154488277036]
In federated learning, client selection is a critical problem that significantly impacts both model performance and fairness. We propose two guiding principles that tackle the inherent conflict between the two metrics while reinforcing each other. Our approach adaptively enhances client diversity by selecting clients based on their data distributions, thereby improving both model performance and fairness.
arXiv Detail & Related papers (2024-05-22T12:27:24Z) - TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
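The two ingredients named above, a ternary compressor and coordinate-wise majority voting, can be sketched as follows. The thresholds and the Gaussian perturbation are illustrative stand-ins, not the paper's exact construction or its f-DP noise calibration.

```python
# Hedged sketch: ternary gradient compression plus server-side majority vote.
# Noise scale and threshold are illustrative, not the paper's calibration.
import random

random.seed(0)  # reproducibility of the demo below

def ternarize(grad, threshold=0.1, noise_scale=0.05):
    """Compress a gradient to {-1, 0, +1} after adding symmetric noise;
    the randomization is what yields a privacy guarantee."""
    out = []
    for g in grad:
        g = g + random.gauss(0.0, noise_scale)
        out.append(1 if g > threshold else -1 if g < -threshold else 0)
    return out

def majority_vote(votes):
    """Coordinate-wise sign of the summed ternary votes. Byzantine-resilient
    in the sense that one corrupted client shifts each sum by at most 2."""
    sums = [sum(col) for col in zip(*votes)]
    return [1 if s > 0 else -1 if s < 0 else 0 for s in sums]

# 25 clients report compressed versions of the same gradient direction.
votes = [ternarize([0.9, -0.8, 0.0]) for _ in range(25)]
agg = majority_vote(votes)
```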
arXiv Detail & Related papers (2024-02-16T16:41:14Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
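The class-balanced-sampling idea can be illustrated with a toy measure and a greedy selection rule. Both are hypothetical simplifications: one plausible imbalance measure is the quadratic distance of the pooled label histogram from uniform, and Fed-CBS additionally computes its measure under homomorphic encryption, which is omitted here.

```python
# Illustrative sketch of class-balance-aware client sampling. The measure
# and greedy rule are simplifications; Fed-CBS evaluates its measure in a
# privacy-preserving way via homomorphic encryption.

def imbalance(counts):
    """Quadratic distance of a label histogram from the uniform distribution."""
    total = sum(counts)
    if total == 0:
        return 0.0
    k = len(counts)
    return sum((c / total - 1 / k) ** 2 for c in counts)

def greedy_balanced_sample(client_counts, m):
    """Pick m clients whose pooled label counts stay closest to balanced."""
    chosen, pooled = [], [0] * len(client_counts[0])
    for _ in range(m):
        best = min(
            (i for i in range(len(client_counts)) if i not in chosen),
            key=lambda i: imbalance(
                [p + c for p, c in zip(pooled, client_counts[i])]))
        chosen.append(best)
        pooled = [p + c for p, c in zip(pooled, client_counts[best])]
    return chosen

# Three clients over two classes; picking 2 pairs complementary skews.
chosen = greedy_balanced_sample([[90, 10], [10, 90], [80, 20]], m=2)
```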
arXiv Detail & Related papers (2022-09-30T05:42:56Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - Shuffled Model of Federated Learning: Privacy, Communication and Accuracy Trade-offs [30.58690911428577]
We consider a distributed empirical risk minimization (ERM) optimization problem with communication efficiency and privacy requirements.
We develop (optimal) communication-efficient schemes for private mean estimation for several $\ell_p$ spaces.
We demonstrate that one can achieve the same privacy and optimization-performance operating point as recent methods that use full-precision communication.
arXiv Detail & Related papers (2020-08-17T09:41:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.