Federated Learning with Enhanced Privacy via Model Splitting and Random Client Participation
- URL: http://arxiv.org/abs/2509.25906v1
- Date: Tue, 30 Sep 2025 07:51:06 GMT
- Title: Federated Learning with Enhanced Privacy via Model Splitting and Random Client Participation
- Authors: Yiwei Li, Shuai Wang, Zhuojun Tian, Xiuhua Wang, Shijian Su
- Abstract summary: Federated Learning (FL) often adopts differential privacy (DP) to protect client data, but the added noise required for privacy guarantees can substantially degrade model accuracy. We propose model-splitting privacy-amplified federated learning (MS-PAFL). In this framework, each client's model is partitioned into a private submodel, retained locally, and a public submodel, shared for global aggregation.
- Score: 7.780051713043537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) often adopts differential privacy (DP) to protect client data, but the added noise required for privacy guarantees can substantially degrade model accuracy. To resolve this challenge, we propose model-splitting privacy-amplified federated learning (MS-PAFL), a novel framework that combines structural model splitting with statistical privacy amplification. In this framework, each client's model is partitioned into a private submodel, retained locally, and a public submodel, shared for global aggregation. Calibrated Gaussian noise is injected only into the public submodel, thereby confining its adverse impact while preserving the utility of the local model. We further present a rigorous theoretical analysis that characterizes the joint privacy amplification achieved through random client participation and local data subsampling under this architecture. The analysis provides tight bounds on both single-round and total privacy loss, demonstrating that MS-PAFL significantly reduces the noise necessary to satisfy a target privacy protection level. Extensive experiments validate our theoretical findings, showing that MS-PAFL consistently attains a superior privacy-utility trade-off and enables the training of highly accurate models under strong privacy guarantees.
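A minimal sketch of the mechanism the abstract describes, in NumPy; the `local_train` routine, the choice of split point, and the fixed noise scale are placeholders, since the paper calibrates the noise through its amplification bounds rather than the constant used here:

```python
import numpy as np

rng = np.random.default_rng(0)

def privatized_public_update(public, new_public, clip, sigma):
    """Clip the public-submodel update and add calibrated Gaussian noise.
    Only this vector leaves the client; the private submodel stays local."""
    delta = new_public - public
    delta = delta * min(1.0, clip / (np.linalg.norm(delta) + 1e-12))
    return delta + rng.normal(0.0, sigma * clip, size=delta.shape)

def server_round(public, clients, q, clip, sigma):
    """One MS-PAFL-style round: each client participates with probability q,
    which (together with local data subsampling) amplifies the DP guarantee."""
    updates = []
    for c in clients:
        if rng.random() < q:                                 # random participation
            new_public, c.private = c.local_train(public)    # hypothetical trainer
            updates.append(privatized_public_update(public, new_public, clip, sigma))
    return public + np.mean(updates, axis=0) if updates else public
```

Because noise is added only to the public submodel, the private submodel's utility is untouched, which is where the improved privacy-utility trade-off comes from.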
Related papers
- Tackling Privacy Heterogeneity in Differentially Private Federated Learning [33.2985262258717]
We present the first systematic study of privacy-aware client selection in differentially private federated learning (DP-FL). We propose a privacy-aware client selection strategy, formulated as a convex optimization problem, that adaptively adjusts selection probabilities to minimize training error. Our approach achieves up to a 10% improvement in test accuracy on benchmark datasets.
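A toy stand-in for the selection rule, assuming the simplest proportional heuristic rather than the paper's actual convex program (whose objective and constraints the summary does not give):

```python
import numpy as np

def selection_probs(budgets, m):
    """Give clients with larger privacy budgets a higher selection probability,
    scaled so roughly m clients participate in expectation; the proportional
    rule is an illustrative assumption."""
    p = m * np.asarray(budgets, dtype=float) / np.sum(budgets)
    return np.clip(p, 0.0, 1.0)

rng = np.random.default_rng(0)
p = selection_probs([0.5, 1.0, 2.0, 4.0], m=2)   # per-client epsilons (made up)
selected = rng.random(len(p)) < p                # independent (Poisson) sampling
```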
arXiv Detail & Related papers (2026-02-26T05:20:37Z) - On the MIA Vulnerability Gap Between Private GANs and Diffusion Models [51.53790101362898]
Generative Adversarial Networks (GANs) and diffusion models have emerged as leading approaches for high-quality image synthesis. We present the first unified theoretical and empirical analysis of the privacy risks faced by differentially private generative models.
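For context, the canonical loss-threshold membership inference attack (MIA) that such analyses build on; the scoring function and threshold `tau` are assumptions, and attacks on generative models typically substitute a reconstruction or likelihood proxy for a supervised loss:

```python
def loss_threshold_mia(score_fn, candidates, tau):
    """Flag a candidate as a training member when the model fits it
    unusually well (score below the calibration threshold tau)."""
    return [score_fn(x) < tau for x in candidates]
```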
arXiv Detail & Related papers (2025-09-03T14:18:22Z) - Enhancing Model Privacy in Federated Learning with Random Masking and Quantization [46.915409150222494]
The rise of large language models (LLMs) has introduced new challenges in distributed systems. This highlights the need for a federated learning approach that can safeguard both sensitive data and proprietary models. We propose FedQSN, a federated learning approach that leverages random masking to obscure a subnetwork of model parameters.
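A sketch of the random-masking idea under the stated description; the masking rate and the decision to zero out hidden entries are assumptions, and FedQSN's quantization step is omitted:

```python
import numpy as np

def masked_update(params, keep_prob, rng):
    """Reveal only a random subnetwork of the parameters, hiding the rest,
    so the transmitted model never exposes the full parameter vector."""
    mask = rng.random(params.shape) < keep_prob   # Bernoulli subnetwork
    return np.where(mask, params, 0.0), mask      # receiver needs the mask too
```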
arXiv Detail & Related papers (2025-08-26T10:34:13Z) - Optimal Client Sampling in Federated Learning with Client-Level Heterogeneous Differential Privacy [8.683908900328237]
We propose GDPFed, which partitions clients into groups based on their privacy budgets and achieves client-level DP within each group to reduce privacy budget waste. We also introduce GDPFed+, which integrates model sparsification to eliminate unnecessary noise and optimizes per-group client sampling ratios.
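The grouping step can be pictured with a simple binning sketch; the bin edges and the per-group noise rule are illustrative, not GDPFed's exact procedure:

```python
import numpy as np

def group_by_budget(budgets, edges):
    """Bin clients by privacy budget so noise in each group is calibrated to
    that group's smallest budget instead of the global minimum."""
    return np.digitize(budgets, edges)            # group index per client

groups = group_by_budget([0.5, 1.0, 2.0, 8.0], edges=[1.0, 4.0])
# -> three groups (low/medium/high budget), each trained with its own sigma
```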
arXiv Detail & Related papers (2025-05-19T18:55:34Z) - FedEM: A Privacy-Preserving Framework for Concurrent Utility Preservation in Federated Learning [17.853502904387376]
Federated Learning (FL) enables collaborative training of models across distributed clients without sharing local data, addressing privacy concerns in decentralized systems. We propose Federated Error Minimization (FedEM), a novel algorithm that incorporates controlled perturbations through adaptive noise injection. Experimental results on benchmark datasets demonstrate that FedEM significantly reduces privacy risks and preserves model accuracy, achieving a robust balance between privacy protection and utility preservation.
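A generic illustration of adaptive noise injection, not FedEM's actual rule (the summary does not specify it); here the perturbation tracks the update's norm and decays as training stabilizes:

```python
import numpy as np

def adaptive_noise(update, round_idx, base_sigma, rng):
    """Scale the injected noise to the update magnitude and shrink it over
    rounds; both choices are assumptions for illustration."""
    sigma = base_sigma * np.linalg.norm(update) / np.sqrt(round_idx + 1)
    return update + rng.normal(0.0, sigma, size=update.shape)
```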
arXiv Detail & Related papers (2025-03-08T02:48:00Z) - Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning [54.20871516148981]
We introduce the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), which achieves communication efficiency and privacy protection simultaneously. We theoretically analyze the privacy guarantee of CEPAM and investigate the trade-off between user privacy and the accuracy of CEPAM.
arXiv Detail & Related papers (2025-01-21T11:16:05Z) - Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capture and ensure the reliability of privacy protections.
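As background (standard f-DP definitions, not a result of this paper): a mechanism M is f-DP when its trade-off function between the two hypothesis-testing errors dominates f on all neighboring datasets, and Gaussian DP is the special case f = G_mu:

```latex
T\bigl(M(S), M(S')\bigr) \ge f \quad \text{for all neighboring } S, S',
\qquad
G_\mu(\alpha) = \Phi\!\bigl(\Phi^{-1}(1-\alpha) - \mu\bigr),
```

where \Phi is the standard normal CDF and \alpha is the type-I error level.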
arXiv Detail & Related papers (2024-08-28T08:22:21Z) - FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution, by consolidating collaborative training across multiple data owners.
However, FedIT encounters limitations such as the scarcity of instructional data and the risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z) - Stronger Privacy Amplification by Shuffling for Rényi and Approximate Differential Privacy [43.33288245778629]
A key result in this model is that randomly shuffling locally randomized data amplifies differential privacy guarantees.
Such amplification implies substantially stronger privacy guarantees for systems in which data is contributed anonymously.
In this work, we improve the state of the art privacy amplification by shuffling results both theoretically and numerically.
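The core of the shuffle model fits in a few lines; randomized response is one standard eps-LDP local randomizer, used here as an illustrative choice:

```python
import numpy as np

def randomized_response(bit, eps, rng):
    """Standard eps-LDP randomized response for one private bit."""
    p_truth = np.exp(eps) / (np.exp(eps) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def shuffled_reports(bits, eps, rng):
    """Each user randomizes locally, then a trusted shuffler permutes the
    batch, destroying the user-to-report mapping; this anonymity is what
    amplifies the local eps into a much smaller central epsilon."""
    reports = [randomized_response(b, eps, rng) for b in bits]
    return rng.permutation(reports)
```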
arXiv Detail & Related papers (2022-08-09T08:13:48Z) - Rényi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning [7.197592390105457]
We study privacy in a distributed learning framework, where clients collaboratively and iteratively build a learning model through interactions with a server from which their data must be kept private.
Motivated by optimization and the federated learning (FL) paradigm, we focus on the case where a small fraction of data samples are randomly sub-sampled in each round.
To obtain even stronger local privacy guarantees, we study this in the shuffle privacy model, where each client randomizes its response using a local differentially private (LDP) mechanism.
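A sketch of the pipeline the abstract outlines, with randomized response again standing in for an arbitrary eps-LDP mechanism and q the subsampling rate:

```python
import numpy as np

def rr(bit, eps, rng):
    """eps-LDP randomized response for one bit (illustrative local randomizer)."""
    return bit if rng.random() < np.exp(eps) / (np.exp(eps) + 1.0) else 1 - bit

def subsampled_shuffle_round(client_data, q, eps, rng):
    """Each client subsamples its records at rate q and randomizes each sampled
    record locally; the shuffler then permutes the pooled reports. Subsampling
    and shuffling both amplify the resulting central (Renyi) DP guarantee."""
    pooled = [rr(x, eps, rng)
              for records in client_data
              for x in records if rng.random() < q]
    return rng.permutation(pooled)
```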
arXiv Detail & Related papers (2021-07-19T11:43:24Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides statistical protection against such attacks, at the price of significantly degrading the accuracy or utility of the trained models.
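One common form of the smoothing step (assumed here from the abstract): post-process the noise-perturbed gradient with the inverse of I - sigma * Laplacian, which damps high-frequency noise and is cheap via the FFT for the 1-D periodic operator:

```python
import numpy as np

def laplacian_smooth(noisy_grad, sigma):
    """Solve (I - sigma * Delta) x = g for the 1-D periodic second-difference
    operator Delta, which diagonalizes under the FFT; smoothing is pure
    post-processing, so it costs no additional privacy budget."""
    n = noisy_grad.shape[0]
    k = np.arange(n)
    denom = 1.0 + sigma * (2.0 - 2.0 * np.cos(2.0 * np.pi * k / n))  # eigenvalues
    return np.real(np.fft.ifft(np.fft.fft(noisy_grad) / denom))
```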
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.