Differential Privacy Personalized Federated Learning Based on Dynamically Sparsified Client Updates
- URL: http://arxiv.org/abs/2503.09192v1
- Date: Wed, 12 Mar 2025 09:34:05 GMT
- Title: Differential Privacy Personalized Federated Learning Based on Dynamically Sparsified Client Updates
- Authors: Chuanyin Wang, Yifei Zhang, Neng Gao, Qiang Luo
- Abstract summary: We propose a differentially private personalized federated learning approach that employs dynamically sparsified client updates. Experimental results on EMNIST, CIFAR-10, and CIFAR-100 demonstrate that our proposed scheme achieves superior performance.
- Score: 12.373620724244475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized federated learning is widely used in scenarios characterized by data heterogeneity, enabling more efficient and automated local training on data-owning terminals, including the automated selection of high-performing model parameters for upload. However, it entails significant risks of privacy leakage. Existing studies attempt to mitigate these risks with differential privacy, but they exhibit two major limitations: (1) the integration of differential privacy into personalized federated learning is insufficiently personalized, introducing excessive noise into the model; and (2) the spatial scope of model update information is not adequately controlled, yielding a suboptimal balance between data privacy and model utility. In this paper, we propose a differentially private personalized federated learning approach that employs dynamically sparsified client updates through reparameterization and an adaptive norm (DP-pFedDSU). Reparameterization training selects the client update information that matters for personalization, reducing the quantity of update information and thereby minimizing the noise that must be introduced. In addition, the dynamic adaptive norm controls the norm space of model updates during training, mitigating the negative impact of clipping on the update information. Together, these strategies substantially improve the integration of differential privacy with personalized federated learning. Experimental results on EMNIST, CIFAR-10, and CIFAR-100 demonstrate that our scheme achieves superior performance and is well suited to more complex personalized federated learning scenarios.
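To make the pipeline concrete, here is a minimal sketch of one client's privatized update in the spirit the abstract describes: sparsify, clip to a norm bound, then add calibrated Gaussian noise. Magnitude-based top-k selection stands in for the paper's reparameterization training, and all function and parameter names (privatized_client_update, top_k_fraction, noise_multiplier) are hypothetical.

```python
# Illustrative sketch, not the authors' code: one client's privatized update.
import numpy as np

def privatized_client_update(update: np.ndarray,
                             top_k_fraction: float = 0.1,
                             clip_norm: float = 1.0,
                             noise_multiplier: float = 1.0) -> np.ndarray:
    """Sparsify a client's model update, clip it, and add Gaussian noise."""
    # 1) Sparsify: keep only the largest-magnitude coordinates. The paper
    #    selects coordinates via reparameterization training; magnitude-based
    #    top-k is used here purely as a simple stand-in.
    k = max(1, int(top_k_fraction * update.size))
    threshold = np.partition(np.abs(update).ravel(), -k)[-k]
    sparse = np.where(np.abs(update) >= threshold, update, 0.0)

    # 2) Clip: bound the L2 norm so each client's contribution has bounded
    #    sensitivity. The paper adapts this norm dynamically during training.
    norm = np.linalg.norm(sparse)
    clipped = sparse * min(1.0, clip_norm / (norm + 1e-12))

    # 3) Noise: Gaussian mechanism calibrated to the clipping norm.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```

The intuition for "minimizing the introduction of noise": after sparsification the same clipping norm covers far fewer nonzero coordinates, so less of the noise budget is spent on update information that was never going to be sent.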
Related papers
- Multi-Objective Optimization for Privacy-Utility Balance in Differentially Private Federated Learning [12.278668095136098]
Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data.
We propose an adaptive clipping mechanism that dynamically adjusts the clipping norm using a multi-objective optimization framework.
Our results show that adaptive clipping consistently outperforms fixed-clipping baselines, achieving improved accuracy under the same privacy constraints.
arXiv Detail & Related papers (2025-03-27T04:57:05Z)
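The entry above does not spell out its multi-objective update rule for the clipping norm. As a point of reference only, quantile-based adaptive clipping (Andrew et al., 2021) is a common way to adjust the norm dynamically; the parameter names below are hypothetical.

```python
# Sketch of quantile-based adaptive clipping, a different published technique,
# shown purely to illustrate a clipping norm that moves during training.
import math

def adapt_clip_norm(clip_norm: float,
                    fraction_clipped: float,
                    target_quantile: float = 0.5,
                    lr: float = 0.2) -> float:
    """Grow the norm when too many updates were clipped; shrink it otherwise."""
    # Geometric update keeps the clipping norm strictly positive.
    return clip_norm * math.exp(lr * (fraction_clipped - target_quantile))
```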
- FedAWA: Adaptive Optimization of Aggregation Weights in Federated Learning Using Client Vectors [50.131271229165165]
Federated Learning (FL) has emerged as a promising framework for distributed machine learning.
Data heterogeneity resulting from differences across user behaviors, preferences, and device characteristics poses a significant challenge for federated learning.
We propose Adaptive Weight Aggregation (FedAWA), a novel method that adaptively adjusts aggregation weights based on client vectors during the learning process.
arXiv Detail & Related papers (2025-03-20T04:49:40Z)
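FedAWA's exact construction of client vectors and aggregation weights is not given in this summary; the sketch below illustrates the general idea of similarity-driven aggregation weights with a hypothetical cosine-similarity softmax.

```python
# Illustrative sketch, not FedAWA's published algorithm: weight each client's
# update by its cosine similarity to the mean update direction.
import numpy as np

def adaptive_aggregation(client_updates: list[np.ndarray],
                         temperature: float = 1.0) -> np.ndarray:
    """Aggregate client updates with similarity-derived weights."""
    stacked = np.stack(client_updates)            # (num_clients, dim)
    mean_dir = stacked.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir) + 1e-12
    norms = np.linalg.norm(stacked, axis=1) + 1e-12
    cos_sim = stacked @ mean_dir / norms          # alignment of each client
    weights = np.exp(cos_sim / temperature)
    weights /= weights.sum()                      # convex combination
    return weights @ stacked                      # weighted aggregate
```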
- Advancing Personalized Federated Learning: Integrative Approaches with AI for Enhanced Privacy and Customization [0.0]
This paper proposes a novel approach that enhances PFL with cutting-edge AI techniques.
We present a model that boosts the performance of individual client models and ensures robust privacy-preserving mechanisms.
This work paves the way for a new era of truly personalized and privacy-conscious AI systems.
arXiv Detail & Related papers (2025-01-30T07:03:29Z)
- Adaptive Client Selection in Federated Learning: A Network Anomaly Detection Use Case [0.30723404270319693]
This paper introduces a client selection framework for Federated Learning (FL) that incorporates differential privacy and fault tolerance.
Results demonstrate up to a 7% improvement in accuracy and a 25% reduction in training time compared to the FedL2P approach.
arXiv Detail & Related papers (2025-01-25T02:50:46Z)
- Optimal Strategies for Federated Learning Maintaining Client Privacy [8.518748080337838]
This paper studies the tradeoff between model performance and communication cost in federated learning.
We show that training for one local epoch per global round gives optimal performance while preserving the same privacy budget.
arXiv Detail & Related papers (2025-01-24T12:34:38Z)
- Efficient Federated Unlearning with Adaptive Differential Privacy Preservation [15.8083997286637]
Federated unlearning (FU) offers a promising solution to erase the impact of specific clients' data on the global model in federated learning (FL).
Current state-of-the-art FU methods extend traditional FL frameworks by leveraging stored historical updates.
We propose FedADP, a method designed to achieve both efficiency and privacy preservation in FU.
arXiv Detail & Related papers (2024-11-17T11:45:15Z)
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
Federated recommender systems address both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
FedIT encounters limitations such as scarcity of instructional data and risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle the data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
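The summary does not describe FedReg's own regularizer. Purely as an illustration of constraining local training to limit forgetting, the sketch below uses a FedProx-style proximal term (a different published technique) that penalizes drift from the global model.

```python
# Generic stand-in for a forgetting-alleviating local objective (FedProx-style).
import torch

def local_loss(model: torch.nn.Module,
               global_params: list[torch.Tensor],
               task_loss: torch.Tensor,
               mu: float = 0.01) -> torch.Tensor:
    """Task loss plus a proximal penalty anchoring local weights to the global model."""
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    return task_loss + 0.5 * mu * prox
```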
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
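The summary leaves the smoothing step implicit. Below is a minimal sketch of Laplacian smoothing applied as post-processing to a noisy (differentially private) aggregated update, solving (I + sigma_s * L) x = v with a periodic 1-D Laplacian via FFT; sigma_s and the function name are hypothetical, and post-processing consumes no additional privacy budget.

```python
# Sketch: denoise a DP-noised update with a circulant 1-D Laplacian smoother.
import numpy as np

def laplacian_smooth(noisy_update: np.ndarray, sigma_s: float = 1.0) -> np.ndarray:
    """Solve (I + sigma_s * L) x = v in the Fourier domain."""
    v = noisy_update.ravel()
    n = v.size
    # Eigenvalues of the periodic 1-D Laplacian: 2 - 2*cos(2*pi*k/n).
    eig = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)
    smoothed = np.fft.ifft(np.fft.fft(v) / (1.0 + sigma_s * eig)).real
    return smoothed.reshape(noisy_update.shape)
```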
This list is automatically generated from the titles and abstracts of the papers on this site.