An Adaptive Differentially Private Federated Learning Framework with Bi-level Optimization
- URL: http://arxiv.org/abs/2602.06838v1
- Date: Fri, 06 Feb 2026 16:27:33 GMT
- Title: An Adaptive Differentially Private Federated Learning Framework with Bi-level Optimization
- Authors: Jin Wang, Hui Ma, Fei Xing, Ming Yan
- Abstract summary: Federated learning enables collaborative model training across distributed clients while preserving data privacy. In practical deployments, device heterogeneity and non-independent and identically distributed (Non-IID) data often lead to highly unstable and biased gradient updates. We propose an adaptive differentially private federated learning framework that explicitly targets model efficiency under heterogeneous and privacy-constrained settings.
- Score: 10.218291445871435
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning enables collaborative model training across distributed clients while preserving data privacy. However, in practical deployments, device heterogeneity and non-independent and identically distributed (Non-IID) data often lead to highly unstable and biased gradient updates. When differential privacy is enforced, conventional fixed gradient clipping and Gaussian noise injection may further amplify gradient perturbations, resulting in training oscillations and degraded model performance. To address these challenges, we propose an adaptive differentially private federated learning framework that explicitly targets model efficiency under heterogeneous and privacy-constrained settings. On the client side, a lightweight local compression module is introduced to regularize intermediate representations and constrain gradient variability, thereby mitigating noise amplification during local optimization. On the server side, an adaptive gradient clipping strategy dynamically adjusts clipping thresholds based on historical update statistics to avoid over-clipping and noise domination. Furthermore, a constraint-aware aggregation mechanism is designed to suppress unreliable or noise-dominated client updates and stabilize global optimization. Extensive experiments on CIFAR-10 and SVHN demonstrate improved convergence stability and classification accuracy.
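The server-side adaptive clipping and noise-injection idea described above can be sketched as follows. The abstract gives no pseudocode, so the quantile-based threshold, the EMA smoothing, and all parameter names here are illustrative assumptions rather than the authors' exact rule:

```python
import numpy as np

def adaptive_clip_and_noise(client_updates, history, noise_multiplier=1.0,
                            quantile=0.5, ema=0.9):
    """Clip each client update to a threshold adapted from historical
    update-norm statistics, then add Gaussian noise scaled to that threshold.
    `history` is the smoothed threshold carried over from previous rounds
    (None on the first round)."""
    norms = [np.linalg.norm(u) for u in client_updates]
    # Adaptive threshold: EMA of a per-round norm quantile (illustrative choice).
    target = np.quantile(norms, quantile)
    threshold = target if history is None else ema * history + (1 - ema) * target
    # Scale each update down so its norm never exceeds the threshold.
    clipped = [u * min(1.0, threshold / (n + 1e-12))
               for u, n in zip(client_updates, norms)]
    agg = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise std proportional to the clipping threshold.
    noisy = agg + np.random.normal(
        0.0, noise_multiplier * threshold / len(clipped), size=agg.shape)
    return noisy, threshold
```

The returned `threshold` is fed back as `history` in the next round, which is what lets the clipping level track the gradient statistics instead of staying fixed.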
Related papers
- Not All Preferences Are Created Equal: Stability-Aware and Gradient-Efficient Alignment for Reasoning Models [52.48582333951919]
We propose a dynamic framework designed to enhance alignment reliability by maximizing the Signal-to-Noise Ratio of policy updates. SAGE (Stability-Aware Gradient Efficiency) integrates a coarse-grained curriculum mechanism that refreshes candidate pools based on model competence. Experiments on multiple mathematical reasoning benchmarks demonstrate that SAGE significantly accelerates convergence and outperforms static baselines.
arXiv Detail & Related papers (2026-02-01T12:56:10Z) - Adaptive Dual-Weighting Framework for Federated Learning via Out-of-Distribution Detection [53.45696787935487]
Federated Learning (FL) enables collaborative model training across large-scale distributed service nodes. In real-world service-oriented deployments, data generated by heterogeneous users, devices, and application scenarios are inherently non-IID. We propose FLood, a novel FL framework inspired by out-of-distribution (OOD) detection.
arXiv Detail & Related papers (2026-02-01T05:54:59Z) - CO-PFL: Contribution-Oriented Personalized Federated Learning for Heterogeneous Networks [51.43780477302533]
Contribution-Oriented PFL (CO-PFL) is a novel algorithm that dynamically estimates each client's contribution for global aggregation. CO-PFL consistently surpasses state-of-the-art methods in personalization accuracy, robustness, scalability, and convergence stability.
arXiv Detail & Related papers (2025-10-23T05:10:06Z) - ADP-VRSGP: Decentralized Learning with Adaptive Differential Privacy via Variance-Reduced Stochastic Gradient Push [24.62800551742178]
We propose a novel approach called decentralized learning with adaptive differential privacy via variance-reduced stochastic gradient push. This method dynamically adjusts both the noise variance and the learning rate using a stepwise-decaying schedule. We show that ADP-VRSGP achieves robust convergence with an appropriate learning rate, significantly improving training stability and speed.
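A stepwise-decaying schedule of the kind this summary describes can be sketched as below; the decay factor, step size, and function name are illustrative assumptions, not ADP-VRSGP's actual parameters:

```python
def stepwise_decay(initial, decay_factor, step_size, round_idx):
    """Return the schedule value at a given round: the initial value is
    multiplied by decay_factor once every step_size rounds."""
    return initial * (decay_factor ** (round_idx // step_size))

# Both the noise standard deviation and the learning rate can share this form:
noise_std = stepwise_decay(1.0, 0.5, 10, round_idx=25)  # two decays: 1.0 * 0.5**2
lr = stepwise_decay(0.1, 0.8, 5, round_idx=12)          # two decays: 0.1 * 0.8**2
```

Decaying the noise variance together with the learning rate keeps the perturbation large early (when privacy cost per round matters most) while letting late rounds converge more precisely.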
arXiv Detail & Related papers (2025-10-23T03:14:59Z) - GCFL: A Gradient Correction-based Federated Learning Framework for Privacy-preserving CPSS [7.171222892215083]
Federated learning, as a distributed architecture, shows great promise for applications in Cyber-Physical-Social Systems (CPSS). This paper proposes a novel framework for differentially private federated learning that balances rigorous privacy guarantees with accuracy by introducing a server-side gradient correction mechanism. We evaluate our framework on several benchmark datasets, and the experimental results demonstrate that it achieves state-of-the-art performance under the same privacy budget.
arXiv Detail & Related papers (2025-06-04T06:52:37Z) - Adaptive Deadline and Batch Layered Synchronized Federated Learning [66.93447103966439]
Federated learning (FL) enables collaborative model training across distributed edge devices while preserving data privacy, and typically operates in a round-based synchronous manner. We propose ADEL-FL, a novel framework that jointly optimizes per-round deadlines and user-specific batch sizes for layer-wise aggregation.
arXiv Detail & Related papers (2025-05-29T19:59:18Z) - CANet: ChronoAdaptive Network for Enhanced Long-Term Time Series Forecasting under Non-Stationarity [0.0]
We introduce a novel architecture, ChronoAdaptive Network (CANet), inspired by style-transfer techniques. The core of CANet is the Non-stationary Adaptive Normalization module, seamlessly integrating the Style Blending Gate and Adaptive Instance Normalization (AdaIN). Experiments on real-world datasets validate CANet's superiority over state-of-the-art methods, achieving a 42% reduction in MSE and a 22% reduction in MAE.
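AdaIN itself has a standard closed form (from the style-transfer literature): re-normalize the content features so their per-channel statistics match those of a style input. The sketch below applies it along the last axis; the Style Blending Gate is CANet-specific and not reproduced here:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: shift and scale the content
    features so their per-channel mean and std match the style features."""
    c_mu = content.mean(axis=-1, keepdims=True)
    c_std = content.std(axis=-1, keepdims=True)
    s_mu = style.mean(axis=-1, keepdims=True)
    s_std = style.std(axis=-1, keepdims=True)
    return s_std * (content - c_mu) / (c_std + eps) + s_mu
```

In a non-stationarity setting, "style" statistics drawn from recent windows let the network adapt its normalization to the current distribution rather than a fixed global one.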
arXiv Detail & Related papers (2025-04-24T20:05:33Z) - Multi-Objective Optimization for Privacy-Utility Balance in Differentially Private Federated Learning [12.278668095136098]
Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data. We propose an adaptive clipping mechanism that dynamically adjusts the clipping norm using a multi-objective optimization framework. Our results show that adaptive clipping consistently outperforms fixed-clipping baselines, achieving improved accuracy under the same privacy constraints.
arXiv Detail & Related papers (2025-03-27T04:57:05Z) - FedAWA: Adaptive Optimization of Aggregation Weights in Federated Learning Using Client Vectors [50.131271229165165]
Federated Learning (FL) has emerged as a promising framework for distributed machine learning. Data heterogeneity resulting from differences across user behaviors, preferences, and device characteristics poses a significant challenge for federated learning. We propose Adaptive Weight Aggregation (FedAWA), a novel method that adaptively adjusts aggregation weights based on client vectors during the learning process.
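One simple way to derive aggregation weights from client update vectors is to up-weight clients whose updates align with the consensus direction. This is an illustrative sketch only; FedAWA's actual weighting rule may differ, and the similarity measure and temperature are assumptions:

```python
import numpy as np

def adaptive_aggregation_weights(client_vectors, temperature=1.0):
    """Assign each client a softmax weight based on the cosine similarity
    between its update vector and the mean update direction."""
    V = np.stack(client_vectors)
    mean_dir = V.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir) + 1e-12
    sims = V @ mean_dir / (np.linalg.norm(V, axis=1) + 1e-12)
    w = np.exp(sims / temperature)
    return w / w.sum()
```

Compared with FedAvg's fixed sample-count weights, similarity-driven weights let the server discount outlier clients whose updates point away from the aggregate direction.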
arXiv Detail & Related papers (2025-03-20T04:49:40Z) - An Architecture Built for Federated Learning: Addressing Data Heterogeneity through Adaptive Normalization-Free Feature Recalibration [0.3481075494213406]
We propose Adaptive Normalization-free Feature Recalibration (ANFR) to combat heterogeneous data in Federated Learning (FL). ANFR combines weight standardization and channel attention to produce learnable scaling factors for feature maps. Experiments show ANFR consistently outperforms established baselines across various aggregation methods.
arXiv Detail & Related papers (2024-10-02T20:16:56Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.