First Provable Guarantees for Practical Private FL: Beyond Restrictive Assumptions
- URL: http://arxiv.org/abs/2512.21521v1
- Date: Thu, 25 Dec 2025 06:05:15 GMT
- Title: First Provable Guarantees for Practical Private FL: Beyond Restrictive Assumptions
- Authors: Egor Shulgin, Grigory Malinovsky, Sarit Khirirat, Peter Richtárik,
- Abstract summary: Fed-$α$-NormEC is the first differentially private FL framework providing provable convergence and DP guarantees under standard assumptions. Fed-$α$-NormEC integrates local updates, separate server and client stepsizes, and, crucially, partial client participation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables collaborative training on decentralized data. Differential privacy (DP) is crucial for FL, but current private methods often rely on unrealistic assumptions (e.g., bounded gradients or heterogeneity), hindering practical application. Existing works that relax these assumptions typically neglect practical FL features, including multiple local updates and partial client participation. We introduce Fed-$α$-NormEC, the first differentially private FL framework providing provable convergence and DP guarantees under standard assumptions while fully supporting these practical features. Fed-$α$-NormEC integrates local updates (full and incremental gradient steps), separate server and client stepsizes, and, crucially, partial client participation, which is essential for real-world deployment and vital for privacy amplification. Our theoretical guarantees are corroborated by experiments on private deep learning tasks.
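The abstract names the moving parts but not the exact update rule. As a rough illustration only, here is a minimal numpy sketch of a generic DP-FL round with the three features the abstract emphasizes: partial client participation, multiple local steps with a client stepsize, and a server stepsize applied to a clipped-and-noised aggregate. All names (`dp_fl_round`, `clip_norm`, `noise_mult`, ...) are hypothetical, and the clipping/noise calibration is the standard recipe, not necessarily Fed-$α$-NormEC's error-compensated normalization.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(model, data, client_lr=0.1, local_steps=5):
    # Several local gradient steps on a toy quadratic loss 0.5*||w - data||^2.
    w = model.copy()
    for _ in range(local_steps):
        w -= client_lr * (w - data)
    return w - model  # the client's model delta

def dp_fl_round(model, clients, sample_rate=0.3, clip_norm=1.0,
                noise_mult=1.0, server_lr=1.0):
    # Partial participation via Poisson sampling of clients.
    selected = [c for c in clients if rng.random() < sample_rate]
    if not selected:
        return model
    deltas = []
    for data in selected:
        delta = local_update(model, data)
        # Norm control: scale the delta so its norm is at most clip_norm.
        delta *= min(1.0, clip_norm / max(np.linalg.norm(delta), 1e-12))
        deltas.append(delta)
    # Gaussian noise calibrated to the clipping threshold.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=model.shape)
    avg = (np.sum(deltas, axis=0) + noise) / len(selected)
    return model + server_lr * avg  # separate server stepsize

# Toy run: 10 clients with heterogeneous local optima.
clients = [rng.normal(i % 3, 0.1, size=4) for i in range(10)]
model = np.zeros(4)
for _ in range(50):
    model = dp_fl_round(model, clients)
print("final model:", model)
```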
Related papers
- Whom to Trust? Adaptive Collaboration in Personalized Federated Learning [11.923664505655026]
We show that adaptivity in collaboration and fine-grained trust, at the level of individual examples, can be achieved within federated semi-supervised learning. We develop FEDMOSAIC, a personalized co-training method where clients reweight their loss and their contribution to pseudo-labels based on per-example agreement and confidence.
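The per-example reweighting can be pictured with a small sketch: weight each example's loss by whether the local and peer models agree and how confident the peer prediction is. The weighting rule below (`agree * confidence`) is an assumption for illustration, not FEDMOSAIC's published formula.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def reweighted_loss(local_logits, peer_logits, labels):
    # Cross-entropy where each example is weighted by local/peer agreement
    # and peer confidence (illustrative, not the paper's exact rule).
    p_local, p_peer = softmax(local_logits), softmax(peer_logits)
    agree = (p_local.argmax(1) == p_peer.argmax(1)).astype(float)
    confidence = p_peer.max(axis=1)
    weights = agree * confidence  # high when models agree confidently
    ce = -np.log(p_local[np.arange(len(labels)), labels] + 1e-12)
    return (weights * ce).mean()

# Toy example: 4 samples, 3 classes.
rng = np.random.default_rng(1)
logits_a, logits_b = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
print(reweighted_loss(logits_a, logits_b, labels=np.array([0, 1, 2, 0])))
```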
arXiv Detail & Related papers (2025-06-30T20:53:01Z)
- Federated Unlearning Made Practical: Seamless Integration via Negated Pseudo-Gradients [12.27654537722943]
This paper introduces a novel method that leverages negated pseudo-gradient updates for federated unlearning (PUF). Unlike state-of-the-art mechanisms, PUF seamlessly integrates with FL, incurs no additional computational or communication overhead beyond standard FL rounds, and supports concurrent unlearning requests.
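The core idea, as described, is to treat each client's round contribution as a pseudo-gradient and undo it later by applying the negated update. A toy sketch under that reading follows; the bookkeeping (`stored`, `scale`) is hypothetical.

```python
import numpy as np

def unlearn_client(global_model, stored_updates, client_id, scale=1.0):
    # Apply the negation of the client's accumulated pseudo-gradient.
    return global_model - scale * stored_updates[client_id]

# Toy run: the server logs each client's pseudo-gradient (model delta) as it
# aggregates, so unlearning later needs no extra training rounds.
global_model = np.zeros(3)
stored = {}
for cid, local_delta in enumerate([np.array([0.3, 0.0, 0.1]),
                                   np.array([0.0, 0.5, 0.2])]):
    stored[cid] = stored.get(cid, 0.0) + local_delta
    global_model = global_model + local_delta  # standard aggregation step

print("before unlearning client 1:", global_model)
print("after unlearning client 1: ", unlearn_client(global_model, stored, 1))
```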
arXiv Detail & Related papers (2025-04-08T09:05:33Z)
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private. We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
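The abstract doesn't spell out the algorithms, so the sketch below shows only one generic reading of "client-centric" adaptive optimization: the Adam-style optimizer state lives on the client and persists across rounds, while the server simply averages models. Everything here is an assumption for illustration.

```python
import numpy as np

class ClientAdam:
    # Per-client Adam-style state kept on the client across rounds
    # (one generic reading of client-centric adaptive optimization;
    # the paper's actual algorithms may differ).
    def __init__(self, dim, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
        self.m, self.v, self.t = np.zeros(dim), np.zeros(dim), 0
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps

    def step(self, w, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)
        v_hat = self.v / (1 - self.b2 ** self.t)
        return w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

# Each client adapts locally; the server averages the resulting models.
states = [ClientAdam(dim=2) for _ in range(3)]
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
w = np.zeros(2)
for _ in range(200):
    local_models = [s.step(w, w - t) for s, t in zip(states, targets)]
    w = np.mean(local_models, axis=0)
print("global model:", w)
```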
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- FLea: Addressing Data Scarcity and Label Skew in Federated Learning via Privacy-preserving Feature Augmentation [15.298650496155508]
Federated Learning (FL) enables model development by leveraging data distributed across numerous edge devices without transferring local data to a central server.
Existing FL methods face challenges when dealing with scarce and label-skewed data across devices, resulting in local model overfitting and drift.
We propose a framework, FLea, whose key component is privacy-preserving feature augmentation across clients.
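The title points to privacy-preserving feature augmentation; a mixup-style interpolation between local intermediate features and de-identified features received from peers is one plausible instantiation, sketched below. The beta-mixing scheme and all names are assumptions, not FLea's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_features(local_feats, shared_feats, alpha=0.4):
    # Mixup-style interpolation between local intermediate features and
    # features shared by other clients (an illustrative instantiation of
    # privacy-preserving feature augmentation; FLea's scheme may differ).
    lam = rng.beta(alpha, alpha, size=(len(local_feats), 1))
    idx = rng.integers(0, len(shared_feats), size=len(local_feats))
    return lam * local_feats + (1 - lam) * shared_feats[idx]

local = rng.normal(size=(8, 16))    # activations from a local mini-batch
shared = rng.normal(size=(32, 16))  # de-identified features from peers
print(augment_features(local, shared).shape)  # (8, 16)
```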
arXiv Detail & Related papers (2023-12-04T20:24:09Z)
- Learning to Specialize: Joint Gating-Expert Training for Adaptive MoEs in Decentralized Settings [41.98633628526484]
Mixture-of-Experts (MoEs) achieve scalability by dynamically activating subsets of their components. Motivated by inference costs and data heterogeneity, we study how joint training of gating functions and experts can allocate domain-specific expertise.
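To make "joint gating-expert training" concrete, here is a self-contained toy: a soft-gated linear mixture-of-experts trained end-to-end, with gradients flowing through both the gate and the experts. The decentralized aspect of the paper is omitted; this is a generic MoE illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n_experts, d = 3, 4
W_gate = rng.normal(scale=0.1, size=(d, n_experts))  # gating function
W_exp = rng.normal(scale=0.1, size=(n_experts, d))   # one linear expert per row

def forward(X):
    g = softmax(X @ W_gate)   # (n, n_experts) soft routing weights
    outs = X @ W_exp.T        # (n, n_experts) each expert's scalar output
    return (g * outs).sum(axis=1), g, outs

X = rng.normal(size=(64, d))
y = np.sin(X[:, 0]) + X[:, 1]
lr = 0.1
for _ in range(500):
    pred, g, outs = forward(X)
    err = (pred - y)[:, None] * (2.0 / len(X))  # d(MSE)/d(pred)
    W_exp -= lr * (err * g).T @ X               # expert gradients
    grad_g = err * outs                         # d(loss)/d(gate probs)
    grad_z = g * (grad_g - (grad_g * g).sum(1, keepdims=True))  # softmax jac.
    W_gate -= lr * X.T @ grad_z                 # gate gradients
print("train MSE:", float(np.mean((forward(X)[0] - y) ** 2)))
```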
arXiv Detail & Related papers (2023-06-14T15:47:52Z)
- Privacy Preserving Bayesian Federated Learning in Heterogeneous Settings [20.33482170846688]
This paper presents a unified federated learning framework based on customized local Bayesian models that learn well even in the absence of large local datasets.
We use priors in the functional (output) space of the networks to facilitate collaboration across heterogeneous clients.
Experiments on standard FL datasets demonstrate that our approach outperforms strong baselines in both homogeneous and heterogeneous settings.
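A prior in function (output) space can be pictured as a penalty tying a client model's predictions, rather than its weights, to those of a global or prior model on shared inputs. The sketch below is only this simple intuition; the paper's Bayesian formulation is more involved, and all names are illustrative.

```python
import numpy as np

def functional_prior_loss(client_out, global_out, task_loss, strength=0.1):
    # Task loss plus a penalty tying the client model's *outputs* to the
    # global model's outputs on the same inputs -- a prior in function
    # space rather than weight space (illustrative simplification).
    prior = np.mean((client_out - global_out) ** 2)
    return task_loss + strength * prior

rng = np.random.default_rng(0)
client_out = rng.normal(size=(16, 10))  # client predictions on shared inputs
global_out = rng.normal(size=(16, 10))  # prior/global model predictions
print(functional_prior_loss(client_out, global_out, task_loss=0.7))
```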
arXiv Detail & Related papers (2023-06-13T17:55:30Z)
- Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems face large communication burdens and break down if the central FL server fails.
It personalizes the "right" components of the deep model by alternately updating the shared and personal parameters.
To further improve the shared-parameter aggregation process, we propose DFed, which integrates local sharpness minimization.
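A minimal sketch of the alternating scheme the abstract describes: parameters are split into a shared block (exchanged with peers) and a personal block (never communicated), and gradient steps toggle between them. The additive toy loss and all names are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def alternating_local_training(shared, personal, data, lr=0.1, steps=5):
    # Alternately update the shared and personal parameter blocks: freeze
    # one block while taking a gradient step on the other (toy quadratic
    # loss on shared + personal; partitioning follows the abstract).
    for _ in range(steps):
        grad_p = (shared + personal) - data  # step on personal, shared frozen
        personal = personal - lr * grad_p
        grad_s = (shared + personal) - data  # step on shared, personal frozen
        shared = shared - lr * grad_s
    return shared, personal

# Only the shared block would be exchanged between peers / aggregated.
shared, personal = np.zeros(3), np.zeros(3)
shared, personal = alternating_local_training(shared, personal,
                                              data=rng.normal(1.0, 0.1, 3))
print("shared:", shared, "personal:", personal)
```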
arXiv Detail & Related papers (2023-05-24T13:52:18Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this new framework enhances the GAN release and training mechanism to address heterogeneous data distributions.
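A minimal sketch of partial model sharing follows, assuming for illustration that the generator is the shared sub-network; the abstract only states that the GAN model is partially shared, so which part crosses the network is an assumption here.

```python
import numpy as np

def aggregate_partial(client_gans, shared_part="generator"):
    # Average only the shared sub-network across clients; the other
    # sub-network never leaves the client (which part is shared is an
    # assumption -- the abstract only says sharing is partial).
    shared = np.mean([gan[shared_part] for gan in client_gans], axis=0)
    for gan in client_gans:
        gan[shared_part] = shared.copy()  # broadcast shared weights back
    return client_gans

rng = np.random.default_rng(0)
clients = [{"generator": rng.normal(size=4), "discriminator": rng.normal(size=4)}
           for _ in range(3)]
clients = aggregate_partial(clients)
print([c["generator"] for c in clients])         # identical after aggregation
print([c["discriminator"][0] for c in clients])  # still client-specific
```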
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- Personalized Privacy-Preserving Framework for Cross-Silo Federated Learning [0.0]
Federated learning (FL) is a promising decentralized deep learning (DL) framework that enables models to be trained collaboratively across clients without sharing private data.
In this paper, we propose a novel framework, Personalized Privacy-Preserving Federated Learning (PPPFL).
Our proposed framework outperforms multiple FL baselines on different datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2023-02-22T07:24:08Z)
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
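A minimal sketch of one-way offline distillation on unlabeled public data: the local models' softened predictions on a public batch are averaged into ensemble targets, and a central model is trained against them, so only predictions ever cross the network. The paper distills attention maps as well; for brevity this sketch uses logits only, and the temperature and names are assumptions.

```python
import numpy as np

def softmax(z, temp=1.0):
    z = z / temp
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ensemble_distill_targets(client_logits, temp=2.0):
    # Average the local models' softened predictions on an *unlabeled
    # public* batch to form one-way distillation targets; raw client data
    # never leaves the clients.
    return np.mean([softmax(l, temp) for l in client_logits], axis=0)

def distill_loss(student_logits, targets, temp=2.0):
    # Cross-entropy of the student's softened outputs vs. ensemble targets.
    log_p = np.log(softmax(student_logits, temp) + 1e-12)
    return -(targets * log_p).sum(axis=1).mean()

rng = np.random.default_rng(0)
client_logits = [rng.normal(size=(8, 5)) for _ in range(4)]  # on public batch
targets = ensemble_distill_targets(client_logits)
print(distill_loss(rng.normal(size=(8, 5)), targets))
```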
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
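For concreteness, here is a minimal sketch of the clipped, client-level DP FedAvg step the paper analyzes: each client's entire round update is norm-clipped before averaging, and Gaussian noise proportional to the clipping threshold is added at the server. Names and the exact noise calibration are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def clipped_dp_fedavg(model, client_updates, clip=1.0, noise_mult=1.0):
    # Clip each client's whole-round update to norm at most `clip`.
    clipped = [u * min(1.0, clip / max(np.linalg.norm(u), 1e-12))
               for u in client_updates]
    # Average, then add Gaussian noise whose scale is tied to `clip`.
    noise = rng.normal(0.0, noise_mult * clip / len(client_updates),
                       size=model.shape)
    return model + np.mean(clipped, axis=0) + noise

# Heterogeneous clients: updates drawn around different means.
updates = [rng.normal(loc=i % 2, scale=0.5, size=4) for i in range(10)]
print(clipped_dp_fedavg(np.zeros(4), updates))
```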
arXiv Detail & Related papers (2021-06-25T14:47:19Z)