Mitigating Privacy-Utility Trade-off in Decentralized Federated Learning via $f$-Differential Privacy
- URL: http://arxiv.org/abs/2510.19934v1
- Date: Wed, 22 Oct 2025 18:01:08 GMT
- Title: Mitigating Privacy-Utility Trade-off in Decentralized Federated Learning via $f$-Differential Privacy
- Authors: Xiang Li, Buxin Su, Chendi Wang, Qi Long, Weijie J. Su
- Abstract summary: Decentralized Federated Learning (FL) allows local users to collaborate without sharing their data with a central server. Accurately quantifying the privacy budget of private FL algorithms is challenging due to the co-existence of complex algorithmic components. This paper addresses privacy accounting for two decentralized FL algorithms within the $f$-differential privacy ($f$-DP) framework.
- Score: 27.280907787306642
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differentially private (DP) decentralized Federated Learning (FL) allows local users to collaborate without sharing their data with a central server. However, accurately quantifying the privacy budget of private FL algorithms is challenging due to the co-existence of complex algorithmic components such as decentralized communication and local updates. This paper addresses privacy accounting for two decentralized FL algorithms within the $f$-differential privacy ($f$-DP) framework. We develop two new $f$-DP-based accounting methods tailored to decentralized settings: Pairwise Network $f$-DP (PN-$f$-DP), which quantifies privacy leakage between user pairs under random-walk communication, and Secret-based $f$-Local DP (Sec-$f$-LDP), which supports structured noise injection via shared secrets. By combining tools from $f$-DP theory and Markov chain concentration, our accounting framework captures privacy amplification arising from sparse communication, local iterations, and correlated noise. Experiments on synthetic and real datasets demonstrate that our methods yield consistently tighter $(\epsilon,\delta)$ bounds and improved utility compared to R\'enyi DP-based approaches, illustrating the benefits of $f$-DP in decentralized privacy accounting.
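The abstract reports tighter $(\epsilon,\delta)$ bounds derived via $f$-DP. As a minimal illustration of the machinery the paper builds on (not the paper's own accountant, which is not reproduced here), the canonical instance of $f$-DP is $\mu$-Gaussian DP, and the standard conversion of Dong, Roth, and Su from a $\mu$-GDP guarantee to an $(\epsilon,\delta)$ bound can be sketched as:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gdp_to_dp(mu: float, eps: float) -> float:
    """Convert a mu-Gaussian DP guarantee to the delta of an (eps, delta)-DP bound.

    Uses the exact conversion from f-DP theory:
        delta(eps) = Phi(-eps/mu + mu/2) - e^eps * Phi(-eps/mu - mu/2)
    where Phi is the standard normal CDF.
    """
    return norm_cdf(-eps / mu + mu / 2.0) - math.exp(eps) * norm_cdf(-eps / mu - mu / 2.0)
```

Because this conversion is exact (lossless) rather than a composition of Rényi bounds, it is one source of the tighter $(\epsilon,\delta)$ guarantees the abstract describes.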
Related papers
- DP-CSGP: Differentially Private Stochastic Gradient Push with Compressed Communication [71.60998478544028]
We propose Differentially Private Stochastic Gradient Push with Compressed communication (termed DP-CSGP) for decentralized learning over graphs. For general non-convex and smooth objective functions, we show that our algorithm maintains high accuracy with efficient communication.
arXiv Detail & Related papers (2025-12-15T17:37:02Z)
- Information-Theoretic Decentralized Secure Aggregation with Collusion Resilience [95.33295072401832]
We study the problem of decentralized secure aggregation (DSA) from an information-theoretic perspective. We characterize the optimal rate region, which specifies the minimum achievable communication and secret key rates for DSA. Our results establish the fundamental performance limits of DSA, providing insights for the design of provably secure and communication-efficient protocols.
arXiv Detail & Related papers (2025-08-01T12:51:37Z)
- Decentralized Differentially Private Power Method [4.58112062523768]
We propose a novel Decentralized Differentially Private Power Method (D-DP-PM) for performing Principal Component Analysis (PCA) in networked multi-agent settings. Our method ensures $(\epsilon,\delta)$-Differential Privacy (DP) while enabling collaborative estimation of global eigenvectors across the network. Experiments on real-world datasets demonstrate that D-DP-PM achieves superior privacy-utility tradeoffs compared to naive local DP approaches.
arXiv Detail & Related papers (2025-07-30T17:15:50Z)
- Differential Privacy on Trust Graphs [54.55190841518906]
We study differential privacy (DP) in a multi-party setting where each party only trusts a (known) subset of the other parties with its data.
We give a DP algorithm for aggregation with a much better privacy-utility trade-off than in the well-studied local model of DP.
arXiv Detail & Related papers (2024-10-15T20:31:04Z)
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capturing and ensuring reliable privacy protection.
arXiv Detail & Related papers (2024-08-28T08:22:21Z)
- The Privacy Power of Correlated Noise in Decentralized Learning [39.48990597191246]
We propose Decor, a variant of decentralized SGD with differential privacy guarantees.
We do so under SecLDP, our new relaxation of local DP, which protects all user communications against an external eavesdropper and curious users.
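The shared-secret, correlated-noise idea behind SecLDP (and the Sec-$f$-LDP accounting in the main abstract) can be illustrated with a toy sketch: users who share a secret add opposite-signed noise terms that cancel exactly in the aggregate, so each individual message looks heavily noised to an eavesdropper while the sum stays accurate. The function below is an illustrative sketch with scalar updates and Gaussian noise, not the Decor algorithm itself:

```python
import random

def correlated_noisy_updates(updates, sigma_pair, sigma_ind, seed=0):
    """Toy sketch of pairwise-canceling correlated noise.

    Each pair of users i < j derives a noise value n_ij from a shared
    secret; user i adds +n_ij and user j adds -n_ij, on top of a small
    independent noise term. The pairwise terms cancel in the aggregate,
    so the sum is only perturbed by the independent noise.
    (Illustrative only; real protocols derive n_ij from a shared PRG seed.)
    """
    rng = random.Random(seed)
    n = len(updates)
    noisy = [u + rng.gauss(0.0, sigma_ind) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            n_ij = rng.gauss(0.0, sigma_pair)  # stand-in for secret-derived noise
            noisy[i] += n_ij
            noisy[j] -= n_ij
    return noisy
```

Setting `sigma_ind = 0` makes the cancellation visible: individual messages are heavily perturbed, yet their sum equals the sum of the true updates.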
arXiv Detail & Related papers (2024-05-02T06:14:56Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
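As a concrete example of the discrete-valued mechanisms this line of work studies, binary randomized response reports the true bit with probability $e^\epsilon/(e^\epsilon+1)$ and flips it otherwise, which satisfies $\epsilon$-LDP; its exact trade-off curve is a standard piecewise-linear example in $f$-DP analyses. The sketch below is a hedged toy illustration, not a reproduction of the cited paper's tight bounds:

```python
import math
import random

def randomized_response(bit: int, eps: float, rng: random.Random) -> int:
    """Binary randomized response: report the true bit with probability
    e^eps / (e^eps + 1), otherwise report the flipped bit. Satisfies eps-LDP."""
    p_truth = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def debias_mean(reports, eps):
    """Unbiased estimate of the true mean of the bits from noisy reports.

    Since E[report | bit] = (1 - p) + bit * (2p - 1) with p = e^eps/(e^eps+1),
    inverting the affine map recovers the true mean in expectation.
    """
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```

With many users, the debiased estimate concentrates around the true mean even though no individual report reveals its bit with certainty.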
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging [20.39986955578245]
We introduce pairwise network differential privacy, a relaxation of Local Differential Privacy (LDP).
We derive a differentially private decentralized optimization algorithm that alternates between local gradient descent steps and gossip averaging.
Our results show that our algorithms amplify privacy guarantees as a function of the distance between nodes in the graph.
arXiv Detail & Related papers (2022-06-10T13:32:35Z)
- Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between joint DP (JDP) and local DP (LDP) by leveraging the shuffle model of privacy while preserving local privacy.
arXiv Detail & Related papers (2021-12-11T15:23:28Z)
- A(DP)$^2$SGD: Asynchronous Decentralized Parallel Stochastic Gradient Descent with Differential Privacy [15.038697541988746]
A popular distributed learning strategy is federated learning, where there is a central server storing the global model and a set of local computing nodes updating the model parameters with their corresponding data.
In this paper, we present a differentially private version of the asynchronous decentralized parallel SGD framework, A(DP)$^2$SGD for short, which maintains the communication efficiency of ADPSGD and prevents inference by malicious participants.
arXiv Detail & Related papers (2020-08-21T00:56:22Z)
- User-Level Privacy-Preserving Federated Learning: Analysis and Performance Optimization [77.43075255745389]
Federated learning (FL) is capable of preserving private data from mobile terminals (MTs) while training the data into useful models.
From a viewpoint of information theory, it is still possible for a curious server to infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers.
arXiv Detail & Related papers (2020-02-29T10:13:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.