Differential Privacy Analysis of Decentralized Gossip Averaging under Varying Threat Models
- URL: http://arxiv.org/abs/2505.19969v1
- Date: Mon, 26 May 2025 13:31:43 GMT
- Title: Differential Privacy Analysis of Decentralized Gossip Averaging under Varying Threat Models
- Authors: Antti Koskela, Tejas Kulkarni
- Abstract summary: We present a novel privacy analysis of decentralized gossip-based averaging algorithms with additive node-level noise. Our main contribution is a new analytical framework that accurately characterizes privacy leakage across these scenarios. We validate our analysis with numerical results demonstrating superior DP bounds compared to existing approaches.
- Score: 6.790905400046194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fully decentralized training of machine learning models offers significant advantages in scalability, robustness, and fault tolerance. However, achieving differential privacy (DP) in such settings is challenging due to the absence of a central aggregator and varying trust assumptions among nodes. In this work, we present a novel privacy analysis of decentralized gossip-based averaging algorithms with additive node-level noise, both with and without secure summation over each node's direct neighbors. Our main contribution is a new analytical framework based on a linear systems formulation that accurately characterizes privacy leakage across these scenarios. This framework significantly improves upon prior analyses, for example, reducing the R\'enyi DP parameter growth from $O(T^2)$ to $O(T)$, where $T$ is the number of training rounds. We validate our analysis with numerical results demonstrating superior DP bounds compared to existing approaches. We further illustrate our analysis with a logistic regression experiment on MNIST image classification in a fully decentralized setting, demonstrating utility comparable to central aggregation methods.
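The setting in the abstract can be sketched in a few lines. The following is a minimal illustration of gossip averaging with additive node-level Gaussian noise, not the paper's exact algorithm or its secure-summation variant; the ring topology, mixing weights, and noise scale are illustrative assumptions.

```python
import numpy as np

def gossip_average(x0, W, T, sigma, rng=None):
    """Run T rounds of synchronous gossip averaging with additive
    node-level Gaussian noise (sigma=0 recovers exact gossip).

    x0 : (n,) initial node values
    W  : (n, n) doubly stochastic mixing matrix; W[i, j] > 0 only
         if nodes i and j are neighbours
    """
    rng = rng or np.random.default_rng(0)
    # Each node perturbs its local value before communicating, so
    # neighbours only ever observe a noise-protected value.
    x = x0 + sigma * rng.standard_normal(len(x0))
    for _ in range(T):
        x = W @ x  # each node averages over its neighbourhood
    return x

# Ring of 4 nodes: each node mixes with its two neighbours.
W = np.array([[0.5 , 0.25, 0.  , 0.25],
              [0.25, 0.5 , 0.25, 0.  ],
              [0.  , 0.25, 0.5 , 0.25],
              [0.25, 0.  , 0.25, 0.5 ]])
x0 = np.array([1.0, 2.0, 3.0, 4.0])
out = gossip_average(x0, W, T=50, sigma=0.0)
# With sigma=0 every node converges to the exact average 2.5.
```

Because `W` is doubly stochastic, repeated mixing drives every node to the (noisy) global mean; the privacy analysis in the paper concerns what an adversary observing these exchanged values can infer over $T$ rounds.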
Related papers
- Decentralized Differentially Private Power Method [4.58112062523768]
We propose a novel Decentralized Differentially Private Power Method (D-DP-PM) for performing Principal Component Analysis (PCA) in networked multi-agent settings. Our method ensures $(\epsilon,\delta)$-Differential Privacy (DP) while enabling collaborative estimation of global eigenvectors across the network. Experiments on real-world datasets demonstrate that D-DP-PM achieves superior privacy-utility tradeoffs compared to naive local DP approaches.
arXiv Detail & Related papers (2025-07-30T17:15:50Z)
- Dyn-D$^2$P: Dynamic Differentially Private Decentralized Learning with Provable Utility Guarantee [36.82471440872803]
We propose a new dynamic differentially private decentralized learning approach (termed Dyn-D$^2$P). Dyn-D$^2$P dynamically adjusts gradient clipping bounds and noise levels based on gradient convergence. Experiments on benchmark datasets demonstrate the superiority of Dyn-D$^2$P over counterparts employing fixed-level noise.
arXiv Detail & Related papers (2025-05-10T13:57:57Z)
- A Bayesian Approach to Data Point Selection [24.98069363998565]
Data point selection (DPS) is becoming a critical topic in deep learning.
Existing approaches to DPS are predominantly based on a bi-level optimisation (BLO) formulation.
We propose a novel Bayesian approach to DPS.
arXiv Detail & Related papers (2024-11-06T09:04:13Z)
- Stability and Generalization of the Decentralized Stochastic Gradient Descent Ascent Algorithm [80.94861441583275]
We investigate the generalization bound of the decentralized stochastic gradient descent ascent (D-SGDA) algorithm.
Our results analyze how key factors affect the generalization of D-SGDA.
We also balance stability with generalization to obtain the optimal trade-off in the convex-concave setting.
arXiv Detail & Related papers (2023-10-31T11:27:01Z)
- A Robustness Analysis of Blind Source Separation [91.3755431537592]
Blind source separation (BSS) aims to recover an unobserved signal from its mixture $X=f(S)$ under the condition that the transformation $f$ is invertible but unknown.
We present a general framework for analysing such violations and quantifying their impact on the blind recovery of $S$ from $X$.
We show that the behaviour of a generic BSS solution under general deviations from its defining structural assumptions can be profitably analysed in the form of explicit continuity guarantees.
arXiv Detail & Related papers (2023-03-17T16:30:51Z)
- Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions [49.726408540784334]
A key question in PLD-based accounting is how to approximate any (potentially continuous) PLD with a PLD over a specified discrete support.
We show that our pessimistic estimate is the best possible among all pessimistic estimates.
arXiv Detail & Related papers (2022-07-10T04:25:02Z)
- On the Pitfalls of Heteroscedastic Uncertainty Estimation with Probabilistic Neural Networks [23.502721524477444]
We present a synthetic example illustrating how this approach can lead to very poor but stable estimates.
We identify the culprit to be the log-likelihood loss, along with certain conditions that exacerbate the issue.
We present an alternative formulation, termed $\beta$-NLL, in which each data point's contribution to the loss is weighted by the $\beta$-exponentiated variance estimate.
arXiv Detail & Related papers (2022-03-17T08:46:17Z)
- Revisiting PGD Attacks for Stability Analysis of Large-Scale Nonlinear Systems and Perception-Based Control [2.2725929250900947]
We tailor the projected gradient descent (PGD) method developed in the adversarial learning community as a general-purpose ROA analysis tool.
We show that the ROA analysis can be approximated as a constrained problem whose goal is to find the worst-case initial condition.
We present two PGD-based iterative methods which can be used to solve the resultant constrained problem.
arXiv Detail & Related papers (2022-01-03T18:46:58Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with the problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- Optimising cost vs accuracy of decentralised analytics in fog computing environments [0.4898659895355355]
Data gravity, a fundamental concept in Fog Computing, points towards decentralisation of computation for data analysis.
We propose an analytical framework able to find the optimal operating point in this continuum.
We show through simulations that the model accurately predicts the optimal trade-off, which is quite often an intermediate point between full centralisation and full decentralisation.
arXiv Detail & Related papers (2020-12-09T19:05:44Z)
- Implicit Distributional Reinforcement Learning [61.166030238490634]
We propose an implicit distributional actor-critic (IDAC) built on two deep generator networks (DGNs) and a semi-implicit actor (SIA) powered by a flexible policy distribution.
We observe IDAC outperforms state-of-the-art algorithms on representative OpenAI Gym environments.
arXiv Detail & Related papers (2020-07-13T02:52:18Z)
- A Unified Theory of Decentralized SGD with Changing Topology and Local Updates [70.9701218475002]
We introduce a unified convergence analysis of decentralized communication methods.
We derive universal convergence rates for several applications.
Our proofs rely on weak assumptions.
arXiv Detail & Related papers (2020-03-23T17:49:15Z)
- Distributional Robustness and Regularization in Reinforcement Learning [62.23012916708608]
We introduce a new regularizer for empirical value functions and show that it lower bounds the Wasserstein distributionally robust value function.
It suggests using regularization as a practical tool for dealing with external uncertainty in reinforcement learning.
arXiv Detail & Related papers (2020-03-05T19:56:23Z)
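The regularization-as-robustness idea in the last entry can be illustrated with a standard Wasserstein duality fact rather than the paper's specific regularizer (an assumption for illustration only): for an $L$-Lipschitz value function, shifting the state distribution by at most $\epsilon$ in Wasserstein-1 distance changes the expected value by at most $\epsilon L$, so the empirical value minus that penalty is a valid robust lower bound.

```python
import numpy as np

def robust_value_lower_bound(returns, lipschitz, eps):
    """Lower bound on the Wasserstein-1 distributionally robust value.

    By Kantorovich-Rubinstein duality, an L-Lipschitz value function
    changes by at most eps * L under any distribution shift of W1
    distance <= eps, so the empirical value minus that penalty lower
    bounds the worst case. (Illustrative; not the paper's exact
    regularizer.)
    """
    empirical_value = float(np.mean(returns))
    return empirical_value - eps * lipschitz

returns = np.array([1.0, 0.5, 0.8, 1.2])  # hypothetical episode returns
lb = robust_value_lower_bound(returns, lipschitz=1.0, eps=0.1)
# Empirical value 0.875, penalised by eps * L = 0.1 -> lower bound 0.775.
```

The penalty term here plays the role of the regularizer: it converts a statement about an adversarial distribution shift into a simple additive correction to the empirical estimate.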
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.