Decentralized Differentially Private Power Method
- URL: http://arxiv.org/abs/2507.22849v1
- Date: Wed, 30 Jul 2025 17:15:50 GMT
- Title: Decentralized Differentially Private Power Method
- Authors: Andrew Campbell, Anna Scaglione, Sean Peisert
- Abstract summary: We propose a novel Decentralized Differentially Private Power Method (D-DP-PM) for performing Principal Component Analysis (PCA) in networked multi-agent settings. Our method ensures $(\epsilon,\delta)$-Differential Privacy (DP) while enabling collaborative estimation of global eigenvectors across the network. Experiments on real-world datasets demonstrate that D-DP-PM achieves superior privacy-utility tradeoffs compared to naive local DP approaches.
- Score: 4.58112062523768
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel Decentralized Differentially Private Power Method (D-DP-PM) for performing Principal Component Analysis (PCA) in networked multi-agent settings. Unlike conventional decentralized PCA approaches where each agent accesses the full n-dimensional sample space, we address the challenging scenario where each agent observes only a subset of dimensions through row-wise data partitioning. Our method ensures $(\epsilon,\delta)$-Differential Privacy (DP) while enabling collaborative estimation of global eigenvectors across the network without requiring a central aggregator. We achieve this by having agents share only local embeddings of the current eigenvector iterate, leveraging both the inherent privacy from random initialization and carefully calibrated Gaussian noise additions. We prove that our algorithm satisfies the prescribed $(\epsilon,\delta)$-DP guarantee and establish convergence rates that explicitly characterize the impact of the network topology. Our theoretical analysis, based on linear dynamics and high-dimensional probability theory, provides tight bounds on both privacy and utility. Experiments on real-world datasets demonstrate that D-DP-PM achieves superior privacy-utility tradeoffs compared to naive local DP approaches, with particularly strong performance in moderate privacy regimes ($\epsilon\in[2, 5]$). The method converges rapidly, allowing practitioners to trade iterations for enhanced privacy while maintaining competitive utility.
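The abstract describes the core mechanism: a power iteration in which the shared iterate is perturbed with calibrated Gaussian noise, with the random initialization itself contributing to privacy. A minimal centralized sketch of that idea is below; it omits the paper's decentralized structure (row-wise partitioning, agent-to-agent embedding exchange) and does not derive the noise calibration, so `sigma` and all names here are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def dp_power_method(A, T, sigma, rng=None):
    """Sketch of a Gaussian-noise power iteration (illustrative only).

    A: symmetric data matrix (e.g. a covariance matrix).
    T: number of iterations. sigma: noise scale, assumed to be
    calibrated offline to an (epsilon, delta) budget (not shown).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    v = rng.standard_normal(n)    # random init also contributes privacy
    v /= np.linalg.norm(v)
    for _ in range(T):
        # Noisy matrix-vector product, then renormalization.
        w = A @ v + sigma * rng.standard_normal(n)
        v = w / np.linalg.norm(w)
    return v
```

With small `sigma`, the iterate aligns with the dominant eigenvector as in the noiseless power method; larger `sigma` trades utility for privacy, matching the abstract's point that iterations can be traded for privacy.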
Related papers
- Differential Privacy Analysis of Decentralized Gossip Averaging under Varying Threat Models [6.790905400046194]
We present a novel privacy analysis of decentralized gossip-based averaging algorithms with additive node-level noise. Our main contribution is a new analytical framework that accurately characterizes privacy leakage across these scenarios. We validate our analysis with numerical results demonstrating superior DP bounds compared to existing approaches.
arXiv Detail & Related papers (2025-05-26T13:31:43Z) - PDSL: Privacy-Preserved Decentralized Stochastic Learning with Heterogeneous Data Distribution [8.055697728504326]
In decentralized learning, a group of agents collaborates to learn a global model using distributed datasets without a central server. In this paper, we propose PDSL, a novel privacy-preserved decentralized stochastic learning algorithm with heterogeneous data distribution.
arXiv Detail & Related papers (2025-03-31T04:58:05Z) - Differentially Private Policy Gradient [48.748194765816955]
We show that it is possible to find the right trade-off between privacy noise and trust-region size to obtain a performant differentially private policy gradient algorithm. Our results and the complexity of the tasks addressed represent a significant improvement over existing DP algorithms in online RL.
arXiv Detail & Related papers (2025-01-31T12:11:13Z) - Differentially private and decentralized randomized power method [15.955127242261808]
This paper proposes enhanced privacy-preserving variants of the randomized power method. First, we propose a variant that reduces the amount of noise required by current techniques to achieve Differential Privacy. Second, we adapt our method to a decentralized framework in which data is distributed among multiple users.
arXiv Detail & Related papers (2024-11-04T09:53:03Z) - Enhanced Privacy Bound for Shuffle Model with Personalized Privacy [32.08637708405314]
The shuffle model of Differential Privacy (DP) is an enhanced privacy protocol which introduces an intermediate trusted shuffler between local users and a central data curator.
It significantly amplifies the central DP guarantee by anonymizing and shuffling the local randomized data.
This work focuses on deriving the central privacy bound for a more practical setting where personalized local privacy is required by each user.
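The pipeline this entry builds on is simple to state: each user randomizes locally (here, possibly with a personalized budget), and a trusted shuffler permutes the reports before the curator sees them, which amplifies the central guarantee. The sketch below uses binary randomized response as an assumed local randomizer; the amplified central bound itself is the paper's contribution and is not derived here.

```python
import math
import random

def randomized_response(bit, eps):
    """Local randomizer: keep the true bit with prob e^eps / (e^eps + 1)."""
    p_keep = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def shuffle_and_collect(bits, eps_list):
    """Sketch of the shuffle model with personalized local budgets.

    Each user i randomizes with their own eps_list[i]; the trusted
    shuffler then permutes the reports, breaking the link between
    a message and its sender.
    """
    reports = [randomized_response(b, e) for b, e in zip(bits, eps_list)]
    random.shuffle(reports)   # curator only sees the anonymized multiset
    return reports
```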
arXiv Detail & Related papers (2024-07-25T16:11:56Z) - Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z) - FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z) - Differentially Private Federated Clustering over Non-IID Data [59.611244450530315]
The federated clustering (FedC) problem aims to accurately partition unlabeled data samples distributed over massive clients into finite clusters under the orchestration of a server.
We propose a novel FedC algorithm with differential privacy, referred to as DP-Fed, in which partial client participation is also considered.
Various properties of the proposed DP-Fed are obtained through theoretical analyses of privacy protection, especially for the case of non-identically and independently distributed (non-i.i.d.) data.
arXiv Detail & Related papers (2023-01-03T05:38:43Z) - Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging [20.39986955578245]
We introduce pairwise network differential privacy, a relaxation of Local Differential Privacy (LDP).
We derive a differentially private decentralized optimization algorithm that alternates between local gradient descent steps and gossip averaging.
Our results show that our algorithms amplify privacy guarantees as a function of the distance between nodes in the graph.
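The alternation this entry describes, local gradient steps followed by gossip averaging, can be sketched directly. The snippet below is a generic illustration under assumed notation (per-node gradient oracles, a doubly stochastic gossip matrix, Gaussian noise whose DP calibration is omitted), not the paper's exact algorithm or its pairwise privacy analysis.

```python
import numpy as np

def private_decentralized_gd(grads, W, x0, T, step, sigma, rng=None):
    """Sketch: alternate noisy local gradient steps with gossip averaging.

    grads: one gradient function per node. W: doubly stochastic
    gossip matrix. sigma: Gaussian noise scale (calibration omitted).
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.tile(x0, (len(grads), 1)).astype(float)   # one parameter row per node
    for _ in range(T):
        G = np.stack([g(x) for g, x in zip(grads, X)])
        X = X - step * (G + sigma * rng.standard_normal(G.shape))  # noisy local step
        X = W @ X                                                  # gossip averaging
    return X.mean(axis=0)
```

Privacy amplification with graph distance, the paper's key result, would enter through how much of each node's noisy update actually reaches a distant observer after repeated multiplications by `W`.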
arXiv Detail & Related papers (2022-06-10T13:32:35Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - Differentially Private Federated Bayesian Optimization with Distributed Exploration [48.9049546219643]
We introduce differential privacy (DP) into federated Bayesian optimization through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
arXiv Detail & Related papers (2021-10-27T04:11:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.