Differential Privacy for Network Assortativity
- URL: http://arxiv.org/abs/2505.03639v1
- Date: Tue, 06 May 2025 15:40:47 GMT
- Title: Differential Privacy for Network Assortativity
- Authors: Fei Ma, Jinzhi Ouyang, Xincheng Hu
- Abstract summary: The analysis of network assortativity is of great importance for understanding the structural characteristics of, and dynamics upon, networks. It is well known that a network may contain sensitive information, such as the number of friends of an individual in a social network. In this work, we are the first to propose approaches based on differential privacy (DP). Specifically, we design three DP-based algorithms: $Local_{ru}$, $Shuffle_{ru}$, and $Decentral_{ru}$.
- Score: 3.462869032423588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The analysis of network assortativity is of great importance for understanding the structural characteristics of, and dynamics upon, networks. Often, network assortativity is quantified by the assortativity coefficient, defined as the Pearson correlation coefficient between vertex degrees. It is well known that a network may contain sensitive information, such as the number of friends of an individual in a social network (abstracted as the degree of a vertex). Computing the assortativity coefficient can therefore leak private information, which makes a privacy-preserving protocol urgently needed. However, no existing scheme addresses this concern. To bridge this gap, in this work we are the first to propose approaches based on differential privacy (DP for short). Specifically, we design three DP-based algorithms: $Local_{ru}$, $Shuffle_{ru}$, and $Decentral_{ru}$. The first two algorithms, based on Local DP (LDP) and Shuffle DP respectively, are designed for settings where each individual knows only his/her direct friends. In contrast, the third algorithm, based on Decentralized DP (DDP), targets scenarios where each individual has a broader view, i.e., also knows his/her friends' friends. Theoretically, we prove that each algorithm yields an unbiased estimate of the network's assortativity coefficient. We further evaluate the algorithms' performance via mean squared error (MSE), showing that $Shuffle_{ru}$ performs best, followed by $Decentral_{ru}$, with $Local_{ru}$ performing worst. Note that the three algorithms rest on different assumptions, so each has its own applicable scenario. Lastly, extensive numerical simulations demonstrate that the presented approaches suffice to estimate network assortativity while meeting the demand for privacy protection.
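To make the quantity being privatized concrete, the following is a minimal (non-private) Python sketch of the assortativity coefficient as defined above; function and variable names are illustrative.

```python
import numpy as np

def assortativity_coefficient(edges, degree):
    """Pearson correlation between the degrees at the two ends of each edge.

    edges: iterable of (u, v) pairs; degree: dict mapping vertex -> degree.
    Each undirected edge contributes both (deg(u), deg(v)) and (deg(v), deg(u)),
    the standard symmetric convention.
    """
    x, y = [], []
    for u, v in edges:
        x += [degree[u], degree[v]]
        y += [degree[v], degree[u]]
    return np.corrcoef(np.array(x, float), np.array(y, float))[0, 1]

# Toy example: the path graph 0-1-2-3 is disassortative.
edges = [(0, 1), (1, 2), (2, 3)]
degree = {0: 1, 1: 2, 2: 2, 3: 1}
print(assortativity_coefficient(edges, degree))  # -0.5
```

The paper's three algorithms privatize the degree information that feeds this statistic; their unbiased-estimation constructions are not reproduced here.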
Related papers
- Decentralized Differentially Private Power Method [4.58112062523768]
We propose a novel Decentralized Differentially Private Power Method (D-DP-PM) for performing Principal Component Analysis (PCA) in networked multi-agent settings. Our method ensures $(\epsilon,\delta)$-Differential Privacy (DP) while enabling collaborative estimation of global eigenvectors across the network. Experiments on real-world datasets demonstrate that D-DP-PM achieves superior privacy-utility tradeoffs compared to naive local DP approaches.
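As a rough illustration of the classical primitive behind such methods, here is a centralized noisy power iteration sketch (not D-DP-PM itself; the noise scale is uncalibrated and illustrative).

```python
import numpy as np

def noisy_power_method(A, iters=100, sigma=0.05, seed=0):
    """Power iteration with Gaussian noise added to each matrix-vector
    product. Calibrating sigma to the data's sensitivity is what yields
    (epsilon, delta)-DP; that calibration is omitted in this sketch."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v + sigma * rng.standard_normal(A.shape[0])  # noisy product
        v = w / np.linalg.norm(w)
    return v  # approximate top eigenvector

A = np.diag([3.0, 1.0, 0.5])          # dominant eigenvector is e1
print(noisy_power_method(A))
```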
arXiv Detail & Related papers (2025-07-30T17:15:50Z) - Private Networked Federated Learning for Nonsmooth Objectives [7.278228169713637]
This paper develops a networked federated learning algorithm to solve nonsmooth objective functions.
We use the zero-concentrated differential privacy notion (zCDP) to guarantee the confidentiality of the participants.
We provide complete theoretical proof for the privacy guarantees and the algorithm's convergence to the exact solution.
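For context, the Gaussian mechanism is calibrated to zCDP as follows: releasing $f(x) + \mathcal{N}(0, \sigma^2)$ with $\sigma^2 = \Delta^2/(2\rho)$, where $\Delta$ is the $\ell_2$-sensitivity, satisfies $\rho$-zCDP (Bun & Steinke, 2016). A minimal sketch:

```python
import numpy as np

def zcdp_gaussian_release(value, sensitivity, rho, rng=None):
    """Gaussian mechanism calibrated to rho-zCDP: add N(0, sigma^2) noise
    with sigma^2 = sensitivity^2 / (2 * rho)."""
    rng = rng or np.random.default_rng()
    sigma = sensitivity / np.sqrt(2.0 * rho)
    return value + rng.normal(0.0, sigma)

# e.g. privatize a statistic with L2-sensitivity 1 under rho = 0.1
print(zcdp_gaussian_release(0.42, sensitivity=1.0, rho=0.1))
```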
arXiv Detail & Related papers (2023-06-24T16:13:28Z) - Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
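A common way to operationalize this OOD setting is to train on short inputs and test on longer ones. A toy data-generation sketch, with sorting as an illustrative stand-in task (the paper's benchmark tasks differ):

```python
import numpy as np

def make_pairs(n, length, rng):
    """Input-output pairs for a toy algorithmic task (sorting)."""
    x = rng.integers(0, 100, size=(n, length))
    return x, np.sort(x, axis=1)

rng = np.random.default_rng(0)
train_x, train_y = make_pairs(1000, length=8, rng=rng)  # in-distribution
test_x, test_y = make_pairs(100, length=32, rng=rng)    # OOD: 4x longer inputs
```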
arXiv Detail & Related papers (2022-11-01T18:33:20Z) - On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network [55.56019538079826]
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms are restricted to the single-machine setting and thus incapable of handling distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient tracking communication mechanism and two different gradient estimators.
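A minimal sketch of the gradient-tracking mechanism on a toy single-level consensus problem (the bilevel machinery and the paper's gradient estimators are omitted):

```python
import numpy as np

# n agents minimize sum_i (x - b_i)^2 / 2 over a network; optimum is mean(b).
n = 4
b = np.array([1.0, 2.0, 3.0, 6.0])
W = np.full((n, n), 1.0 / n)        # doubly stochastic mixing (complete graph)
x = np.zeros(n)                     # local iterates
grad = x - b                        # local gradients: f_i'(x_i) = x_i - b_i
y = grad.copy()                     # trackers, initialized to local gradients
alpha = 0.1
for _ in range(200):
    x_new = W @ x - alpha * y       # consensus step minus tracked gradient
    grad_new = x_new - b
    y = W @ y + grad_new - grad     # track the network-average gradient
    x, grad = x_new, grad_new
print(x, b.mean())                  # all iterates converge to mean(b) = 3
```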
arXiv Detail & Related papers (2022-06-30T05:29:52Z) - Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
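For reference, a sketch of the per-sample clip-or-normalize-then-noise aggregation that distinguishes the two algorithms (DP-NSGD's regularizer is dropped here, and the noise scale is illustrative):

```python
import numpy as np

def private_update(per_sample_grads, C, sigma, rng, normalize=False):
    """DP-SGD clips each per-sample gradient to norm <= C; DP-NSGD (without
    its regularizer) instead normalizes each gradient to norm C. Gaussian
    noise with std sigma * C is then added to the sum."""
    g = np.asarray(per_sample_grads, dtype=float)
    norms = np.maximum(np.linalg.norm(g, axis=1, keepdims=True), 1e-12)
    if normalize:
        g = C * g / norms                       # normalize (NSGD-like)
    else:
        g = g * np.minimum(1.0, C / norms)      # clip (DP-SGD)
    noisy_sum = g.sum(axis=0) + rng.normal(0.0, sigma * C, size=g.shape[1])
    return noisy_sum / len(g)

rng = np.random.default_rng(0)
grads = rng.standard_normal((32, 10))           # 32 per-sample gradients
print(private_update(grads, C=1.0, sigma=1.0, rng=rng))
```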
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Distributed Online Private Learning of Convex Nondecomposable Objectives [7.5585719185840485]
We deal with a general distributed constrained online learning problem with privacy over time-varying networks.
We propose two algorithms, named as DPSDA-C and DPSDA-PS, under this framework.
Theoretical results show that both algorithms attain an expected regret upper bound of $\mathcal{O}(\sqrt{T})$ when the objective function is convex.
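A minimal centralized sketch of noisy dual averaging, the primitive underlying such algorithms (the time-varying network and the paper's two communication schemes are omitted; scales are illustrative):

```python
import numpy as np

# Dual averaging keeps a running sum z of (noisy) subgradients and sets
# x_{t+1} = argmin_x <z, x> + (sqrt(t)/2) * ||x||^2 = -z / sqrt(t).
rng = np.random.default_rng(0)
z = np.zeros(2)
x = np.zeros(2)
target = np.array([1.0, -2.0])        # minimize f(x) = ||x - target||^2 / 2
for t in range(1, 1001):
    g = x - target                    # subgradient of the convex loss
    z += g + rng.normal(0.0, 0.1, 2)  # accumulate with privacy noise
    x = -z / np.sqrt(t)               # proximal step
print(x)                              # approaches target as T grows
```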
arXiv Detail & Related papers (2022-06-16T06:29:51Z) - Differentially Private Federated Learning via Inexact ADMM with Multiple Local Updates [0.0]
We develop a DP inexact alternating direction method of multipliers algorithm with multiple local updates for federated learning.
We show that our algorithm provides $\bar{\epsilon}$-DP for every iteration, where $\bar{\epsilon}$ is a privacy budget controlled by the user.
We demonstrate that our algorithm reduces the testing error by at most $31\%$ compared with the existing DP algorithm, while achieving the same level of data privacy.
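A toy consensus-ADMM round with noise on each agent's released update conveys the flavor (the paper's inexact subproblems and multiple local updates are not reproduced, and the noise scale is illustrative):

```python
import numpy as np

# Agents minimize sum_i (x - b_i)^2 / 2 subject to x_i = z (consensus).
rng = np.random.default_rng(0)
b = np.array([1.0, 2.0, 3.0, 6.0])
n, rho_admm = len(b), 1.0
x = np.zeros(n); z = 0.0; u = np.zeros(n)     # primal, consensus, duals
for _ in range(100):
    # closed-form local update for quadratics; DP noise is added to what
    # each agent releases to the server
    x = (b + rho_admm * (z - u)) / (1.0 + rho_admm)
    x_released = x + rng.normal(0.0, 0.05, n)
    z = np.mean(x_released + u)                # server averages releases
    u = u + x - z                              # local dual ascent
print(z, b.mean())                             # z hovers near the optimum 3
```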
arXiv Detail & Related papers (2022-02-18T19:58:47Z) - Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
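One simple way to approximate classical DP-based inference at scale is to keep only a subset of states at each step; in this sketch, deterministic top-k pruning stands in for the paper's randomized state selection:

```python
import numpy as np

# Pruned forward DP over a Markov chain: keep only the top-k states per step.
rng = np.random.default_rng(0)
S, T, k = 1000, 20, 50                      # states, steps, kept states
logA = np.log(rng.dirichlet(np.ones(S), size=S))  # transition log-probs
alpha = np.full(S, -np.log(S))              # uniform initial log-marginals
for _ in range(T):
    keep = np.argpartition(alpha, -k)[-k:]  # surviving states this step
    # propagate only the kept states' mass through the transitions
    scores = alpha[keep][:, None] + logA[keep, :]   # shape (k, S)
    alpha = np.logaddexp.reduce(scores, axis=0)     # approximate update
print(np.logaddexp.reduce(alpha))           # approximate log normalizer
```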
arXiv Detail & Related papers (2021-12-07T11:26:41Z) - Differentially Private Federated Learning via Inexact ADMM [0.0]
Differential privacy (DP) techniques can be applied to the federated learning model to protect data privacy against inference attacks.
We develop a DP inexact alternating direction method of multipliers algorithm that solves a sequence of trust-region subproblems.
Our algorithm reduces the testing error by at most $22\%$ compared with the existing DP algorithm, while achieving the same level of data privacy.
arXiv Detail & Related papers (2021-06-11T02:28:07Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
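A generic illustration of the user-level idea (not the paper's adaptive-query mechanism): collapse each user's samples into one bounded contribution, so the sensitivity scales with users rather than samples:

```python
import numpy as np

def user_level_private_mean(user_samples, B, eps, rng=None):
    """User-level DP mean: each user's samples are reduced to one per-user
    average clipped to [-B, B], so replacing an entire user changes the
    statistic by at most 2B/n; Laplace noise then gives eps-DP."""
    rng = rng or np.random.default_rng(0)
    per_user = np.array([np.clip(np.mean(s), -B, B) for s in user_samples])
    sensitivity = 2.0 * B / len(per_user)
    return per_user.mean() + rng.laplace(0.0, sensitivity / eps)

users = [np.random.default_rng(i).normal(0.5, 1.0, size=20) for i in range(100)]
print(user_level_private_mean(users, B=2.0, eps=1.0))
```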
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages of our methods over standard gradient methods on two popular deep learning tasks.
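A sketch of one private adaptive update in the spirit of this line of work: privatize the clipped gradient first, then feed it to an AdaGrad-style preconditioner (scales are illustrative, not calibrated):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.zeros(3); accum = np.zeros(3)
target = np.array([1.0, -1.0, 0.5])
for _ in range(500):
    g = x - target                                       # grad of 0.5||x-t||^2
    g = g * min(1.0, 1.0 / (np.linalg.norm(g) + 1e-12))  # clip to norm <= 1
    g_priv = g + rng.normal(0.0, 0.05, 3)                # Gaussian mechanism
    accum += g_priv ** 2                                 # AdaGrad accumulator
    x -= 0.5 * g_priv / (np.sqrt(accum) + 1e-8)          # adaptive step
print(x)                                                 # hovers near target
```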
arXiv Detail & Related papers (2020-06-24T06:01:24Z) - A One-Pass Private Sketch for Most Machine Learning Tasks [48.17461258268463]
Differential privacy (DP) is a compelling privacy definition that explains the privacy-utility tradeoff via formal, provable guarantees.
We propose a private sketch that supports a multitude of machine learning tasks including regression, classification, density estimation, and more.
Our sketch consists of randomized contingency tables that are indexed with locality-sensitive hashing and constructed with an efficient one-pass algorithm.
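A minimal one-pass sketch in the spirit described: hash each point with random hyperplanes (SimHash, a locality-sensitive hash), count bucket collisions, and add Laplace noise once at the end. Since each point touches one cell, the table's $\ell_1$-sensitivity is 1:

```python
import numpy as np

rng = np.random.default_rng(0)
d, bits, eps = 16, 8, 1.0
planes = rng.standard_normal((bits, d))          # LSH: random hyperplanes

def simhash(x):
    sign_bits = (planes @ x > 0).astype(int)
    return int("".join(map(str, sign_bits)), 2)  # bucket in [0, 2^bits)

counts = np.zeros(2 ** bits)
for x in rng.standard_normal((1000, d)):          # one pass over the data
    counts[simhash(x)] += 1
# L1 sensitivity 1 per point, so Laplace(1/eps) noise gives eps-DP
private_sketch = counts + rng.laplace(0.0, 1.0 / eps, size=counts.size)
print(private_sketch[:4])
```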
arXiv Detail & Related papers (2020-06-16T17:47:48Z)