Spectral Graph Clustering under Differential Privacy: Balancing Privacy, Accuracy, and Efficiency
- URL: http://arxiv.org/abs/2510.07136v1
- Date: Wed, 08 Oct 2025 15:30:27 GMT
- Title: Spectral Graph Clustering under Differential Privacy: Balancing Privacy, Accuracy, and Efficiency
- Authors: Mohamed Seif, Antti Koskela, H. Vincent Poor, Andrea J. Goldsmith
- Abstract summary: We study the problem of spectral graph clustering under edge differential privacy (DP). Specifically, we develop three mechanisms: (i) graph perturbation via randomized edge flipping combined with adjacency matrix shuffling, which enforces edge privacy; (ii) private graph projection with additive Gaussian noise in a lower-dimensional space to reduce dimensionality and computational complexity; and (iii) a noisy power iteration method that distributes Gaussian noise across iterations to ensure edge DP while maintaining convergence.
- Score: 53.98433419539793
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of spectral graph clustering under edge differential privacy (DP). Specifically, we develop three mechanisms: (i) graph perturbation via randomized edge flipping combined with adjacency matrix shuffling, which enforces edge privacy while preserving key spectral properties of the graph. Importantly, shuffling considerably amplifies the guarantees: whereas flipping edges with a fixed probability alone provides only a constant epsilon edge DP guarantee as the number of nodes grows, the shuffled mechanism achieves (epsilon, delta) edge DP with parameters that tend to zero as the number of nodes increases; (ii) private graph projection with additive Gaussian noise in a lower-dimensional space to reduce dimensionality and computational complexity; and (iii) a noisy power iteration method that distributes Gaussian noise across iterations to ensure edge DP while maintaining convergence. Our analysis provides rigorous privacy guarantees and a precise characterization of the misclassification error rate. Experiments on synthetic and real-world networks validate our theoretical analysis and illustrate the practical privacy-utility trade-offs.
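To make the mechanisms concrete, below is a minimal NumPy sketch (not the paper's implementation) of the first and third ideas: randomized-response edge flipping combined with node shuffling, and a noisy power iteration on the perturbed adjacency matrix. The flip rate p = 1/(e^epsilon + 1), the noise scale sigma, and all function names are illustrative assumptions rather than the paper's exact calibration.

```python
import numpy as np

def flip_edges(adj, eps, rng=None):
    # Randomized response on the adjacency matrix: flip each upper-triangular
    # entry independently with probability p = 1 / (exp(eps) + 1), the standard
    # rate for eps edge DP per entry (illustrative calibration, not the paper's).
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]
    p = 1.0 / (np.exp(eps) + 1.0)
    flips = np.triu(rng.random((n, n)) < p, k=1)
    flips = flips | flips.T                      # keep the matrix symmetric
    noisy = np.where(flips, 1 - adj, adj)
    np.fill_diagonal(noisy, 0)                   # no self-loops
    return noisy

def shuffle_nodes(adj, rng=None):
    # Uniformly random node relabelling (simultaneous row/column permutation);
    # the paper combines this shuffling with flipping to amplify privacy.
    rng = np.random.default_rng() if rng is None else rng
    perm = rng.permutation(adj.shape[0])
    return adj[np.ix_(perm, perm)]

def noisy_power_iteration(adj, k, num_iters, sigma, rng=None):
    # Power iteration for the top-k eigenspace: add fresh Gaussian noise to
    # every matrix product and re-orthonormalize via QR. The sigma that yields
    # a given edge DP budget comes from the paper's analysis and is not
    # reproduced here.
    rng = np.random.default_rng() if rng is None else rng
    n = adj.shape[0]
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(num_iters):
        Y = adj @ Q + sigma * rng.standard_normal((n, k))   # noisy product
        Q, _ = np.linalg.qr(Y)                               # re-orthonormalize
    return Q
```

A toy usage on a two-block stochastic block model (parameters chosen only for illustration): flip and shuffle the graph, run the noisy iteration, and cluster the rows of the returned basis.

```python
rng = np.random.default_rng(0)
n = 200
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], 0.3, 0.05)
A = np.triu((rng.random((n, n)) < P).astype(float), k=1)
A = A + A.T                                      # symmetric adjacency matrix

A_priv = shuffle_nodes(flip_edges(A, eps=1.0, rng=rng), rng=rng)
U = noisy_power_iteration(A_priv, k=2, num_iters=20, sigma=0.5, rng=rng)
# Cluster the rows of U (e.g. sklearn.cluster.KMeans(2).fit_predict(U))
# to recover the two communities up to label permutation.
```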
Related papers
- Point Cloud Denoising With Fine-Granularity Dynamic Graph Convolutional Networks [58.050130177241186]
Noise perturbations often corrupt 3-D point clouds, hindering downstream tasks such as surface reconstruction, rendering, and further processing.
This paper introduces fine-granularity dynamic graph convolutional networks, called GDGCN, a novel approach to denoising 3-D point clouds.
arXiv Detail & Related papers (2024-11-21T14:19:32Z) - Privacy of SGD under Gaussian or Heavy-Tailed Noise: Guarantees without Gradient Clipping [15.348717323408652]
We bridge the gap between theoretical and empirical benefits of heavy-tailed noise for optimization and privacy preservation. We show that heavy-tailed noising schemes can be a viable alternative to their light-tailed counterparts.
arXiv Detail & Related papers (2024-03-04T13:53:41Z) - Shifted Interpolation for Differential Privacy [6.1836947007564085]
Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning.
This paper establishes the "privacy amplification by iteration" phenomenon in the unifying framework of $f$-differential privacy.
Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization.
arXiv Detail & Related papers (2024-03-01T04:50:04Z) - On the Privacy of Selection Mechanisms with Gaussian Noise [44.577599546904736]
We revisit the analysis of Report Noisy Max and Above Threshold with Gaussian noise.
We find that it is possible to provide pure ex-ante DP bounds for Report Noisy Max and pure ex-post DP bounds for Above Threshold.
arXiv Detail & Related papers (2024-02-09T02:11:25Z) - Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach [62.000948039914135]
Using Differentially Private Gradient Descent with Gradient Clipping (DPSGD-GC) to ensure Differential Privacy (DP) comes at the cost of model performance degradation.
We propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC.
We establish an algorithm-specific DP analysis for our proposed algorithm, providing privacy guarantees based on Rényi DP.
arXiv Detail & Related papers (2023-11-24T17:56:44Z) - Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism Can Beat Gaussian in High Dimensions [9.20186865054847]
In a differentially private additive noise mechanism, independent and identically distributed (i.i.d.) noise samples are added to each coordinate of the response. We study the independent but non-identically distributed (i.n.i.d.) Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy.
arXiv Detail & Related papers (2023-02-07T14:54:20Z) - Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy [8.12606646175019]
We propose a novel LDP algorithm called the DPRR (Degree-Preserving Randomized Response) to provide LDP for edges in GNNs.
Our DPRR preserves each user's degree, and hence the graph structure, while providing edge LDP.
We focus on graph classification as a task of GNNs and evaluate the DPRR using three social graph datasets.
arXiv Detail & Related papers (2022-02-21T13:35:03Z) - Partial recovery and weak consistency in the non-uniform hypergraph Stochastic Block Model [6.681901523019242]
We consider the community detection problem in random hypergraphs under the non-uniform hypergraph stochastic block model (HSBM).
We provide a spectral algorithm that outputs a partition with at least a $\gamma$ fraction of the vertices classified correctly, where $\gamma$ depends on the signal-to-noise ratio (SNR) of the model.
The theoretical analysis of our algorithm relies on the concentration and regularization of the adjacency matrix for non-uniform random hypergraphs, which can be of independent interest.
arXiv Detail & Related papers (2021-12-22T05:38:33Z) - Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)