Near-Optimal Differentially Private Graph Algorithms via the Multidimensional AboveThreshold Mechanism
- URL: http://arxiv.org/abs/2508.02182v1
- Date: Mon, 04 Aug 2025 08:26:58 GMT
- Title: Near-Optimal Differentially Private Graph Algorithms via the Multidimensional AboveThreshold Mechanism
- Authors: Laxman Dhulipala, Monika Henzinger, George Z. Li, Quanquan C. Liu, A. R. Sricharan, Leqi Zhu
- Abstract summary: We formulate a novel generalized sparse vector technique which generalizes SVT to vectors with multiple dimensions. As an application, we solve a number of important graph problems with better bounds than previous work. We give a tight local edge DP algorithm for $k$-core decomposition with $O(\epsilon^{-1}\log n)$ additive error and no multiplicative error in $O(n)$ rounds.
- Score: 5.4971134662997025
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Many differentially private and classical non-private graph algorithms rely crucially on determining whether some property of each vertex meets a threshold. For example, for the $k$-core decomposition problem, the classic peeling algorithm iteratively removes a vertex if its induced degree falls below a threshold. The sparse vector technique (SVT) is generally used to transform non-private threshold queries into private ones with only a small additive loss in accuracy. However, a naive application of SVT in the graph setting leads to an amplification of the error by a factor of $n$ due to composition, as SVT is applied to every vertex. In this paper, we resolve this problem by formulating a novel generalized sparse vector technique, which we call the Multidimensional AboveThreshold (MAT) Mechanism, that generalizes SVT (applied to vectors with one dimension) to vectors with multiple dimensions. As an application, we solve a number of important graph problems with better bounds than previous work. We apply our MAT mechanism to obtain a set of improved bounds for a variety of problems including $k$-core decomposition, densest subgraph, low out-degree ordering, and vertex coloring. We give a tight local edge DP algorithm for $k$-core decomposition with $O(\epsilon^{-1}\log n)$ additive error and no multiplicative error in $O(n)$ rounds. We also give a new $(2+\eta)$-factor multiplicative, $O(\epsilon^{-1}\log n)$ additive error algorithm in $O(\log^2 n)$ rounds for any constant $\eta > 0$. Both of these results are asymptotically tight against our new lower bound of $\Omega(\log n)$ for any constant-factor approximation algorithm for $k$-core decomposition. Our new algorithms for $k$-core also directly lead to new algorithms for densest subgraph and low out-degree ordering. Our novel private defective coloring algorithm uses a number of colors proportional to the arboricity of the graph.
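To make the baseline concrete, below is a minimal sketch of the classic one-dimensional AboveThreshold mechanism from the SVT literature (with the standard Lap$(2/\epsilon)$ threshold noise and Lap$(4/\epsilon)$ query noise), together with the per-vertex budget split that a naive application to the peeling algorithm would force. This is a standard SVT sketch, not the paper's MAT mechanism; the names `laplace`, `above_threshold`, and `naive_per_vertex_budget` are illustrative assumptions.

```python
import random

def laplace(scale: float) -> float:
    # The difference of two i.i.d. Exp(1) samples is Laplace(0, 1); rescale it.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def above_threshold(queries, threshold, eps):
    """Classic one-dimensional AboveThreshold (SVT) over a stream of
    sensitivity-1 queries: report the index of the first query that noisily
    clears the threshold, then halt. The whole stream costs only eps,
    no matter how many queries it contains."""
    noisy_threshold = threshold + laplace(2.0 / eps)
    for i, q in enumerate(queries):
        if q + laplace(4.0 / eps) >= noisy_threshold:
            return i  # first "above" answer; the mechanism stops here
    return None  # no query cleared the threshold

# A naive private peeling loop runs one SVT instance per vertex, so basic
# composition over all n vertices leaves each instance only eps / n:
def naive_per_vertex_budget(eps: float, n: int) -> float:
    return eps / n  # the factor-n error amplification the MAT mechanism avoids

if __name__ == "__main__":
    degrees = [7, 5, 3, 2, 1]  # toy sensitivity-1 query stream along a peeling order
    print(above_threshold(degrees, threshold=4, eps=1.0))
```

A single AboveThreshold stream pays $\epsilon$ once regardless of its length; roughly speaking, the paper's contribution is extending this property to the $n$ interleaved per-vertex threshold sequences that the peeling algorithm generates.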
Related papers
- Robust random graph matching in Gaussian models via vector approximate message passing [0.0]
We propose an efficient random graph matching type algorithm that is robust under any adversarial perturbation of size $n^{1-o(1)}$. Our algorithm is the first efficient random graph matching type algorithm with this robustness guarantee.
arXiv Detail & Related papers (2024-12-21T03:15:38Z) - Inverting the Leverage Score Gradient: An Efficient Approximate Newton Method [10.742859956268655]
This paper aims to recover the intrinsic model parameters given the leverage scores gradient.
We specifically scrutinize the inversion of the leverage score gradient, denoted as $g(x)$.
arXiv Detail & Related papers (2024-08-21T01:39:42Z) - Near-Optimal Differentially Private k-Core Decomposition [2.859324824091086]
We show that an $\epsilon$-edge differentially private algorithm for $k$-core decomposition outputs the core numbers with no multiplicative error and $O(\log(n)/\epsilon)$ additive error.
This improves upon previous work by a factor of 2 in the multiplicative error, while giving near-optimal additive error.
arXiv Detail & Related papers (2023-12-12T20:09:07Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon > 0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Fast Maximum $k$-Plex Algorithms Parameterized by Small Degeneracy Gaps [30.06993761032835]
The maximum $k$-plex problem is important but computationally challenging in applications such as graph mining and community detection.
We present an exact algorithm parameterized by $g_k(G)$, whose worst-case running time is polynomial in the size of the input graph and exponential in $g_k(G)$.
We further extend our discussion to an even smaller parameter $cg_k(G)$, the gap between the community-degeneracy bound and the size of the maximum $k$-plex.
arXiv Detail & Related papers (2023-06-23T01:28:24Z) - Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $1+\varepsilon$ approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
arXiv Detail & Related papers (2022-04-13T22:18:47Z) - Fast Computation of Generalized Eigenvectors for Manifold Graph Embedding [38.902986549367434]
We leverage existing fast extreme eigenvector computation algorithms for speedy execution.
Our embedding is among the fastest in the literature, while producing the best clustering performance for manifold graphs.
arXiv Detail & Related papers (2021-12-15T03:45:39Z) - Gaussian Process Bandit Optimization with Few Batches [49.896920704012395]
We introduce a batch algorithm inspired by finite-arm bandit algorithms.
We show that it achieves the cumulative regret upper bound $O^\ast(\sqrt{T\gamma_T})$ using $O(\log\log T)$ batches within time horizon $T$.
In addition, we propose a modified version of our algorithm, and characterize how the regret is impacted by the number of batches.
arXiv Detail & Related papers (2021-10-15T00:54:04Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-Than-Single-Pass Optimization with Nearly Optimal Generalization [83.80460802169999]
We show that HSDMPG can attain an error of order $\mathcal{O}\big(1/\sqrt{tn}\big)$, which matches the order of the excess error of a learning model, and we prove that the same bound holds for generic loss functions.
arXiv Detail & Related papers (2020-09-18T02:18:44Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster Convergence [120.9336529957224]
In this paper, we study gradient-free minimax optimization in the nonconvex-strongly-concave setting.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.