Constant matters: Fine-grained Complexity of Differentially Private
Continual Observation
- URL: http://arxiv.org/abs/2202.11205v6
- Date: Mon, 5 Feb 2024 12:37:26 GMT
- Title: Constant matters: Fine-grained Complexity of Differentially Private
Continual Observation
- Authors: Hendrik Fichtenberger, Monika Henzinger and Jalaj Upadhyay
- Abstract summary: We study fine-grained error bounds for differentially private algorithms for counting under continual observation.
We are the first to give concrete error bounds for various problems under continual observation.
- Score: 10.624505781812385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study fine-grained error bounds for differentially private algorithms for
counting under continual observation. Our main insight is that the matrix
mechanism, when instantiated with lower-triangular matrices, can be used in the
continual observation model. More specifically, we give an explicit factorization of the
counting matrix $M_\mathsf{count}$ and upper bound the error explicitly. We
also give a fine-grained analysis, specifying the exact constant in the upper
bound. Our analysis is based on upper and lower bounds of the {\em completely
bounded norm} (cb-norm) of $M_\mathsf{count}$. Along the way, we improve the
best-known bound on the cb-norm of $M_\mathsf{count}$, due to Mathias (SIAM Journal
on Matrix Analysis and Applications, 1993) and unimproved for 28 years, for a large
range of the dimension of $M_\mathsf{count}$. Furthermore, we are the first to give
concrete error bounds for various problems under continual observation such as
binary counting, maintaining a histogram, releasing an approximately
cut-preserving synthetic graph, many graph-based statistics, and substring and
episode counting. Finally, we note that our result can be used to get a
fine-grained error bound for non-interactive local learning and the first
lower bounds on the additive error for
$(\epsilon,\delta)$-differentially private counting under continual
observation. Subsequent to this work, Henzinger et al. (SODA 2023) showed that
our factorization also achieves fine-grained mean-squared error.
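
To make the mechanism concrete, below is a minimal sketch (not the paper's exact construction or constants) of continual counting via a lower-triangular factorization of $M_\mathsf{count}$: both factors are taken to be the lower-triangular square root of $M_\mathsf{count}$, whose entries are the Taylor coefficients of $(1-x)^{-1/2}$, and the noise is calibrated with the standard Gaussian mechanism for illustration only; the paper's fine-grained constants come from its cb-norm analysis.

```python
import numpy as np
from math import comb

def sqrt_counting_factor(n):
    """Lower-triangular L with L @ L equal to the n x n lower-triangular
    all-ones counting matrix M_count; its entries are the Taylor
    coefficients of (1 - x)^(-1/2), c_k = binom(2k, k) / 4^k."""
    c = np.array([comb(2 * k, k) / 4.0 ** k for k in range(n)])
    L = np.zeros((n, n))
    for i in range(n):
        L[i, : i + 1] = c[: i + 1][::-1]   # L[i, j] = c_{i-j} for j <= i
    return L

def continual_count(x, eps=1.0, delta=1e-6):
    """Matrix mechanism for continual counting: release M_count @ x + L @ z,
    where M_count = L @ R and z is Gaussian noise calibrated to the max
    column norm of R. Since L is lower triangular, the t-th output depends
    only on inputs and noise up to time t, so it can be produced online."""
    n = len(x)
    L = sqrt_counting_factor(n)
    R = L                                           # here M_count = L @ R with R = L
    sensitivity = np.linalg.norm(R, axis=0).max()   # max column l2-norm of R
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    z = np.random.normal(0.0, sigma, size=n)
    return L @ (R @ x + z)                          # equals M_count @ x + L @ z

# toy usage on a 0/1 stream of length 100
x = np.random.randint(0, 2, size=100)
noisy_prefix_sums = continual_count(x)
```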
Related papers
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
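For reference, a minimal sketch of the object studied in the entry above: the rank-$r$ truncated eigendecomposition of an RBF kernel matrix and its entrywise (max-absolute) error; the kernel choice, bandwidth, and rank below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf_kernel(X, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 h^2))."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def truncated_eig_approx(K, rank):
    """Low-rank approximation of the symmetric matrix K from its top `rank` eigenpairs."""
    vals, vecs = np.linalg.eigh(K)            # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:rank]       # indices of the largest eigenvalues
    return (vecs[:, top] * vals[top]) @ vecs[:, top].T

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
K = rbf_kernel(X)
K_r = truncated_eig_approx(K, rank=20)
print("entrywise (max-abs) error:", np.max(np.abs(K - K_r)))
print("spectral-norm error:      ", np.linalg.norm(K - K_r, 2))
```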
arXiv Detail & Related papers (2024-05-23T12:26:25Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture that set it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that, in the worst case, an exponential dependence on the number of attention heads $m$ is unavoidable.
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - A Unifying Framework for Differentially Private Sums under Continual
Observation [20.432568247732206]
We study the problem of maintaining a differentially private decaying sum under continual observation.
We give a unifying framework and an efficient algorithm for this problem for any sufficiently smooth function.
arXiv Detail & Related papers (2023-07-18T05:04:11Z) - Differential Privacy for Clustering Under Continual Observation [5.220940151628734]
We consider the problem of privately clustering a dataset in $\mathbb{R}^d$ that undergoes both insertion and deletion of points.
We give an $\varepsilon$-differentially private clustering mechanism for the $k$-means objective under continual observation.
arXiv Detail & Related papers (2023-07-07T07:23:56Z) - Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_2$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z) - Almost Tight Error Bounds on Differentially Private Continual Counting [11.549143183739577]
The first large-scale deployment of private federated learning uses differentially private counting in the continual release model as a subroutine.
The standard mechanism for continual counting is the binary mechanism.
We present a novel mechanism and show that its mean squared error is both asymptotically optimal and a factor 10 smaller than the error of the binary mechanism.
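For context, a minimal sketch of the classical binary (tree) mechanism referenced above, not the paper's new mechanism: dyadic partial sums are kept per tree level, each prefix sum is assembled from at most one noisy partial sum per level, and the Laplace noise scale below splits the budget across levels by simple composition, an illustrative (textbook) calibration.

```python
import numpy as np

def lowest_set_bit(t):
    """0-based index of the least significant set bit of the positive integer t."""
    return (t & -t).bit_length() - 1

def binary_mechanism(x, eps=1.0):
    """Classical binary (tree) mechanism for continual counting of a 0/1 stream."""
    n = len(x)
    levels = max(1, int(np.ceil(np.log2(n + 1))))
    scale = levels / eps            # split the privacy budget over the levels
    partial = np.zeros(levels)      # exact dyadic partial sums per level
    noisy = np.zeros(levels)        # their noisy counterparts
    out = []
    for t in range(1, n + 1):
        i = lowest_set_bit(t)                       # level whose interval closes at time t
        partial[i] = partial[:i].sum() + x[t - 1]   # merge lower levels into level i
        partial[:i] = 0.0
        noisy[:i] = 0.0
        noisy[i] = partial[i] + np.random.laplace(0.0, scale)
        # the set bits of t index the dyadic intervals covering [1, t]
        out.append(sum(noisy[j] for j in range(levels) if (t >> j) & 1))
    return np.array(out)

# toy usage
stream = np.random.randint(0, 2, size=64)
noisy_counts = binary_mechanism(stream, eps=1.0)
```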
arXiv Detail & Related papers (2022-11-09T16:35:42Z) - Finite-time High-probability Bounds for Polyak-Ruppert Averaged Iterates
of Linear Stochastic Approximation [22.51165277694864]
This paper provides a finite-time analysis of linear stochastic approximation (LSA) algorithms with fixed step size.
LSA is used to compute approximate solutions of a $d$-dimensional linear system.
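For context, a minimal sketch of fixed-stepsize LSA with Polyak-Ruppert averaging on a synthetic linear system; the noise model, stepsize, and dimensions below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def lsa_polyak_ruppert(A_bar, b_bar, n_steps=20_000, step=0.05, noise=0.1, seed=0):
    """Fixed-stepsize linear stochastic approximation (LSA) for the system
    A_bar @ theta = b_bar, observed only through noisy samples (A_n, b_n),
    with Polyak-Ruppert averaging of the iterates."""
    rng = np.random.default_rng(seed)
    d = b_bar.shape[0]
    theta = np.zeros(d)
    theta_avg = np.zeros(d)
    for n in range(1, n_steps + 1):
        A_n = A_bar + noise * rng.normal(size=(d, d))  # noisy observation of A_bar
        b_n = b_bar + noise * rng.normal(size=d)       # noisy observation of b_bar
        theta = theta + step * (b_n - A_n @ theta)     # LSA step with fixed stepsize
        theta_avg += (theta - theta_avg) / n           # running Polyak-Ruppert average
    return theta_avg

# toy usage: a well-conditioned 5-dimensional system with known solution
d = 5
A_bar = np.eye(d) + 0.1 * np.ones((d, d))
theta_star = np.arange(1.0, d + 1)
b_bar = A_bar @ theta_star
print(lsa_polyak_ruppert(A_bar, b_bar))   # close to theta_star = [1, 2, 3, 4, 5]
```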
arXiv Detail & Related papers (2022-07-10T14:36:04Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process (MRP).
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Tight High Probability Bounds for Linear Stochastic Approximation with
Fixed Stepsize [41.38162218102825]
This paper provides a non-asymptotic analysis of linear stochastic approximation (LSA) algorithms with fixed stepsize.
We derive high probability bounds on the performance of LSA under weaker conditions on the sequence $\{(\mathbf{A}_n, \mathbf{b}_n) : n \in \mathbb{N}^*\}$.
We show that our conclusions cannot be improved without additional assumptions on the sequence $\{\mathbf{A}_n : n \in \mathbb{N}^*\}$.
arXiv Detail & Related papers (2021-06-02T16:10:37Z) - High-Dimensional Gaussian Process Inference with Derivatives [90.8033626920884]
We show that in the low-data regime $N < D$, the Gram matrix can be decomposed in a manner that reduces the cost of inference to $\mathcal{O}(N^2D + (N^2)^3)$.
We demonstrate this potential in a variety of tasks relevant for machine learning, such as optimization and Hamiltonian Monte Carlo with predictive gradients.
arXiv Detail & Related papers (2021-02-15T13:24:41Z) - Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k \times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{r}{\epsilon})$ samples, a number linear in the high dimension and nearly linear in the matrices' typically low rank.
arXiv Detail & Related papers (2020-09-30T19:10:32Z)