Almost Tight Error Bounds on Differentially Private Continual Counting
- URL: http://arxiv.org/abs/2211.05006v2
- Date: Mon, 5 Feb 2024 13:00:51 GMT
- Title: Almost Tight Error Bounds on Differentially Private Continual Counting
- Authors: Monika Henzinger and Jalaj Upadhyay and Sarvagya Upadhyay
- Abstract summary: First large-scale deployment of private federated learning uses differentially private counting in the continual release model as a subroutine.
Standard mechanism for continual counting is the binary mechanism.
We present a novel mechanism and show that its mean squared error is both asymptotically optimal and a factor 10 smaller than the error of the binary mechanism.
- Score: 11.549143183739577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The first large-scale deployment of private federated learning uses
differentially private counting in the continual release model as a subroutine
(Google AI blog titled "Federated Learning with Formal Differential Privacy
Guarantees"). In this case, a concrete bound on the error is very relevant to
reduce the privacy parameter. The standard mechanism for continual counting is
the binary mechanism. We present a novel mechanism and show that its mean
squared error is both asymptotically optimal and a factor 10 smaller than the
error of the binary mechanism. We also show that the constants in our analysis
are almost tight by giving non-asymptotic lower and upper bounds that differ
only in the constants of lower-order terms. Our algorithm is a matrix mechanism
for the counting matrix and takes constant time per release. We also use our
explicit factorization of the counting matrix to give an upper bound on the
excess risk of the private learning algorithm of Denisov et al. (NeurIPS 2022).
Our lower bound for any continual counting mechanism is the first tight lower
bound on continual counting under approximate differential privacy. It is
achieved using a new lower bound on a certain factorization norm, denoted by
$\gamma_F(\cdot)$, in terms of the singular values of the matrix. In
particular, we show that for any complex matrix, $A \in \mathbb{C}^{m \times
n}$, \[ \gamma_F(A) \geq \frac{1}{\sqrt{m}}\|A\|_1, \] where $\|\cdot\|_1$
denotes the Schatten-1 norm.
We believe this technique will be useful in proving lower bounds for a larger
class of linear queries. To illustrate the power of this technique, we show the
first lower bound on the mean squared error for answering parity queries.
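To make the mechanism concrete, below is a minimal offline Python sketch of a matrix mechanism for continual counting. It is not the authors' constant-time streaming implementation; it uses the well-known square-root factorization of the lower-triangular all-ones counting matrix, $M = C \cdot C$, whose Toeplitz coefficients $c_k = \binom{2k}{k}/4^k$ are the Taylor coefficients of $1/\sqrt{1-x}$. The noise scale `sigma` is a placeholder that would be calibrated to the sensitivity of $C$ and the privacy parameters $(\epsilon, \delta)$.

```python
import numpy as np

def sqrt_counting_factor(n):
    """Lower-triangular Toeplitz square root C of the counting matrix.

    M = C @ C, where M is the lower-triangular all-ones matrix; the
    Toeplitz coefficients c_k = binom(2k, k) / 4^k are the Taylor
    coefficients of 1/sqrt(1 - x).
    """
    c = np.ones(n)
    for k in range(1, n):
        c[k] = c[k - 1] * (2 * k - 1) / (2 * k)
    C = np.zeros((n, n))
    for i in range(n):
        C[i, : i + 1] = c[i::-1]
    return C

def private_prefix_sums(x, sigma, seed=None):
    """Release all prefix sums of x via the factorization mechanism.

    Computes C @ (C @ x + z) with Gaussian noise z; sigma is a
    placeholder for the scale dictated by (eps, delta)-DP calibration.
    """
    rng = np.random.default_rng(seed)
    C = sqrt_counting_factor(len(x))
    z = rng.normal(0.0, sigma, size=len(x))
    return C @ (C @ x + z)

# Sanity check: with sigma = 0 the mechanism returns exact prefix sums.
x = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
assert np.allclose(private_prefix_sums(x, sigma=0.0), np.cumsum(x))
```

The binary mechanism corresponds to a different factorization of the same counting matrix; the paper's claim is that a mechanism of this square-root shape achieves a mean squared error roughly a factor 10 smaller.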
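The lower-bound side is also easy to evaluate numerically. This sketch computes the right-hand side of the stated inequality, $\|A\|_1/\sqrt{m}$, for the counting matrix; the size $n = 256$ is an arbitrary choice.

```python
import numpy as np

def gamma_F_lower_bound(A):
    # RHS of the paper's inequality: gamma_F(A) >= ||A||_1 / sqrt(m),
    # where ||A||_1 is the Schatten-1 norm (sum of singular values)
    # and m is the number of rows of A.
    m = A.shape[0]
    return np.linalg.svd(A, compute_uv=False).sum() / np.sqrt(m)

n = 256
M = np.tril(np.ones((n, n)))   # the counting matrix
print(gamma_F_lower_bound(M))  # lower bound on gamma_F(M)
```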
Related papers
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
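As an illustration of the object studied above (not the paper's bound itself), the following sketch forms a rank-$r$ truncated eigendecomposition of an RBF kernel matrix and measures the worst entrywise error; the kernel, bandwidth, rank, and data are arbitrary choices.

```python
import numpy as np

# Build an RBF kernel matrix on random 1-D points (arbitrary choices).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)

# Rank-r approximation from the truncated eigendecomposition.
r = 10
w, V = np.linalg.eigh(K)            # eigenvalues in ascending order
w_r, V_r = w[-r:], V[:, -r:]        # keep the r largest
K_r = (V_r * w_r) @ V_r.T

print(np.abs(K - K_r).max())        # entrywise (max-norm) error
```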
- Adversarial Contextual Bandits Go Kernelized [21.007410990554522]
We study a generalization of the problem of online learning in adversarial linear contextual bandits by incorporating loss functions that belong to a reproducing kernel Hilbert space.
We propose a new optimistically biased estimator for the loss functions and prove near-optimal regret guarantees.
arXiv Detail & Related papers (2023-10-02T19:59:39Z)
- A Unifying Framework for Differentially Private Sums under Continual Observation [20.432568247732206]
We study the problem of maintaining a differentially private decaying sum under continual observation.
We give a unifying framework and an efficient algorithm for this problem for any sufficiently smooth function.
arXiv Detail & Related papers (2023-07-18T05:04:11Z)
- Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_2$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z)
- Differentially-Private Hierarchical Clustering with Provable Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: any $\epsilon$-DP algorithm must exhibit $O(|V|^2/\epsilon)$-additive error on an input dataset $V$.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
arXiv Detail & Related papers (2023-01-31T19:14:30Z)
- Majorization-minimization for Sparse Nonnegative Matrix Factorization with the $\beta$-divergence [2.3787352248749376]
It is well known that the norm of the other factor (the dictionary matrix) needs to be controlled in order to avoid an ill-posed formulation.
Standard practice consists in constraining the columns of the dictionary to have unit norm, which leads to a nontrivial optimization problem.
We derive block-descent majorization-minimization algorithms that result in simple multiplicative updates for either $\ell_1$-regularization or the more "aggressive" log-regularization.
arXiv Detail & Related papers (2022-07-13T16:09:29Z)
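For flavor, here is a minimal generic sketch of multiplicative NMF updates for the $\beta = 1$ (KL) divergence with an $\ell_1$ penalty on the activations; this is a textbook-style instantiation, not the paper's block-descent scheme, and `lam`, the rank, and the dimensions are arbitrary.

```python
import numpy as np

def sparse_kl_nmf(V, r, lam=0.1, iters=200, seed=0):
    """Multiplicative updates for KL-NMF (beta = 1) with an l1 penalty
    on the activations H: minimize D_KL(V || W H) + lam * sum(H)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    eps = 1e-12  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ (V / (W @ H + eps))) / (W.T @ np.ones_like(V) + lam + eps)
        W *= ((V / (W @ H + eps)) @ H.T) / (np.ones_like(V) @ H.T + eps)
    return W, H
```

Note the connection to the summary above: with $W$ unconstrained, rescaling $W$ up and $H$ down shrinks the penalty without changing $WH$, which is exactly the ill-posedness that controlling the dictionary norm addresses.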
- Variance-Aware Sparse Linear Bandits [64.70681598741417]
The worst-case minimax regret for sparse linear bandits is $\widetilde{\Theta}(\sqrt{dT})$.
In the benign setting where there is no noise and the action set is the unit sphere, one can use divide-and-conquer to achieve an $\widetilde{\mathcal{O}}(1)$ regret.
We develop a general framework that converts any variance-aware linear bandit algorithm to a variance-aware algorithm for sparse linear bandits.
arXiv Detail & Related papers (2022-05-26T15:55:44Z)
- New Lower Bounds for Private Estimation and a Generalized Fingerprinting Lemma [10.176795938619417]
We prove new lower bounds for statistical estimation tasks under the constraint of $(\varepsilon, \delta)$-differential privacy.
We show that estimating the covariance in Frobenius norm requires $\Omega(d^2)$ samples, and in spectral norm requires $\Omega(d^{3/2})$ samples, both matching upper bounds up to logarithmic factors.
arXiv Detail & Related papers (2022-05-17T17:55:10Z)
- Constant matters: Fine-grained Complexity of Differentially Private Continual Observation [10.624505781812385]
We study fine-grained error bounds for differentially private algorithms for counting under continual observation.
We are the first to give concrete error bounds for various problems under continual observation.
arXiv Detail & Related papers (2022-02-23T11:50:20Z)
- Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z)
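As a generic illustration of value-function estimation in an RKHS (a textbook-style kernel TD(0) sketch, not the method or bounds from the paper), where the RBF kernel, step size, discount, and toy data are all arbitrary choices:

```python
import numpy as np

def rbf(s, t, bw=1.0):
    return np.exp(-0.5 * ((s - t) / bw) ** 2)

def kernel_td0(transitions, gamma=0.9, step=0.1, bw=1.0):
    """Kernel TD(0): represent V(s) = sum_i alpha_i * k(s_i, s), adding
    one kernel center per observed state, weighted by the TD error."""
    centers, alphas = [], []

    def V(s):
        return sum(a * rbf(c, s, bw) for c, a in zip(centers, alphas))

    for s, r, s_next in transitions:      # (state, reward, next state)
        td_error = r + gamma * V(s_next) - V(s)
        centers.append(s)
        alphas.append(step * td_error)    # move V(s) toward the TD target
    return centers, alphas

# Toy Markov reward process on the real line: drift right, reward = state.
rng = np.random.default_rng(1)
states = np.cumsum(rng.normal(0.1, 1.0, size=100))
transitions = [(states[i], states[i], states[i + 1]) for i in range(99)]
centers, alphas = kernel_td0(transitions)
```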
- Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$, matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{r}{\epsilon})$ samples, a number linear in the high dimension and nearly linear in the matrices' typically low rank.
arXiv Detail & Related papers (2020-09-30T19:10:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content above (including all information) and is not responsible for any consequences.