Streaming Private Continual Counting via Binning
- URL: http://arxiv.org/abs/2412.07093v1
- Date: Tue, 10 Dec 2024 01:21:56 GMT
- Title: Streaming Private Continual Counting via Binning
- Authors: Joel Daniel Andersson, Rasmus Pagh
- Abstract summary: We present a simple approach to approximating factorization mechanisms in low space via $\textit{binning}$.
We show empirically that even with very low space usage we are able to closely match, and sometimes surpass, the performance of optimal factorization mechanisms.
- Score: 11.72102598708538
- License:
- Abstract: In differential privacy, $\textit{continual observation}$ refers to problems in which we wish to continuously release a function of a dataset that is revealed one element at a time. The challenge is to maintain a good approximation while keeping the combined output over all time steps differentially private. In the special case of $\textit{continual counting}$ we seek to approximate a sum of binary input elements. This problem has received considerable attention lately, in part due to its relevance in implementations of differentially private stochastic gradient descent. $\textit{Factorization mechanisms}$ are the leading approach to continual counting, but the best such mechanisms do not work well in $\textit{streaming}$ settings since they require space proportional to the size of the input. In this paper, we present a simple approach to approximating factorization mechanisms in low space via $\textit{binning}$, where adjacent matrix entries with similar values are changed to be identical in such a way that a matrix-vector product can be maintained in sublinear space. Our approach has provable sublinear space guarantees for a class of lower triangular matrices whose entries are monotonically decreasing away from the diagonal. We show empirically that even with very low space usage we are able to closely match, and sometimes surpass, the performance of asymptotically optimal factorization mechanisms. Recently, and independently of our work, Dvijotham et al. have also suggested an approach to implementing factorization mechanisms in a streaming setting. Their work differs from ours in several respects: It only addresses factorization into $\textit{Toeplitz}$ matrices, only considers $\textit{maximum}$ error, and uses a different technique based on rational function approximation that seems less versatile than our binning approach.
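To make the binning idea concrete, the following is a minimal Python sketch, not the paper's mechanism: the coefficient sequence, the class name `BinnedToeplitzProduct`, and the exponential-histogram style merging rule are illustrative assumptions. It shows only the space-saving idea of treating adjacent matrix entries with similar values as identical, so that a lower-triangular Toeplitz matrix-vector product over a stream can be tracked with $O(\log t)$ stored sums; the noise addition of the actual factorization mechanism is omitted.

```python
# Minimal illustrative sketch (assumed names, coefficients and merging rule; not the
# paper's exact algorithm). It shows how binning, i.e. treating adjacent matrix
# entries with similar values as identical, lets the matrix-vector product
# y_t = sum_{j <= t} c_{t-j} * x_j of a lower-triangular Toeplitz matrix with
# monotonically decreasing coefficients be tracked over a stream with only
# O(log t) stored sums. The noise addition of the factorization mechanism is omitted.

import math


def coefficient(lag: int) -> float:
    """Example coefficient decreasing away from the diagonal: c_d = 1 / sqrt(d + 1)."""
    return 1.0 / math.sqrt(lag + 1)


class BinnedToeplitzProduct:
    """Tracks an approximate streaming Toeplitz matrix-vector product in O(log t) space."""

    def __init__(self) -> None:
        self.blocks = []  # newest-first list of [block_size, sum_of_inputs_in_block]

    def update(self, x: float) -> float:
        """Consume one stream element and return the approximate running output."""
        self.blocks.insert(0, [1, x])
        # Exponential-histogram style merging: keep at most two blocks of each size;
        # when a third appears, fuse the two oldest. Block sizes then grow
        # geometrically with their lag, so entries far from the diagonal (whose
        # coefficients are nearly equal) end up sharing one representative value.
        size = 1
        while True:
            same = [i for i, b in enumerate(self.blocks) if b[0] == size]
            if len(same) < 3:
                break
            keep, drop = same[-2], same[-1]   # the two oldest blocks of this size
            self.blocks[keep][0] += self.blocks[drop][0]
            self.blocks[keep][1] += self.blocks[drop][1]
            del self.blocks[drop]
            size *= 2                         # a merge may cascade to the next size
        # Every input in a block shares the coefficient at the block's smallest lag;
        # since coefficients only decrease with lag, no input is under-weighted.
        y, lag = 0.0, 0
        for block_size, block_sum in self.blocks:
            y += coefficient(lag) * block_sum
            lag += block_size
        return y


if __name__ == "__main__":
    mech = BinnedToeplitzProduct()
    for _ in range(1000):                     # continual counting: a stream of ones
        approx = mech.update(1.0)
    exact = sum(coefficient(d) for d in range(1000))
    print(f"approx={approx:.1f}  exact={exact:.1f}  blocks kept={len(mech.blocks)}")
```

Because each block covers a span of lags no larger than its starting lag, all coefficients inside a block agree up to a constant factor for the decreasing sequences considered here, which is the intuition behind why such grouping can preserve accuracy while using sublinear space.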
Related papers
- Optimal Oblivious Subspace Embeddings with Near-optimal Sparsity [3.9657575162895196]
An oblivious subspace embedding is a random $m\times n$ matrix $\Pi$ that preserves the norms of all vectors in a given $d$-dimensional subspace within a $1\pm\epsilon$ factor.
We give an oblivious subspace embedding with the optimal dimension $m=\Theta(d/\epsilon^2)$ that has a near-optimal sparsity of $\tilde{O}(1/\epsilon)$ non-zero entries per column of $\Pi$.
arXiv Detail & Related papers (2024-11-13T16:58:51Z) - Perturb-and-Project: Differentially Private Similarities and Marginals [73.98880839337873]
We revisit the input perturbations framework for differential privacy where noise is added to the input $A \in \mathcal{S}$.
We first design novel efficient algorithms to privately release pair-wise cosine similarities.
We derive a novel algorithm to compute $k$-way marginal queries over $n$ features.
arXiv Detail & Related papers (2024-06-07T12:07:16Z) - How Sparse Attention Approximates Exact Attention? Your Attention is Naturally $n^C$-Sparse [9.552839922307587]
Sparse Attention is a technique that approximates standard attention computation with sub-quadratic complexity.
Variations of this technique, such as pruning KV cache, sparsity-based fast attention, and Sparse Transformer, have been extensively utilized for efficient Large Language Models (LLMs) deployment.
arXiv Detail & Related papers (2024-04-03T12:37:34Z) - A Novel and Optimal Spectral Method for Permutation Synchronization [8.517406772939292]
Permutation synchronization is an important problem in computer science that constitutes the key step of many computer vision tasks.
In this paper, we propose a novel and statistically optimal spectral algorithm.
arXiv Detail & Related papers (2023-03-21T17:43:26Z) - Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z) - Majorization-minimization for Sparse Nonnegative Matrix Factorization
with the $\beta$-divergence [2.3787352248749376]
It is well known that, when one factor is penalized to be sparse, the norm of the other factor (the dictionary matrix) needs to be controlled in order to avoid an ill-posed formulation.
Standard practice consists in constraining the columns of the dictionary to have unit norm, which leads to a nontrivial optimization problem.
We derive block-descent majorization-minimization algorithms that result in simple multiplicative updates for either $\ell_1$-regularization or the more "aggressive" log-regularization.
arXiv Detail & Related papers (2022-07-13T16:09:29Z) - Scalable Differentially Private Clustering via Hierarchically Separated
Trees [82.69664595378869]
We show that our method computes a solution with cost at most $O(d^{3/2}\log n)\cdot OPT + O(k d^2 \log^2 n / \epsilon^2)$, where $\epsilon$ is the privacy guarantee.
Although the worst-case guarantee is worse than that of state-of-the-art private clustering methods, the algorithm we propose is practical.
arXiv Detail & Related papers (2022-06-17T09:24:41Z) - Sparse Quadratic Optimisation over the Stiefel Manifold with Application
to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
arXiv Detail & Related papers (2021-09-30T19:17:35Z) - Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean
Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z) - On the Expressive Power of Self-Attention Matrices [41.72598286888797]
We show that a fixed self-attention module can approximate arbitrary sparse patterns depending on the input.
We propose an algorithm for finding adaptive inputs and fixed self-attention parameters in order to approximate a given matrix.
arXiv Detail & Related papers (2021-06-07T16:30:28Z) - Hybrid Trilinear and Bilinear Programming for Aligning Partially
Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformation, such as the RPM algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomial transformations to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z)