Differentially Private Covariance Revisited
- URL: http://arxiv.org/abs/2205.14324v2
- Date: Tue, 31 May 2022 14:06:12 GMT
- Title: Differentially Private Covariance Revisited
- Authors: Wei Dong, Yuting Liang, Ke Yi
- Abstract summary: We present three new error bounds for covariance estimation under differential privacy.
The corresponding algorithms are simple and efficient.
Experimental results show that they offer significant improvements over prior work.
- Score: 16.743341747437054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present three new error bounds, in terms of the Frobenius
norm, for covariance estimation under differential privacy: (1) a worst-case
bound of $\tilde{O}(d^{1/4}/\sqrt{n})$, which improves on the standard Gaussian
mechanism's $\tilde{O}(d/n)$ in the regime $d>\widetilde{\Omega}(n^{2/3})$; (2)
a trace-sensitive bound that improves the state of the art by a
$\sqrt{d}$-factor; and (3) a tail-sensitive bound that gives a more
instance-specific result. The corresponding algorithms are also simple and
efficient. Experimental results show that they offer significant improvements
over prior work.
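For context, here is a minimal sketch of the standard Gaussian-mechanism baseline that bound (1) improves on, assuming each data row is scaled to unit $\ell_2$ norm; the noise calibration is the textbook $(\varepsilon,\delta)$ recipe, not the paper's new algorithms.

```python
import numpy as np

def gaussian_mechanism_covariance(X, eps, delta):
    """Textbook Gaussian-mechanism baseline for private covariance.

    Assumes each row satisfies ||x_i||_2 <= 1, so replacing one row changes
    C = (1/n) X^T X by at most 2/n in Frobenius norm.
    """
    n, d = X.shape
    C = (X.T @ X) / n
    sigma = (2.0 / n) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    Z = np.random.normal(0.0, sigma, size=(d, d))
    return C + (Z + Z.T) / np.sqrt(2.0)  # symmetrized noise; Frobenius error scales as sigma * d

# toy usage: the resulting Frobenius error is the O~(d/n) baseline quoted above
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 50))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # enforce ||x_i|| <= 1
C_priv = gaussian_mechanism_covariance(X, eps=1.0, delta=1e-6)
```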
Related papers
- Perturb-and-Project: Differentially Private Similarities and Marginals [73.98880839337873]
We revisit the input perturbations framework for differential privacy, where noise is added to the input $A \in \mathcal{S}$.
We first design novel efficient algorithms to privately release pair-wise cosine similarities.
We derive a novel algorithm to compute $k$-way marginal queries over $n$ features.
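A hedged sketch of the input-perturbation recipe above for pairwise cosine similarities: noise is added to the input once, and every similarity is read off the noisy copy by post-processing. The calibration assumes unit-norm rows and the plain Gaussian mechanism; the paper's exact algorithms and analysis are not reproduced.

```python
import numpy as np

def private_cosines(A, eps, delta, seed=0):
    """Perturb the input once, then answer all pairwise cosine similarities
    from the noisy copy (post-processing preserves privacy).

    Sketch only: assumes rows are unit-norm, so swapping one row moves A by
    at most 2 in Frobenius norm; sigma follows the plain Gaussian mechanism.
    """
    rng = np.random.default_rng(seed)
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    sigma = 2.0 * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noisy = A + rng.normal(0.0, sigma, size=A.shape)
    norms = np.linalg.norm(noisy, axis=1)
    return (noisy @ noisy.T) / np.outer(norms, norms)
```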
arXiv Detail & Related papers (2024-06-07T12:07:16Z)
- Accelerated Variance-Reduced Forward-Reflected Methods for Root-Finding Problems [8.0153031008486]
We propose a novel class of Nesterov's accelerated forward-reflected-based methods with variance reduction to solve root-finding problems.
Our algorithm is single-loop and leverages a new family of unbiased variance-reduced estimators specifically designed for root-finding problems.
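As a rough illustration of this recipe (the paper's accelerated estimator and step-size rules are not reproduced), here is a toy single-loop forward-reflected iteration with a loopless SVRG-style unbiased estimator; the operator split `F_i`, the snapshot probability `p`, and the step size are all illustrative choices.

```python
import numpy as np

def vr_forward_reflected(F_i, n, x0, eta=0.1, p=0.05, iters=2000, seed=0):
    """Toy single-loop variance-reduced forward-reflected method for F(x) = 0,
    where F(x) = (1/n) sum_i F_i(i, x) is a monotone operator.

    The estimator g = F_i(x) - F_i(w) + F(w) is unbiased; snapshots refresh
    with probability p, keeping the loop single-level.
    """
    rng = np.random.default_rng(seed)
    full = lambda z: sum(F_i(i, z) for i in range(n)) / n
    x = np.asarray(x0, dtype=float)
    w, Fw = x.copy(), full(x)          # snapshot point and full operator value
    g_prev = Fw.copy()
    for _ in range(iters):
        i = rng.integers(n)
        g = F_i(i, x) - F_i(i, w) + Fw       # unbiased estimate of F(x)
        x = x - eta * (2.0 * g - g_prev)     # forward-reflected step
        g_prev = g
        if rng.random() < p:                 # loopless snapshot refresh
            w, Fw = x.copy(), full(x)
    return x

# usage: F(x) = x - mean(c) has root mean(c)
c = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 0.0]])
print(vr_forward_reflected(lambda i, z: z - c[i], n=3, x0=np.zeros(2)))
```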
arXiv Detail & Related papers (2024-06-04T15:23:29Z)
- Efficient Frameworks for Generalized Low-Rank Matrix Bandit Problems [61.85150061213987]
We study the generalized low-rank matrix bandit problem, proposed by Lu et al. (2021) under the Generalized Linear Model (GLM) framework.
To overcome the computational infeasibility and theoretical limitations of existing algorithms, we first propose the G-ESTT framework.
We show that G-ESTT can achieve the $\tilde{O}(\sqrt{(d_1+d_2)^{3/2}Mr^{3/2}T})$ bound of regret, while G-ESTS can achieve the $\tilde{O}(\dots)$ bound.
arXiv Detail & Related papers (2024-01-14T14:14:19Z)
- Near-Optimal Mean Estimation with Unknown, Heteroskedastic Variances [15.990720051907864]
The Subset-of-Signals model serves as a benchmark for heteroskedastic mean estimation.
Our algorithm resolves this open question up to logarithmic factors.
Even for $d=2$, our techniques enable rates comparable to knowing the variance of each sample.
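A tiny numeric illustration (with made-up parameters) of the oracle this benchmark compares against: with a few precise samples hidden among many noisy ones, the plain mean is poor, while the inverse-variance oracle, which the paper's estimator aims to match without knowing the $\sigma_i$, is sharp.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu = 10_000, 2.5
sigmas = np.where(rng.random(n) < 0.1, 0.1, 50.0)  # 10% precise, 90% very noisy
x = rng.normal(mu, sigmas)

naive = x.mean()                                 # swamped by the high-variance samples
oracle = np.average(x, weights=1.0 / sigmas**2)  # achievable if the sigmas were known
print(f"naive={naive:.3f}  oracle={oracle:.3f}  true={mu}")
```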
arXiv Detail & Related papers (2023-12-05T01:13:10Z)
- Better and Simpler Lower Bounds for Differentially Private Statistical Estimation [7.693388437377614]
We prove that for any $\alpha \le O(1)$, estimating the covariance of a Gaussian up to spectral error $\alpha$ requires $\tilde{\Omega}\left(\frac{d^{3/2}}{\alpha \varepsilon} + \frac{d}{\alpha^2}\right)$ samples.
Next, we prove that estimating the mean of a heavy-tailed distribution with bounded $k$th moments requires $\tilde{\Omega}\left(\frac{d}{\alpha^{k/(k-1)} \varepsilon} + \dots\right)$ samples.
arXiv Detail & Related papers (2023-10-10T04:02:43Z)
- Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis [36.646876613637325]
We study finite-sum distributed optimization problems involving a master node and $n-1$ local nodes.
We propose two new algorithms, SVRS and AccSVRS, motivated by previous works.
arXiv Detail & Related papers (2023-04-15T08:18:47Z)
- Variance-aware robust reinforcement learning with linear function approximation with heavy-tailed rewards [6.932056534450556]
We present two algorithms, AdaOFUL and VARA, for online sequential decision-making in the presence of heavy-tailed rewards.
AdaOFUL achieves a state-of-the-art regret bound of $\widetilde{\mathcal{O}}\big(\dots\big)$.
VARA achieves a tighter variance-aware regret bound of $\widetilde{\mathcal{O}}(d\sqrt{H\mathcal{G}^* K})$.
arXiv Detail & Related papers (2023-03-09T22:16:28Z)
- Near-Optimal Non-Convex Stochastic Optimization under Generalized Smoothness [21.865728815935665]
Two recent works established the $O(\epsilon^{-3})$ sample complexity to obtain an $O(\epsilon)$-stationary point.
However, both require a large batch size on the order of $\mathrm{poly}(\epsilon^{-1})$, which is not only computationally burdensome but also unsuitable for streaming applications.
In this work, we solve the prior two problems simultaneously by revisiting a simple variant of the STORM algorithm.
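A minimal sketch of the STORM recursion being revisited (with fixed $\eta$ and $a$ for simplicity; the variant studied above adapts them under generalized smoothness). The key point is the $O(1)$ batch: each step evaluates one fresh sample at both the current and previous iterate.

```python
import numpy as np

def storm(stoch_grad, x0, eta=0.05, a=0.1, iters=500, seed=0):
    """STORM-style recursive momentum: d_t = g(x_t) + (1-a)(d_{t-1} - g(x_{t-1})),
    with the same sample evaluated at x_t and x_{t-1} (shared rng seed below)."""
    master = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = stoch_grad(x, np.random.default_rng(master.integers(2**32)))
    x_prev, x = x, x - eta * d
    for _ in range(iters - 1):
        s = master.integers(2**32)
        g_new = stoch_grad(x, np.random.default_rng(s))
        g_old = stoch_grad(x_prev, np.random.default_rng(s))  # same sample, O(1) batch
        d = g_new + (1.0 - a) * (d - g_old)                   # STORM estimator
        x_prev, x = x, x - eta * d
    return x

# toy: noisy gradients of f(x) = ||x||^2 / 2; iterates shrink toward 0
print(storm(lambda z, rng: z + rng.normal(size=z.shape), np.ones(5)))
```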
arXiv Detail & Related papers (2023-02-13T00:22:28Z)
- DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning [58.79085525115987]
In the previous work, the best known utility bound is $\widetilde{O}(d^{2/3}/(n\varepsilon_{\mathrm{DP}})^{4/3})$.
We propose a new differential private framework called DIFF2 (DIFFerential private via DIFFs) that constructs a differential private global gradient estimator from gradient differences.
DIFF2 with a global descent subroutine achieves the utility of $\widetilde{O}(d^{2/3}/(n\varepsilon_{\mathrm{DP}})^{4/3})$.
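A hedged sketch of the gradient-difference idea the name refers to: privatize the first gradient once, then privatize the (typically much smaller) differences $g_t - g_{t-1}$ and accumulate. The clipping thresholds and noise multiplier below are placeholders rather than the paper's calibrated values, and the distributed communication structure is omitted.

```python
import numpy as np

def dp_grad_sequence(grads, noise_mult, c_first, c_diff, seed=0):
    """Privatize g_1, then accumulate privatized differences g_t - g_{t-1}.
    Differences have smaller norm near convergence, so they tolerate a
    smaller clipping threshold c_diff and hence less injected noise."""
    rng = np.random.default_rng(seed)
    clip = lambda v, c: v * min(1.0, c / (np.linalg.norm(v) + 1e-12))
    v = clip(grads[0], c_first) + rng.normal(0.0, noise_mult * c_first, grads[0].shape)
    out = [v]
    for g_prev, g in zip(grads, grads[1:]):
        v = v + clip(g - g_prev, c_diff) + rng.normal(0.0, noise_mult * c_diff, g.shape)
        out.append(v)
    return out
```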
arXiv Detail & Related papers (2023-02-08T05:19:01Z)
- Õptimal Differentially Private Learning of Thresholds and Quasi-Concave Optimization [23.547605262139957]
The problem of learning threshold functions is a fundamental one in machine learning.
We provide a nearly tight upper bound, improving on the $\tilde{O}((\log^* |X|)^{1.5})$ bound by Kaplan et al.
We also provide matching upper and lower bounds of $\tilde{\Theta}(2^{\log^*|X|})$ for the additive error of private quasi-concave optimization (a related and more general problem).
arXiv Detail & Related papers (2022-11-11T18:16:46Z)
- Variance-Aware Sparse Linear Bandits [64.70681598741417]
Worst-case minimax regret for sparse linear bandits is $\widetilde{\Theta}\left(\sqrt{dT}\right)$.
In the benign setting where there is no noise and the action set is the unit sphere, one can use divide-and-conquer to achieve an $\widetilde{\mathcal{O}}(1)$ regret.
We develop a general framework that converts any variance-aware linear bandit algorithm to a variance-aware algorithm for sparse linear bandits.
arXiv Detail & Related papers (2022-05-26T15:55:44Z)
- Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
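The $\log(d)$ dependence comes from working in the geometry natural to $\ell_1$. As a hedged illustration of that intuition (not the paper's algorithm), here is mirror descent with an entropic regularizer over the simplex, run on Gaussian-perturbed gradients; the noise scale is a placeholder, not a calibrated DP parameter.

```python
import numpy as np

def noisy_exponentiated_gradient(grad, T, d, eta, sigma, seed=0):
    """Mirror descent with the entropic mirror map over the probability simplex,
    applied to perturbed gradients; the entropic geometry pays only a log(d)
    factor for the perturbation."""
    rng = np.random.default_rng(seed)
    w = np.full(d, 1.0 / d)                      # start at the simplex center
    for _ in range(T):
        g = grad(w) + rng.normal(0.0, sigma, d)  # perturbed gradient
        w = w * np.exp(-eta * g)                 # multiplicative update
        w = w / w.sum()                          # renormalize onto the simplex
    return w

# toy: minimize <c, w> over the simplex; mass concentrates on the smallest c_i
c = np.array([3.0, 1.0, 2.0, 5.0])
print(noisy_exponentiated_gradient(lambda w: c, T=500, d=4, eta=0.1, sigma=0.5))
```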
arXiv Detail & Related papers (2021-03-02T06:53:44Z)
- Revisiting Modified Greedy Algorithm for Monotone Submodular Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We derive a data-dependent upper bound on the optimum.
It can also be used to significantly improve the efficiency of algorithms such as branch and bound.
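The modified greedy in question pairs density greedy (marginal gain per unit cost) with the best single feasible element; here is a generic sketch of that algorithm, without the paper's new $0.305$ analysis or its data-dependent upper bound.

```python
def modified_greedy(elements, value, cost, budget):
    """Density greedy plus best single element; value() must be a monotone
    submodular set function and cost a per-element dict."""
    S, spent = set(), 0.0
    remaining = set(elements)
    while remaining:
        best, best_density = None, 0.0
        for e in remaining:
            if spent + cost[e] <= budget:
                gain = value(S | {e}) - value(S)
                if gain / cost[e] > best_density:
                    best, best_density = e, gain / cost[e]
        if best is None:
            break
        S.add(best); spent += cost[best]; remaining.discard(best)
    singles = [e for e in elements if cost[e] <= budget]
    best_single = max(singles, key=lambda e: value({e}), default=None)
    if best_single is not None and value({best_single}) > value(S):
        return {best_single}
    return S

# toy weighted-coverage instance
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
value = lambda S: len(set().union(*[cover[e] for e in S])) if S else 0
print(modified_greedy(cover.keys(), value, {1: 1.0, 2: 1.0, 3: 1.5}, budget=2.0))
```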
arXiv Detail & Related papers (2020-08-12T15:40:21Z)
- Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness [151.67113334248464]
We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
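For reference, the base $\ell_2$ technique whose extension to other attack models the paper shows to be hard, in a Monte Carlo sketch: it uses the empirical top-class frequency rather than a proper confidence lower bound, so the radius is illustrative, not a rigorous certificate.

```python
import numpy as np
from scipy.stats import norm

def smoothed_predict_certify(f, x, sigma, n=1000, seed=0):
    """Gaussian randomized smoothing: majority class under noise, with an
    l2 radius of sigma * Phi^{-1}(p_hat) when p_hat > 1/2."""
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n):
        c = f(x + rng.normal(0.0, sigma, size=x.shape))  # classify a noisy copy
        votes[c] = votes.get(c, 0) + 1
    top = max(votes, key=votes.get)
    p_hat = votes[top] / n
    radius = sigma * norm.ppf(p_hat) if p_hat > 0.5 else 0.0
    return top, radius

# toy classifier: sign of the first coordinate
f = lambda z: int(z[0] > 0)
print(smoothed_predict_certify(f, np.array([0.3, -0.1]), sigma=0.25))
```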
arXiv Detail & Related papers (2020-02-08T22:02:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.