Private Robust Estimation by Stabilizing Convex Relaxations
- URL: http://arxiv.org/abs/2112.03548v1
- Date: Tue, 7 Dec 2021 07:47:37 GMT
- Title: Private Robust Estimation by Stabilizing Convex Relaxations
- Authors: Pravesh K. Kothari, Pasin Manurangsi, Ameya Velingker
- Abstract summary: We give the first polynomial-time and sample-efficient $(\epsilon, \delta)$-differentially private (DP) algorithm for estimating the mean, covariance, and higher moments in the presence of a constant fraction of adversarial outliers.
- Score: 22.513117502159922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We give the first polynomial time and sample $(\epsilon,
\delta)$-differentially private (DP) algorithm to estimate the mean, covariance
and higher moments in the presence of a constant fraction of adversarial
outliers. Our algorithm succeeds for families of distributions that satisfy two
well-studied properties in prior works on robust estimation: certifiable
subgaussianity of directional moments and certifiable hypercontractivity of
degree 2 polynomials. Our recovery guarantees hold in the "right
affine-invariant norms": Mahalanobis distance for mean, multiplicative spectral
and relative Frobenius distance guarantees for covariance and injective norms
for higher moments. Prior works obtained private robust algorithms for mean
estimation of subgaussian distributions with bounded covariance. For covariance
estimation, ours is the first efficient algorithm (even in the absence of
outliers) that succeeds without any condition-number assumptions.
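For concreteness, these affine-invariant error measures can be formalized as below; the exact normalizations are a standard choice supplied here for illustration, not a quotation from the paper.

```latex
% Hypothetical formalization of the three error norms named above,
% for true parameters (\mu, \Sigma) and estimates (\hat{\mu}, \hat{\Sigma}).
\[
  \underbrace{\bigl\|\Sigma^{-1/2}(\hat{\mu}-\mu)\bigr\|_2}_{\text{Mahalanobis (mean)}}
  \qquad
  \underbrace{\bigl\|\Sigma^{-1/2}\hat{\Sigma}\Sigma^{-1/2}-I\bigr\|_2}_{\text{multiplicative spectral (covariance)}}
  \qquad
  \underbrace{\bigl\|\Sigma^{-1/2}\hat{\Sigma}\Sigma^{-1/2}-I\bigr\|_F}_{\text{relative Frobenius (covariance)}}
\]
```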
Our algorithms arise from a new framework that provides a general blueprint
for modifying convex relaxations for robust estimation to satisfy strong
worst-case stability guarantees in the appropriate parameter norms whenever the
algorithms produce witnesses of correctness in their run. We verify such
guarantees for a modification of standard sum-of-squares (SoS) semidefinite
programming relaxations for robust estimation. Our privacy guarantees are
obtained by combining stability guarantees with a new "estimate dependent"
noise injection mechanism in which noise scales with the eigenvalues of the
estimated covariance. We believe this framework will be useful more generally
in obtaining DP counterparts of robust estimators.
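As a rough illustration of such an estimate-dependent mechanism, the sketch below adds Gaussian noise shaped as $\hat{\Sigma}^{1/2} G \hat{\Sigma}^{1/2}$, so its magnitude along each eigendirection scales with the corresponding eigenvalue of the covariance estimate. The function name and the calibration constant are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def estimate_dependent_noise(sigma_hat: np.ndarray, noise_scale: float,
                             rng: np.random.Generator) -> np.ndarray:
    """Sample symmetric Gaussian noise whose spectrum is shaped by the
    covariance estimate (hypothetical calibration, not the paper's)."""
    d = sigma_hat.shape[0]
    # Symmetric square root of the (PSD) covariance estimate.
    eigvals, eigvecs = np.linalg.eigh(sigma_hat)
    root = eigvecs @ np.diag(np.sqrt(np.clip(eigvals, 0.0, None))) @ eigvecs.T
    # Symmetrized Gaussian matrix G.
    g = rng.standard_normal((d, d))
    g = (g + g.T) / np.sqrt(2.0)
    # Noise of the form Sigma^{1/2} G Sigma^{1/2}: large eigendirections
    # of sigma_hat receive proportionally larger perturbations.
    return noise_scale * (root @ g @ root)

# Usage: perturb a (stable) covariance estimate before releasing it.
rng = np.random.default_rng(0)
sigma_hat = np.array([[2.0, 0.5], [0.5, 1.0]])
private_sigma = sigma_hat + estimate_dependent_noise(sigma_hat, 0.1, rng)
```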
Independently of our work, Ashtiani and Liaw [AL21] also obtained a
polynomial time and sample private robust estimation algorithm for Gaussian
distributions.
Related papers
- An Inexact Halpern Iteration with Application to Distributionally Robust
Optimization [9.529117276663431]
We investigate the inexact variants of the scheme in both deterministic and stochastic convergence settings.
We show that by choosing the inexactness appropriately, the inexact schemes admit an $O(k^{-1})$ convergence rate in terms of the (expected) residue norm.
arXiv Detail & Related papers (2024-02-08T20:12:47Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Privately Estimating a Gaussian: Efficient, Robust and Optimal [6.901744415870126]
We give efficient algorithms for privately estimating a Gaussian distribution in both pure and approximate differential privacy (DP) models.
In the pure DP setting, we give an efficient algorithm that estimates an unknown $d$-dimensional Gaussian distribution up to an arbitrarily tiny total variation error.
For the special case of mean estimation, our algorithm achieves the optimal sample complexity of $\widetilde{O}(d)$, improving on a $\widetilde{O}(d^{1.5})$ bound from prior work.
arXiv Detail & Related papers (2022-12-15T18:27:39Z) - Fully Stochastic Trust-Region Sequential Quadratic Programming for
Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
arXiv Detail & Related papers (2022-11-29T05:52:17Z) - Boosting the Confidence of Generalization for $L_2$-Stable Randomized
Learning Algorithms [41.082982732100696]
We show that a properly designed subbagging process leads to near-tight exponential generalization bounds over both data and algorithm.
We further derive generic results to improve high-probability generalization bounds for convex or non-convex problems with natural decaying learning rates.
arXiv Detail & Related papers (2022-06-08T12:14:01Z) - Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z) - Robust Estimation for Nonparametric Families via Generative Adversarial
Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Differentially private inference via noisy optimization [3.015622397986615]
We show that robust statistics can be used in conjunction with noisy gradient descent or noisy Newton methods to obtain optimal private estimators.
We demonstrate the effectiveness of a bias correction that leads to enhanced small-sample empirical performance in simulations (a minimal noisy-gradient-descent sketch appears after this list).
arXiv Detail & Related papers (2021-03-19T19:55:55Z) - Outlier Robust Mean Estimation with Subgaussian Rates via Stability [46.03021473600576]
We study the problem of outlier-robust high-dimensional mean estimation.
We obtain the first computationally efficient algorithm achieving a subgaussian rate for outlier-robust mean estimation.
arXiv Detail & Related papers (2020-07-30T17:33:03Z) - Distributional Robustness and Regularization in Reinforcement Learning [62.23012916708608]
We introduce a new regularizer for empirical value functions and show that it lower bounds the Wasserstein distributionally robust value function.
It suggests using regularization as a practical tool for dealing with external uncertainty in reinforcement learning.
arXiv Detail & Related papers (2020-03-05T19:56:23Z)
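Relating to the "Differentially private inference via noisy optimization" entry above, here is a minimal noisy-gradient-descent sketch: gradients are clipped and then perturbed with Gaussian noise before each update. The calibration is illustrative, and per-sample clipping plus formal privacy accounting (both needed for an actual DP guarantee) are omitted.

```python
import numpy as np

def noisy_gradient_descent(grad_fn, theta0, steps, lr, clip, noise_std, rng):
    """Clip the gradient, add Gaussian noise, take a step (illustrative)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        g = grad_fn(theta)
        norm = np.linalg.norm(g)
        if norm > clip:                      # bound the per-step sensitivity
            g = g * (clip / norm)
        g = g + noise_std * rng.standard_normal(g.shape)
        theta = theta - lr * g
    return theta

# Usage: mean estimation via the squared loss 0.5 * ||theta - mean(data)||^2.
rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=1.0, size=(500, 2))
grad = lambda theta: theta - data.mean(axis=0)
theta_hat = noisy_gradient_descent(grad, np.zeros(2), steps=200, lr=0.1,
                                   clip=1.0, noise_std=0.05, rng=rng)
```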