From Robustness to Privacy and Back
- URL: http://arxiv.org/abs/2302.01855v1
- Date: Fri, 3 Feb 2023 16:57:57 GMT
- Title: From Robustness to Privacy and Back
- Authors: Hilal Asi, Jonathan Ullman, Lydia Zakynthinou
- Abstract summary: We study the relationship between two desiderata of algorithms in statistical inference and machine learning: differential privacy and robustness to adversarial data corruptions.
Our work gives the first black-box transformation that converts any adversarially robust algorithm into one that satisfies pure differential privacy.
- Score: 15.727276506140878
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the relationship between two desiderata of algorithms in statistical
inference and machine learning: differential privacy and robustness to
adversarial data corruptions. Their conceptual similarity was first noted by
Dwork and Lei (STOC 2009), who observed that private algorithms satisfy
robustness, and gave a general method for converting robust algorithms to
private ones. However, all general methods for transforming robust algorithms
into private ones lead to suboptimal error rates. Our work gives the first
black-box transformation that converts any adversarially robust algorithm into
one that satisfies pure differential privacy. Moreover, we show that for any
low-dimensional estimation task, applying our transformation to an optimal
robust estimator results in an optimal private estimator. Thus, we conclude
that for any low-dimensional task, the optimal error rate for
$\varepsilon$-differentially private estimators is essentially the same as the
optimal error rate for estimators that are robust to adversarial corruption of
$1/\varepsilon$ training samples. We apply our transformation to obtain new
optimal private estimators for several high-dimensional tasks, including
Gaussian (sparse) linear regression and PCA. Finally, we present an extension
of our transformation that leads to approximate differentially private
algorithms whose error does not depend on the range of the output space, which
is impossible under pure differential privacy.
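The abstract does not spell out the transformation itself, but the general robust-to-private recipe it alludes to can be illustrated with the inverse-sensitivity (exponential) mechanism. The sketch below is a minimal, hypothetical instance for the one-dimensional median, a canonically robust estimator; the function name, candidate grid, and rank-based scoring shortcut are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def private_median(data, candidates, epsilon, rng=None):
    """Pure eps-DP median via the exponential mechanism with an
    inverse-sensitivity score: each candidate y is scored by how many
    samples an adversary would need to corrupt before the (robust)
    sample median could equal y."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.sort(np.asarray(data, dtype=float))
    m = len(x) // 2  # index of the sample median
    ranks = np.searchsorted(x, candidates)  # data points below each candidate
    # Moving the median to a candidate of rank r requires changing about
    # |r - m| samples, so |r - m| serves as the corruption-count score.
    scores = np.abs(ranks - m)
    # The score changes by at most 1 when one sample changes, so sampling
    # with weight exp(-eps * score / 2) is eps-differentially private.
    logits = -0.5 * epsilon * scores
    weights = np.exp(logits - logits.max())
    return rng.choice(candidates, p=weights / weights.sum())
```

An output that would require corrupting $k$ samples is chosen with probability proportional to $e^{-\varepsilon k/2}$, which is the quantitative face of the abstract's claim that $\varepsilon$-DP error essentially matches the error achievable under $1/\varepsilon$ adversarial corruptions.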
Related papers
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - Private Networked Federated Learning for Nonsmooth Objectives [7.278228169713637]
This paper develops a networked federated learning algorithm to solve nonsmooth objective functions.
We use the zero-concentrated differential privacy notion (zCDP) to guarantee the confidentiality of the participants.
We provide complete theoretical proofs of the privacy guarantees and of the algorithm's convergence to the exact solution.
arXiv Detail & Related papers (2023-06-24T16:13:28Z) - Differentially Private Domain Adaptation with Theoretical Guarantees [46.37771025567305]
In many applications, the labeled data at the learner's disposal is subject to privacy constraints and is relatively limited.
This is the modern problem of supervised domain adaptation from a public source to a private target domain.
We make use of a general learner to benefit from favorable theoretical learning guarantees.
arXiv Detail & Related papers (2023-06-15T04:03:06Z) - Robustness Implies Privacy in Statistical Estimation [16.061651295129302]
We study the relationship between adversarial robustness and differential privacy in high-dimensional statistics.
We give the first black-box reduction from privacy to robustness which can produce private estimators with optimal tradeoffs.
Our algorithms are also robust to a nearly optimal fraction of adversarially-corrupted samples.
arXiv Detail & Related papers (2022-12-09T18:07:30Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that achieve good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
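The summary above gives no algorithmic detail, but the object under study is standard DP-SGD. As context, here is a minimal sketch of one DP-SGD step (per-example clipping plus Gaussian noise); the function name and arguments are hypothetical, and calibrating `noise_mult` to a target $(\varepsilon, \delta)$ with a privacy accountant is omitted.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_mult, rng):
    """One DP-SGD step: clip each per-example gradient to L2 norm
    clip_norm, average, then add Gaussian noise scaled to the clip.
    A small noise_mult corresponds to the low-noise regime."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(per_example_grads)
    return params - lr * (mean_grad + rng.normal(0.0, sigma, mean_grad.shape))
```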
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z)
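PrivUnit's cap-based sampling is too involved for a short sketch, so the snippet below shows only the local-DP protocol shape it optimizes, with the much simpler (and suboptimal) Laplace randomizer as a stand-in; this is explicitly not PrivUnit, and the names are hypothetical.

```python
import numpy as np

def laplace_local_report(v, epsilon, rng):
    """Baseline eps-LDP randomizer for a user vector with ||v||_1 <= 1.
    Any two such vectors differ by at most 2 in L1 norm, so Laplace
    noise with scale 2/eps makes the report eps-locally private."""
    return v + rng.laplace(0.0, 2.0 / epsilon, size=v.shape)

def server_mean(reports):
    """The server averages the unbiased noisy reports; PrivUnit plays
    the role of the local randomizer but with optimal variance."""
    return np.mean(reports, axis=0)
```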
arXiv Detail & Related papers (2022-05-05T06:43:46Z) - No-Regret Algorithms for Private Gaussian Process Bandit Optimization [13.660643701487002]
We consider the ubiquitous problem of Gaussian process (GP) bandit optimization from the lens of privacy-preserving statistics.
We propose a solution for differentially private GP bandit optimization that combines a uniform kernel approximator with random perturbations.
Our algorithms maintain differential privacy throughout the optimization procedure and critically do not rely explicitly on the sample path for prediction.
arXiv Detail & Related papers (2021-02-24T18:52:24Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
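The paper's adaptive-query algorithm is not reproduced in its abstract; the toy example below merely illustrates the user-level (versus item-level) sensitivity accounting that such results rest on, assuming each user's samples lie in a known interval [lo, hi]. All names are hypothetical.

```python
import numpy as np

def user_level_private_mean(user_samples, epsilon, lo, hi, rng):
    """User-level eps-DP mean: replacing ONE user's entire data moves
    the average of per-user means by at most (hi - lo) / n_users,
    whereas item-level DP would only need to hide a single sample."""
    user_means = np.array([np.clip(np.mean(s), lo, hi) for s in user_samples])
    sensitivity = (hi - lo) / len(user_means)  # user-level sensitivity
    return user_means.mean() + rng.laplace(0.0, sensitivity / epsilon)
```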
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and
Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages of our algorithms over standard gradient methods on two popular deep learning tasks.
arXiv Detail & Related papers (2020-06-24T06:01:24Z)