Output Perturbation for Differentially Private Convex Optimization with
Improved Population Loss Bounds, Runtimes and Applications to Private
Adversarial Training
- URL: http://arxiv.org/abs/2102.04704v1
- Date: Tue, 9 Feb 2021 08:47:06 GMT
- Title: Output Perturbation for Differentially Private Convex Optimization with
Improved Population Loss Bounds, Runtimes and Applications to Private
Adversarial Training
- Authors: Andrew Lowy and Meisam Razaviyayn
- Abstract summary: Finding efficient, easily implementable differentially private (DP) algorithms that offer strong excess risk bounds is an important problem in modern machine learning.
We provide the tightest known $(\epsilon, 0)$-DP population loss bounds and fastest runtimes in the presence of smoothness and strong convexity.
We apply our theory to two learning frameworks: tilted ERM and adversarial learning.
- Score: 12.386462516398469
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding efficient, easily implementable differentially private (DP)
algorithms that offer strong excess risk bounds is an important problem in
modern machine learning. To date, most work has focused on private empirical
risk minimization (ERM) or private population loss minimization. However, there
are often other objectives--such as fairness, adversarial robustness, or
sensitivity to outliers--besides average performance that are not captured in
the classical ERM setup. To this end, we study a completely general family of
convex, Lipschitz loss functions and establish the first known DP excess risk
and runtime bounds for optimizing this broad class. We provide similar bounds
under additional assumptions of smoothness and/or strong convexity. We also
address private stochastic convex optimization (SCO). While $(\epsilon,
\delta)$-DP ($\delta > 0$) has been the focus of much recent work in private
SCO, proving tight population loss bounds and runtime bounds for $(\epsilon,
0)$-DP remains a challenging open problem. We provide the tightest known
$(\epsilon, 0)$-DP population loss bounds and fastest runtimes in the presence
(or absence) of smoothness and strong convexity. Our methods extend to
the $\delta > 0$ setting, where we offer the unique benefit of ensuring
differential privacy for arbitrary $\epsilon > 0$ by incorporating a new form
of Gaussian noise. Finally, we apply our theory to two learning frameworks:
tilted ERM and adversarial learning. In particular, our theory quantifies
tradeoffs between adversarial robustness, privacy, and runtime. Our results are
achieved using perhaps the simplest DP algorithm: output perturbation. Although
this method is not novel conceptually, our novel implementation scheme and
analysis show that the power of this method to achieve strong privacy, utility,
and runtime guarantees has not been fully appreciated in prior works.
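To make the output perturbation recipe concrete: solve the (regularized) ERM problem non-privately, then release the minimizer plus noise calibrated to its sensitivity. The following is a minimal Python sketch of the classical $(\epsilon, 0)$-DP calibration for an $L$-Lipschitz loss with a $\lambda$-strongly convex objective, where the minimizer has $\ell_2$-sensitivity $\Delta = 2L/(n\lambda)$ and the noise has density proportional to $\exp(-\epsilon \|b\|_2 / \Delta)$; the function and its parameters are illustrative of the general method, not of this paper's specific implementation scheme or noise distributions.

import numpy as np

def perturb_output(theta_hat, n, lam, L, eps, rng=None):
    # (eps, 0)-DP output perturbation for the minimizer theta_hat of an
    # L-Lipschitz, lam-strongly convex ERM objective over n samples.
    # Sketch only: uses the classical sensitivity bound Delta = 2L/(n*lam);
    # the paper's own calibration may differ.
    rng = rng or np.random.default_rng()
    d = theta_hat.shape[0]
    sens = 2.0 * L / (n * lam)            # l2-sensitivity of the ERM minimizer
    # Sample noise b with density proportional to exp(-eps * ||b||_2 / sens):
    # a uniformly random direction scaled by a Gamma(d, sens/eps) radius.
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=d, scale=sens / eps)
    return theta_hat + radius * direction

# Example: theta_priv = perturb_output(theta_hat, n=10_000, lam=0.1, L=1.0, eps=1.0)

For $\delta > 0$, a conventional alternative is the Gaussian mechanism, which draws $b \sim \mathcal{N}(0, \sigma^2 I_d)$ with $\sigma$ calibrated to $\Delta$, $\epsilon$, and $\delta$; the "new form of Gaussian noise" mentioned in the abstract is a variant designed to also cover arbitrary $\epsilon > 0$.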
Related papers
- Efficient Sparse Least Absolute Deviation Regression with Differential
Privacy [10.414082115202003]
We develop a fast privacy-preserving learning solution for a sparse robust regression problem.
Our algorithm achieves fast estimation by reformulating the sparse LAD problem as a penalized least squares estimation problem.
We show that our algorithm can achieve better privacy and statistical accuracy trade-off compared with the state-of-the-art privacy-preserving regression algorithms.
arXiv Detail & Related papers (2024-01-02T17:13:34Z) - A Generalized Shuffle Framework for Privacy Amplification: Strengthening Privacy Guarantees and Enhancing Utility [4.7712438974100255]
We show how to shuffle the $(\epsilon_i,\delta_i)$-PLDP setting with personalized privacy parameters.
We prove that the shuffled $(\epsilon_i,\delta_i)$-PLDP process approximately preserves $\mu$-Gaussian Differential Privacy with $\mu = \sqrt{\frac{2}{\sum_{i=1}^{n} \frac{1-\delta_i}{1+e^{\epsilon_i}} - \max_i \frac{1-\delta_i}{1+e^{\epsilon_i}}}}$.
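As a quick numeric check of the reconstructed bound (with hypothetical privacy parameters; the values below are not from the paper):

import numpy as np

eps = np.array([1.0, 2.0, 0.5])           # hypothetical per-user epsilon_i
delta = np.array([1e-6, 1e-6, 1e-6])      # hypothetical per-user delta_i
w = (1.0 - delta) / (1.0 + np.exp(eps))   # terms (1 - delta_i) / (1 + e^{epsilon_i})
mu = np.sqrt(2.0 / (w.sum() - w.max()))   # mu-GDP parameter from the formula above
print(mu)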
arXiv Detail & Related papers (2023-12-22T02:31:46Z) - Private Online Prediction from Experts: Separations and Faster Rates [74.52487417350221]
Online prediction from experts is a fundamental problem in machine learning and several works have studied this problem under privacy constraints.
We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries.
arXiv Detail & Related papers (2022-10-24T18:40:19Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Normalized/Clipped SGD with Perturbation for Differentially Private
Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Optimal Rates of (Locally) Differentially Private Heavy-tailed
Multi-Armed Bandits [11.419534203345187]
We study the problem of multi-armed bandits (MAB) in the (local) differential privacy (DP/LDP) model.
We propose an algorithm which can be seen as a locally private and robust version of the Successive Elimination (SE) algorithm.
arXiv Detail & Related papers (2021-06-04T16:17:21Z) - Non-Euclidean Differentially Private Stochastic Convex Optimization [15.302167005107135]
We show that noisy stochastic gradient descent (SGD) algorithms attain the optimal excess risk in low-dimensional regimes.
Our work draws upon concepts from the geometry of normed spaces, such as the notions of regularity, uniform convexity, and uniform smoothness.
arXiv Detail & Related papers (2021-03-01T19:48:44Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy
Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
arXiv Detail & Related papers (2020-12-23T17:07:26Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.