Output Perturbation for Differentially Private Convex Optimization with
Improved Population Loss Bounds, Runtimes and Applications to Private
Adversarial Training
- URL: http://arxiv.org/abs/2102.04704v1
- Date: Tue, 9 Feb 2021 08:47:06 GMT
- Authors: Andrew Lowy and Meisam Razaviyayn
- Abstract summary: Finding efficient, easily implementable differentially private (DP) algorithms that offer strong excess risk bounds is an important problem in modern machine learning.
We provide the tightest known $(\epsilon, 0)$-DP population loss bounds and fastest runtimes under the presence of smoothness and strong convexity.
We apply our theory to two learning frameworks: tilted ERM and adversarial learning frameworks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding efficient, easily implementable differentially private (DP)
algorithms that offer strong excess risk bounds is an important problem in
modern machine learning. To date, most work has focused on private empirical
risk minimization (ERM) or private population loss minimization. However, there
are often other objectives--such as fairness, adversarial robustness, or
sensitivity to outliers--besides average performance that are not captured in
the classical ERM setup. To this end, we study a completely general family of
convex, Lipschitz loss functions and establish the first known DP excess risk
and runtime bounds for optimizing this broad class. We provide similar bounds
under additional assumptions of smoothness and/or strong convexity. We also
address private stochastic convex optimization (SCO). While $(\epsilon,
\delta)$-DP ($\delta > 0$) has been the focus of much recent work in private
SCO, proving tight population loss bounds and runtime bounds for $(\epsilon,
0)$-DP remains a challenging open problem. We provide the tightest known
$(\epsilon, 0)$-DP population loss bounds and fastest runtimes under the
presence of (or lack of) smoothness and strong convexity. Our methods extend to
the $\delta > 0$ setting, where we offer the unique benefit of ensuring
differential privacy for arbitrary $\epsilon > 0$ by incorporating a new form
of Gaussian noise. Finally, we apply our theory to two learning frameworks:
tilted ERM and adversarial learning. In particular, our theory quantifies
tradeoffs between adversarial robustness, privacy, and runtime. Our results are
achieved using perhaps the simplest DP algorithm: output perturbation. Although
this method is not novel conceptually, our novel implementation scheme and
analysis show that the power of this method to achieve strong privacy, utility,
and runtime guarantees has not been fully appreciated in prior works.
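To make the core idea concrete, here is a minimal sketch of output perturbation for $(\epsilon, 0)$-DP: train a model non-privately, then release it after adding noise calibrated to the $\ell_2$ sensitivity of the trained parameters. This is the classic high-dimensional Laplace mechanism (Gamma-distributed magnitude, uniform direction), not the authors' exact implementation scheme or noise calibration; the function and parameter names are illustrative.

```python
import numpy as np

def output_perturbation(theta_hat, l2_sensitivity, epsilon, rng=None):
    """Release theta_hat under (epsilon, 0)-DP via output perturbation.

    Adds noise with direction uniform on the unit sphere and magnitude
    drawn from Gamma(d, l2_sensitivity / epsilon), the standard
    high-dimensional Laplace mechanism for l2 sensitivity.
    """
    rng = rng if rng is not None else np.random.default_rng()
    d = theta_hat.shape[0]
    # Uniform direction: normalize a standard Gaussian vector.
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)
    # Gamma-distributed magnitude calibrated to sensitivity / epsilon.
    magnitude = rng.gamma(shape=d, scale=l2_sensitivity / epsilon)
    return theta_hat + magnitude * direction
```

For example, for ERM with an $L$-Lipschitz, $\mu$-strongly convex loss on $n$ samples, the exact minimizer has $\ell_2$ sensitivity at most $2L/(n\mu)$, so one would call `output_perturbation(theta_hat, 2 * L / (n * mu), epsilon)`.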
Related papers
- Smoothed Normalization for Efficient Distributed Private Optimization [54.197255548244705]
Federated learning enables training machine learning models while preserving the privacy of participants.
However, no differentially private distributed method exists for these training problems.
We introduce a new distributed algorithm, $\alpha$-NormEC, with provable convergence guarantees.
arXiv Detail & Related papers (2025-02-19T07:10:32Z)
- Linear-Time User-Level DP-SCO via Robust Statistics [55.350093142673316]
User-level differentially private convex optimization (DP-SCO) has garnered significant attention due to the importance of safeguarding user privacy in machine learning applications.
Current methods, such as those based on differentially private gradient descent (DP-SGD), often struggle with high noise accumulation and suboptimal utility.
We introduce a novel linear-time algorithm that leverages robust statistics, specifically the median and trimmed mean, to overcome these challenges.
arXiv Detail & Related papers (2025-02-13T02:05:45Z)
- The Cost of Shuffling in Private Gradient Based Optimization [40.31928071333575]
We show that data shuffling results in worse empirical excess risk for DP-ShuffleG compared to DP-SGD.
We propose Interleaved-ShuffleG, a hybrid approach that integrates public data samples into private optimization.
arXiv Detail & Related papers (2025-02-05T22:30:00Z)
- Optimized Tradeoffs for Private Prediction with Majority Ensembling [59.99331405291337]
We introduce the Data-dependent Randomized Response Majority (DaRRM) algorithm.
DaRRM is parameterized by a data-dependent noise function $\gamma$, and enables efficient utility optimization over the class of all private algorithms.
We show that DaRRM provably enjoys a privacy gain of a factor of 2 over common baselines, with fixed utility.
arXiv Detail & Related papers (2024-11-27T00:48:48Z)
- Private Online Prediction from Experts: Separations and Faster Rates [74.52487417350221]
Online prediction from experts is a fundamental problem in machine learning and several works have studied this problem under privacy constraints.
We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries.
arXiv Detail & Related papers (2022-10-24T18:40:19Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Non-Euclidean Differentially Private Stochastic Convex Optimization [15.302167005107135]
We show that noisy stochastic gradient descent (SGD) algorithms attain the optimal excess risk in low-dimensional regimes.
Our work draws upon concepts from the geometry of normed spaces, such as the notions of regularity, uniform convexity, and uniform smoothness.
arXiv Detail & Related papers (2021-03-01T19:48:44Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)