Private Online Prediction from Experts: Separations and Faster Rates
- URL: http://arxiv.org/abs/2210.13537v3
- Date: Thu, 29 Jun 2023 19:05:55 GMT
- Title: Private Online Prediction from Experts: Separations and Faster Rates
- Authors: Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
- Abstract summary: Online prediction from experts is a fundamental problem in machine learning and several works have studied this problem under privacy constraints.
We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries.
- Score: 74.52487417350221
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online prediction from experts is a fundamental problem in machine learning
and several works have studied this problem under privacy constraints. We
propose and analyze new algorithms for this problem that improve over the
regret bounds of the best existing algorithms for non-adaptive adversaries. For
approximate differential privacy, our algorithms achieve regret bounds of
$\tilde{O}(\sqrt{T \log d} + \log d/\varepsilon)$ for the stochastic setting
and $\tilde{O}(\sqrt{T \log d} + T^{1/3} \log d/\varepsilon)$ for oblivious
adversaries (where $d$ is the number of experts). For pure DP, our algorithms
are the first to obtain sub-linear regret for oblivious adversaries in the
high-dimensional regime $d \ge T$. Moreover, we prove new lower bounds for
adaptive adversaries. Our results imply that unlike the non-private setting,
there is a strong separation between the optimal regret for adaptive and
non-adaptive adversaries for this problem. Our lower bounds also show a
separation between pure and approximate differential privacy for adaptive
adversaries where the latter is necessary to achieve the non-private
$O(\sqrt{T})$ regret.
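As a concrete illustration of the experts setting, the sketch below implements a simple "follow the noisy leader" learner: at each round it adds fresh Laplace noise to the cumulative losses before picking the leading expert. This is an illustrative baseline only (names and parameters are hypothetical), not the algorithms analyzed in the paper, and it omits any formal privacy accounting.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_follow_the_leader(losses, eps):
    """Illustrative private-experts sketch: at each round, pick the expert
    minimizing the cumulative loss perturbed with Laplace noise.

    losses: (T, d) array of per-round expert losses in [0, 1].
    Returns the regret against the best fixed expert in hindsight.
    """
    T, d = losses.shape
    cum = np.zeros(d)      # cumulative loss of each expert
    total = 0.0            # loss incurred by the learner
    for t in range(T):
        noisy = cum + rng.laplace(scale=1.0 / eps, size=d)
        choice = int(np.argmin(noisy))  # follow the (noisy) leader
        total += losses[t, choice]
        cum += losses[t]
    return total - cum.min()
```

Larger `eps` means less noise and hence better regret but weaker privacy; the bounds quoted in the abstract quantify exactly this trade-off for much more refined algorithms.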
Related papers
- Federated Online Prediction from Experts with Differential Privacy: Separations and Regret Speed-ups [6.925352969721467]
We study the problems of differentially private online prediction from experts against both stochastic adversaries and oblivious adversaries.
With stochastic adversaries, we propose a Fed-DP-OPE-Stoch algorithm that achieves a $\sqrt{m}$-fold speed-up of the per-client regret.
With oblivious adversaries, we establish non-trivial lower bounds indicating that collaboration among clients does not lead to regret speed-ups.
arXiv Detail & Related papers (2024-09-27T18:43:24Z) - Differentially Private Multiway and $k$-Cut [5.893651469750359]
We introduce edge-differentially private algorithms that achieve nearly optimal performance for minimum $k$-cut and multiway cut problems.
For the minimum $k$-cut problem, our algorithms leverage a known bound on the number of approximate $k$-cuts, resulting in a private algorithm with optimal additive error $O(k \log n)$ for a fixed privacy parameter.
arXiv Detail & Related papers (2024-07-09T14:46:33Z) - Private Online Learning via Lazy Algorithms [52.772510023828744]
We study the problem of private online learning, specifically, online prediction from experts (OPE) and online convex optimization (OCO).
We propose a new transformation that transforms lazy online learning algorithms into private algorithms.
arXiv Detail & Related papers (2024-06-05T20:43:05Z) - DP-Dueling: Learning from Preference Feedback without Compromising User Privacy [32.58099924135157]
We give the first differentially private dueling bandit algorithm for active learning with user preferences.
Our algorithms are computationally efficient with near-optimal performance.
We extend our results to any general decision space in $d$-dimensions with potentially infinite arms.
arXiv Detail & Related papers (2024-03-22T09:02:12Z) - Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime [74.52487417350221]
We consider online learning problems in the realizable setting, where there is a zero-loss solution.
We propose new Differentially Private (DP) algorithms that obtain near-optimal regret bounds.
arXiv Detail & Related papers (2023-02-27T21:19:24Z) - Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
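The core DP-SGD step mentioned above (per-example gradient clipping followed by calibrated Gaussian noise) can be sketched as follows. Parameter names are illustrative, and this omits the privacy accounting that a real implementation needs:

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.0):
    """One DP-SGD update: clip each per-example gradient to norm `clip`,
    average, then add Gaussian noise scaled to the clipping bound."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip / max(norm, 1e-12)))
    batch = len(clipped)
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(scale=noise_mult * clip / batch, size=w.shape)
    return w - lr * (mean + noise)
```

DP-NSGD differs mainly in normalizing (rather than clipping) each per-example gradient, which is what makes its tuning behavior different in the comparison above.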
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Regret Lower Bound and Optimal Algorithm for High-Dimensional Contextual Linear Bandit [10.604939762790517]
First, we prove a minimax lower bound, $\mathcal{O}\big((\log d)^{\frac{\alpha+1}{2}} T^{\frac{1-\alpha}{2}} + \log T\big)$, for the cumulative regret.
Second, we propose a simple and computationally efficient algorithm inspired by the general Upper Confidence Bound (UCB) strategy.
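The "general Upper Confidence Bound (UCB) strategy" mentioned above can be sketched, under the standard multi-armed-bandit simplification (no contexts, no privacy), as the classic UCB1 rule; the function name and reward interface here are hypothetical:

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Classic UCB1: play each arm once, then repeatedly pick the arm
    maximizing empirical mean + sqrt(2 ln t / n_pulls)."""
    counts = [0] * n_arms      # times each arm was pulled
    sums = [0.0] * n_arms      # total reward from each arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1        # initialization: try every arm once
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
        total_reward += r
    return total_reward, counts
```

The exploration bonus shrinks as an arm is pulled more often, so play concentrates on empirically good arms while still occasionally revisiting uncertain ones.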
arXiv Detail & Related papers (2021-09-23T19:35:38Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon, \delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Output Perturbation for Differentially Private Convex Optimization with Improved Population Loss Bounds, Runtimes and Applications to Private Adversarial Training [12.386462516398469]
Finding efficient, easily implementable differentially private (DP) algorithms that offer strong excess risk bounds is an important problem in modern machine learning.
We provide the tightest known $(\epsilon, 0)$-DP population loss bounds and fastest runtimes under the presence of smoothness and strong convexity.
We apply our theory to two learning frameworks: tilted ERM and adversarial learning frameworks.
arXiv Detail & Related papers (2021-02-09T08:47:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.