Private Online Learning via Lazy Algorithms
- URL: http://arxiv.org/abs/2406.03620v1
- Date: Wed, 5 Jun 2024 20:43:05 GMT
- Title: Private Online Learning via Lazy Algorithms
- Authors: Hilal Asi, Tomer Koren, Daogao Liu, Kunal Talwar
- Abstract summary: We study the problem of private online learning, specifically, online prediction from experts (OPE) and online convex optimization (OCO).
We propose a new transformation that converts lazy online learning algorithms into private algorithms.
- Score: 52.772510023828744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of private online learning, specifically, online prediction from experts (OPE) and online convex optimization (OCO). We propose a new transformation that converts lazy online learning algorithms into private algorithms. We apply our transformation to differentially private OPE and OCO using existing lazy algorithms for these problems. Our final algorithms obtain regret bounds that significantly improve on prior rates in the high privacy regime $\varepsilon \ll 1$, achieving $\sqrt{T \log d} + T^{1/3} \log(d)/\varepsilon^{2/3}$ for DP-OPE and $\sqrt{T} + T^{1/3} \sqrt{d}/\varepsilon^{2/3}$ for DP-OCO. We also complement our results with a lower bound for DP-OPE, showing that these rates are optimal for a natural family of low-switching private algorithms.
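To make the stated rates concrete, the short sketch below (added for illustration and not code from the paper; the helper names and the example values of $T$, $d$, and $\varepsilon$ are assumptions, and constants and lower-order log factors are dropped) simply evaluates the two regret bounds above, showing that the $\varepsilon^{-2/3}$ privacy term only takes over in the high privacy regime $\varepsilon \ll 1$.

```python
import math


def dp_ope_regret(T: float, d: float, eps: float) -> float:
    """DP-OPE regret bound from the abstract (constants and lower-order
    log factors dropped): sqrt(T log d) + T^(1/3) * log(d) / eps^(2/3)."""
    return math.sqrt(T * math.log(d)) + T ** (1 / 3) * math.log(d) / eps ** (2 / 3)


def dp_oco_regret(T: float, d: float, eps: float) -> float:
    """DP-OCO regret bound from the abstract:
    sqrt(T) + T^(1/3) * sqrt(d) / eps^(2/3)."""
    return math.sqrt(T) + T ** (1 / 3) * math.sqrt(d) / eps ** (2 / 3)


if __name__ == "__main__":
    T, d = 10**6, 100  # illustrative horizon and dimension
    for eps in (1.0, 0.1, 0.01):
        print(f"eps={eps:5.2f}  DP-OPE ~ {dp_ope_regret(T, d, eps):12.1f}  "
              f"DP-OCO ~ {dp_oco_regret(T, d, eps):12.1f}")
    # Balancing T^(1/3)/eps^(2/3) against sqrt(T) shows that the privacy term
    # dominates only when eps falls below roughly T^(-1/4) (ignoring
    # dimension-dependent factors), i.e. in the high privacy regime eps << 1.
```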
Related papers
- On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective [8.104151304193216]
We provide lower bounds for Differentially Private (DP) Online Learning algorithms.
Our work is the first result toward settling lower bounds for DP online learning.
arXiv Detail & Related papers (2024-02-26T17:49:37Z)
- Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime [74.52487417350221]
We consider online learning problems in the realizable setting, where there is a zero-loss solution.
We propose new Differentially Private (DP) algorithms that obtain near-optimal regret bounds.
arXiv Detail & Related papers (2023-02-27T21:19:24Z)
- Private Online Prediction from Experts: Separations and Faster Rates [74.52487417350221]
Online prediction from experts is a fundamental problem in machine learning and several works have studied this problem under privacy constraints.
We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries.
arXiv Detail & Related papers (2022-10-24T18:40:19Z)
- Differentially Private Online-to-Batch for Smooth Losses [38.23708749658059]
We develop a new reduction that converts any online convex optimization algorithm suffering $O(\sqrt{T})$ regret into an $\epsilon$-differentially private convex optimization algorithm with the optimal convergence rate $\tilde{O}(1/\sqrt{T} + \sqrt{d}/(\epsilon T))$ on smooth losses in linear time (a sketch of the classical, non-private online-to-batch conversion appears after this list).
arXiv Detail & Related papers (2022-10-12T21:13:31Z)
- Locally Differentially Private Reinforcement Learning for Linear Mixture Markov Decision Processes [78.27542864367821]
Reinforcement learning (RL) algorithms can be used to provide personalized services, which rely on users' private and sensitive data.
To protect the users' privacy, privacy-preserving RL algorithms are in demand.
We propose a novel $(\varepsilon, \delta)$-LDP algorithm for learning a class of Markov decision processes (MDPs) dubbed linear mixture MDPs.
arXiv Detail & Related papers (2021-10-19T17:44:09Z)
- Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
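As referenced in the Differentially Private Online-to-Batch entry above, the sketch below illustrates the classical, non-private online-to-batch conversion that such reductions extend: run an online convex optimization algorithm (here, plain projected online gradient descent) on losses induced by i.i.d. samples and return the average iterate. This is a reader-added simplification under assumed names and a toy least-squares loss; it omits the privacy mechanism and the linear-time machinery of that paper.

```python
import numpy as np


def online_to_batch(loss_grad, sample_stream, dim, T, lr=0.1, radius=1.0):
    """Classical (non-private) online-to-batch conversion: run projected
    online gradient descent on losses induced by i.i.d. samples and return
    the average iterate, whose excess population loss is bounded by
    (online regret) / T in expectation."""
    x = np.zeros(dim)
    avg = np.zeros(dim)
    for t in range(1, T + 1):
        z = next(sample_stream)            # fresh i.i.d. sample z_t
        g = loss_grad(x, z)                # gradient of loss(., z_t) at x_t
        x = x - (lr / np.sqrt(t)) * g      # OGD step with decaying step size
        norm = np.linalg.norm(x)
        if norm > radius:                  # project back onto an l2 ball
            x = x * (radius / norm)
        avg += (x - avg) / t               # running average of the iterates
    return avg


# Toy usage: least-squares losses loss(x, (a, b)) = 0.5 * (a @ x - b) ** 2.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_star = np.array([0.5, -0.3, 0.2])

    def stream():
        while True:
            a = rng.normal(size=3)
            yield a, a @ x_star + 0.01 * rng.normal()

    grad = lambda x, z: (z[0] @ x - z[1]) * z[0]
    print("estimate:", np.round(online_to_batch(grad, stream(), dim=3, T=5000), 3))
```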