On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective
- URL: http://arxiv.org/abs/2402.16778v3
- Date: Sun, 20 Oct 2024 17:58:11 GMT
- Title: On the Growth of Mistakes in Differentially Private Online Learning: A Lower Bound Perspective
- Authors: Daniil Dmitriev, Kristóf Szabó, Amartya Sanyal
- Abstract summary: We provide lower bounds for Differentially Private (DP) Online Learning algorithms.
Our work is the first result towards settling lower bounds for DP-Online learning.
- Score: 8.104151304193216
- Abstract: In this paper, we provide lower bounds for Differentially Private (DP) Online Learning algorithms. Our result shows that, for a broad class of $(\varepsilon,\delta)$-DP online algorithms, for a number of rounds $T$ such that $\log T\leq O(1 / \delta)$, the expected number of mistakes incurred by the algorithm grows as $\Omega(\log \frac{T}{\delta})$. This matches the upper bound obtained by Golowich and Livni (2021) and is in contrast to non-private online learning, where the number of mistakes is independent of $T$. To the best of our knowledge, our work is the first result towards settling lower bounds for DP-Online learning and partially addresses the open question in Sanyal and Ramponi (2022).
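The regime and bound stated in the abstract can be written as a single implication (a restatement of the abstract, not a new result):

```latex
% Main lower bound, restated: for the considered class of
% (epsilon, delta)-DP online algorithms,
\[
  \log T \;\leq\; O\!\left(\tfrac{1}{\delta}\right)
  \quad\Longrightarrow\quad
  \mathbb{E}\bigl[\#\text{mistakes}\bigr]
  \;=\; \Omega\!\left(\log \tfrac{T}{\delta}\right),
\]
% matching the upper bound of Golowich and Livni (2021), whereas
% non-private online learning attains a mistake bound independent of T.
```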
Related papers
- Private Online Learning via Lazy Algorithms [52.772510023828744]
We study the problem of private online learning, specifically online prediction from experts (OPE) and online convex optimization (OCO).
We propose a new transformation that converts lazy online learning algorithms into private algorithms.
arXiv Detail & Related papers (2024-06-05T20:43:05Z) - Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries [31.339322747660635]
We show a new lower bound of $\Omega\left(\min\{n, \log T\}\right)$, which is tight w.r.t. the dependence on $T$.
We also show that our lower bound extends to the "online thresholds problem", where the goal is to privately answer many "quantile queries".
arXiv Detail & Related papers (2024-02-28T16:45:54Z) - A Trichotomy for Transductive Online Learning [32.03948071550447]
We present new upper and lower bounds on the number of learner mistakes in the "transductive" online learning setting of Ben-David, Kushilevitz and Mansour (1997).
This setting is similar to standard online learning, except that the adversary fixes a sequence of instances to be labeled at the start of the game, and this sequence is known to the learner.
arXiv Detail & Related papers (2023-11-10T23:27:23Z) - Simple online learning with consistent oracle [55.43220407902113]
We consider online learning in the model where a learning algorithm can access the class only via a consistent oracle -- an oracle that, at any moment, can return a function from the class that agrees with all examples seen so far.
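As a toy illustration of the oracle model (not code from the paper -- the hypothesis class, stream, and all names below are invented for this sketch), a consistent oracle over a finite hypothesis class simply scans the class for any hypothesis agreeing with the history:

```python
# Minimal sketch: online prediction via a consistent oracle.
# The oracle returns any hypothesis in the class consistent with
# all labelled examples seen so far; the learner predicts with it.

def make_consistent_oracle(hypothesis_class):
    """Build an oracle mapping a history of (x, y) pairs to a consistent hypothesis."""
    def oracle(history):
        for h in hypothesis_class:
            if all(h(x) == y for x, y in history):
                return h
        return None  # realizability violated: no hypothesis fits the data
    return oracle

def run_online_learner(oracle, stream):
    """Predict each label with the oracle's hypothesis; return the mistake count."""
    history, mistakes = [], 0
    for x, y in stream:
        h = oracle(history)
        if h(x) != y:
            mistakes += 1
        history.append((x, y))
    return mistakes

# Toy example: threshold classifiers h_t(x) = 1[x >= t] on a small domain.
thresholds = [lambda x, t=t: int(x >= t) for t in range(6)]
oracle = make_consistent_oracle(thresholds)
stream = [(0, 0), (4, 1), (2, 1), (1, 0)]  # realizable with threshold t = 2
print(run_online_learner(oracle, stream))  # → 2
```

This naive "predict with any consistent hypothesis" learner is the baseline the oracle model starts from; the paper's contribution concerns what stronger guarantees are achievable through such an oracle.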
arXiv Detail & Related papers (2023-08-15T21:50:40Z) - Near-Optimal Differentially Private Reinforcement Learning [16.871660060209674]
We study online exploration in reinforcement learning with differential privacy constraints.
Existing work established that no-regret learning is possible under joint differential privacy (JDP) and local differential privacy (LDP).
We design an $\epsilon$-JDP algorithm with a regret of $\widetilde{O}(\sqrt{SAH^2T} + S^2AH^3/\epsilon)$, which matches the information-theoretic lower bound of non-private learning.
arXiv Detail & Related papers (2022-12-09T06:03:02Z) - Private Online Prediction from Experts: Separations and Faster Rates [74.52487417350221]
Online prediction from experts is a fundamental problem in machine learning and several works have studied this problem under privacy constraints.
We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries.
arXiv Detail & Related papers (2022-10-24T18:40:19Z) - Locally Differentially Private Reinforcement Learning for Linear Mixture Markov Decision Processes [78.27542864367821]
Reinforcement learning (RL) algorithms can be used to provide personalized services, which rely on users' private and sensitive data.
To protect the users' privacy, privacy-preserving RL algorithms are in demand.
We propose a novel $(\varepsilon, \delta)$-LDP algorithm for learning a class of Markov decision processes (MDPs) dubbed linear mixture MDPs.
arXiv Detail & Related papers (2021-10-19T17:44:09Z) - Littlestone Classes are Privately Online Learnable [28.04975353867202]
We consider the problem of online classification under a privacy constraint.
In this setting a learner observes sequentially a stream of labelled examples $(x_t, y_t)$, for $1 \leq t \leq T$, and returns at each iteration a hypothesis $h_t$ which is used to predict the label of the next example $x_t$.
The learner's performance is measured by her regret against a known hypothesis class $\mathcal{H}$.
arXiv Detail & Related papers (2021-06-25T09:08:33Z) - Online Apprenticeship Learning [58.45089581278177]
In Apprenticeship Learning (AL), we are given a Markov Decision Process (MDP) without access to the cost function.
The goal is to find a policy that matches the expert's performance on some predefined set of cost functions.
We show that the OAL problem can be effectively solved by combining two mirror-descent-based no-regret algorithms.
arXiv Detail & Related papers (2021-02-13T12:57:51Z) - Private Learning of Halfspaces: Simplifying the Construction and Reducing the Sample Complexity [63.29100726064574]
We present a differentially private learner for halfspaces over a finite grid $G$ in $\mathbb{R}^d$ with sample complexity $\approx d^{2.5}\cdot 2^{\log^*|G|}$.
The building block for our learner is a new differentially private algorithm for approximately solving the linear feasibility problem.
arXiv Detail & Related papers (2020-04-16T16:12:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.