Optimal Prediction Using Expert Advice and Randomized Littlestone
Dimension
- URL: http://arxiv.org/abs/2302.13849v3
- Date: Thu, 17 Aug 2023 18:35:27 GMT
- Title: Optimal Prediction Using Expert Advice and Randomized Littlestone
Dimension
- Authors: Yuval Filmus, Steve Hanneke, Idan Mehalel and Shay Moran
- Abstract summary: A classical result characterizes the optimal mistake bound achievable by deterministic learners using the Littlestone dimension.
We show that the optimal expected mistake bound in learning a class $\mathcal{H}$ equals its randomized Littlestone dimension.
- Score: 32.29254118429081
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A classical result in online learning characterizes the optimal mistake bound
achievable by deterministic learners using the Littlestone dimension
(Littlestone '88). We prove an analogous result for randomized learners: we
show that the optimal expected mistake bound in learning a class $\mathcal{H}$
equals its randomized Littlestone dimension, which is the largest $d$ for which
there exists a tree shattered by $\mathcal{H}$ whose average depth is $2d$. We
further study optimal mistake bounds in the agnostic case, as a function of the
number of mistakes made by the best function in $\mathcal{H}$, denoted by $k$.
We show that the optimal randomized mistake bound for learning a class with
Littlestone dimension $d$ is $k + \Theta (\sqrt{k d} + d )$. This also implies
an optimal deterministic mistake bound of $2k + \Theta(d) + O(\sqrt{k d})$,
thus resolving an open question which was studied by Auer and Long ['99].
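As a concrete illustration of these quantities (a minimal brute-force sketch, not code from the paper), the following computes both the Littlestone dimension and the randomized Littlestone dimension of a small finite class, under the assumption that the average depth of a shattered tree is measured along a uniformly random root-to-leaf path:

```python
# Brute-force sketch (illustration only, not from the paper): compute the
# Littlestone dimension and the randomized Littlestone dimension of a finite
# class. Hypotheses are 0/1 label tuples over the domain {0, ..., n-1}; the
# "average depth" of a shattered tree is assumed to be the expected depth of
# a uniformly random root-to-leaf path.
from functools import lru_cache

def littlestone_dims(hypotheses):
    """Return (Littlestone dimension, randomized Littlestone dimension)."""
    H = frozenset(hypotheses)
    n = len(next(iter(H))) if H else 0

    @lru_cache(maxsize=None)
    def depths(C):
        # Returns (max depth of a complete shattered tree,
        #          max average depth of a shattered tree) for the subclass C.
        if len(C) <= 1:
            return 0, 0.0
        best_det, best_avg = 0, 0.0
        for x in range(n):  # try every point as the root of the tree
            C0 = frozenset(h for h in C if h[x] == 0)
            C1 = frozenset(h for h in C if h[x] == 1)
            if C0 and C1:  # x splits C, so a shattered tree can branch here
                d0, a0 = depths(C0)
                d1, a1 = depths(C1)
                best_det = max(best_det, 1 + min(d0, d1))
                best_avg = max(best_avg, 1 + (a0 + a1) / 2)
        return best_det, best_avg

    d, avg = depths(H)
    return d, avg / 2  # randomized dimension d <=> max average depth is 2d

# Example: singletons over 4 points have Littlestone dimension 1
# but a strictly smaller randomized Littlestone dimension.
singletons = [tuple(int(i == j) for j in range(4)) for i in range(4)]
print(littlestone_dims(singletons))  # (1, 0.875)
```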
As an application of our theory, we revisit the classical problem of
prediction using expert advice: about 30 years ago Cesa-Bianchi, Freund,
Haussler, Helmbold, Schapire and Warmuth studied prediction using expert
advice, provided that the best among the $n$ experts makes at most $k$
mistakes, and asked what the optimal mistake bounds are. Cesa-Bianchi, Freund,
Helmbold, and Warmuth ['93, '96] provided a nearly optimal bound for
deterministic learners, and left the randomized case as an open problem. We
resolve this question by providing an optimal learning rule in the randomized
case, and showing that its expected mistake bound equals half of the
deterministic bound of Cesa-Bianchi et al. ['93,'96], up to negligible additive
terms. In contrast with previous works by Abernethy, Langford, and Warmuth
['06], and by Br\^anzei and Peres ['19], our result applies to all pairs $n,k$.
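To make the expert-advice setting concrete, here is a sketch of the classical randomized weighted-majority baseline of Littlestone and Warmuth; it is not the optimal learning rule constructed in the paper, but it illustrates the randomized protocol the paper analyzes:

```python
# Classical randomized weighted-majority baseline (Littlestone-Warmuth), shown
# only to make the expert-advice setting concrete; it is NOT the optimal
# randomized rule constructed in the paper.
import random

def randomized_weighted_majority(expert_preds, outcomes, beta=0.5, seed=0):
    """expert_preds[t][i] is expert i's predicted bit in round t; outcomes[t] is the truth.

    Returns the number of mistakes made by the randomized learner.
    """
    rng = random.Random(seed)
    n = len(expert_preds[0])
    weights = [1.0] * n
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        # Predict by sampling an expert with probability proportional to its weight.
        i = rng.choices(range(n), weights=weights, k=1)[0]
        mistakes += int(preds[i] != y)
        # Multiplicatively penalize every expert that erred this round.
        weights = [w * (beta if p != y else 1.0) for w, p in zip(weights, preds)]
    return mistakes
```

With $\beta$ tuned using $k$ and $n$, this baseline already guarantees an expected mistake bound of roughly $k + \ln n + O(\sqrt{k \ln n})$; the paper pins down the exact optimal randomized bound for all pairs $n, k$.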
Related papers
- Deterministic Apple Tasting [2.4554686192257424]
We provide the first widely-applicable deterministic apple tasting learner.
We prove a trichotomy stating that every class $\mathcal{H}$ must be either easy, hard, or unlearnable.
Our upper bound is based on a deterministic algorithm for learning from expert advice with apple tasting feedback.
arXiv Detail & Related papers (2024-10-14T11:54:46Z)
- Bandit-Feedback Online Multiclass Classification: Variants and Tradeoffs [32.29254118429081]
We show that the optimal mistake bound under bandit feedback is at most $O(k)$ times higher than the optimal mistake bound in the full information case.
We also present nearly optimal bounds of $\tilde{\Theta}(k)$ on the gap between randomized and deterministic learners.
arXiv Detail & Related papers (2024-02-12T07:20:05Z)
- Streaming Algorithms for Learning with Experts: Deterministic Versus Robust [62.98860182111096]
In the online learning with experts problem, an algorithm must make a prediction about an outcome on each of $T$ days (or times).
The goal is to make a prediction with the minimum cost, specifically compared to the best expert in the set.
We show a space lower bound of $\widetilde{\Omega}\left(\frac{nM}{RT}\right)$ for any deterministic algorithm that achieves regret $R$ when the best expert makes $M$ mistakes.
arXiv Detail & Related papers (2023-03-03T04:39:53Z)
- Fast Rates for Nonparametric Online Learning: From Realizability to Learning in Games [36.969021834291745]
We propose a proper learning algorithm which gets a near-optimal mistake bound in terms of the sequential fat-shattering dimension of the hypothesis class.
This result answers a question as to whether proper learners could achieve near-optimal mistake bounds.
For the real-valued (regression) setting, the optimal mistake bound was not even known for improper learners.
arXiv Detail & Related papers (2021-11-17T05:24:21Z)
- Locality defeats the curse of dimensionality in convolutional teacher-student scenarios [69.2027612631023]
We show that locality is key in determining the learning curve exponent $\beta$.
We conclude by proving, using a natural assumption, that performing kernel regression with a ridge that decreases with the size of the training set leads to similar learning curve exponents to those we obtain in the ridgeless case.
arXiv Detail & Related papers (2021-06-16T08:27:31Z)
- Provably Breaking the Quadratic Error Compounding Barrier in Imitation Learning, Optimally [58.463668865380946]
We study the statistical limits of Imitation Learning in episodic Markov Decision Processes (MDPs) with a state space $\mathcal{S}$.
We establish an upper bound of $O(|\mathcal{S}| H^{3/2} / N)$ on the suboptimality of the Mimic-MD algorithm of Rajaraman et al. (2020).
We show that the minimax suboptimality grows as $\Omega(H^{3/2}/N)$ when $|\mathcal{S}| \geq 3$, while the unknown-transition setting suffers from a larger sharp rate.
arXiv Detail & Related papers (2021-02-25T15:50:19Z)
- Online Learning with Simple Predictors and a Combinatorial Characterization of Minimax in 0/1 Games [38.15628332832227]
We show how to always achieve nearly optimal mistake/regret bounds using "simple" predictors.
A technical ingredient of our proof is a generalization of the celebrated Minimax Theorem for binary zero-sum games.
arXiv Detail & Related papers (2021-02-02T18:02:01Z)
- Hardness of Learning Halfspaces with Massart Noise [56.98280399449707]
We study the complexity of PAC learning halfspaces in the presence of Massart (bounded) noise.
We show that there is an exponential gap between the information-theoretically optimal error and the best error that can be achieved by an SQ algorithm.
arXiv Detail & Related papers (2020-12-17T16:43:11Z)
- Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of a matroid $M$ such that $\det\left(\sum_{i \in S} v_i v_i^\top\right)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
arXiv Detail & Related papers (2020-04-16T19:16:38Z)