On Computable Online Learning
- URL: http://arxiv.org/abs/2302.04357v1
- Date: Wed, 8 Feb 2023 22:09:06 GMT
- Title: On Computable Online Learning
- Authors: Niki Hasrati and Shai Ben-David
- Abstract summary: We show that the Littlestone dimension no longer characterizes the optimal mistake bound of c-online learning.
We show that c-online learning is as difficult as improper CPAC learning.
- Score: 8.655294504286635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We initiate a study of computable online (c-online) learning, which we
analyze under varying requirements for "optimality" in terms of the mistake
bound. Our main contribution is to give a necessary and sufficient condition
for optimal c-online learning and show that the Littlestone dimension no longer
characterizes the optimal mistake bound of c-online learning. Furthermore, we
introduce anytime optimal (a-optimal) online learning, a more natural
conceptualization of "optimality" and a generalization of Littlestone's
Standard Optimal Algorithm. We show the existence of a computational separation
between a-optimal and optimal online learning, proving that a-optimal online
learning is computationally more difficult. Finally, we consider online
learning with no requirements for optimality, and show, under a weaker notion
of computability, that the finiteness of the Littlestone dimension no longer
characterizes whether a class is c-online learnable with finite mistake bound.
A potential avenue for strengthening this result is suggested by exploring the
relationship between c-online and CPAC learning, where we show that c-online
learning is as difficult as improper CPAC learning.
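For readers unfamiliar with Littlestone's Standard Optimal Algorithm (SOA), here is a minimal sketch for a *finite* binary class over a finite domain, where every step is trivially computable; the paper's point is precisely that for infinite classes this recipe need not be computable. All names below are illustrative, not the paper's code.

```python
def ldim(hypotheses, domain):
    """Littlestone dimension of a finite binary class, by the standard
    recursion: Ldim(H) = max over x of 1 + min(Ldim(H_{x=0}), Ldim(H_{x=1})),
    the max taken over points x on which both labels are realized."""
    if len(hypotheses) <= 1:
        return 0
    best = 0
    for x in domain:
        h0 = [h for h in hypotheses if h[x] == 0]
        h1 = [h for h in hypotheses if h[x] == 1]
        if h0 and h1:
            best = max(best, 1 + min(ldim(h0, domain), ldim(h1, domain)))
    return best

def soa_round(version_space, domain, x, true_label):
    """One round of SOA: predict the label whose restricted version space
    has the larger Littlestone dimension, then shrink the version space.
    SOA makes at most Ldim(H) mistakes on any realizable sequence."""
    h0 = [h for h in version_space if h[x] == 0]
    h1 = [h for h in version_space if h[x] == 1]
    prediction = 1 if ldim(h1, domain) >= ldim(h0, domain) else 0
    mistake = prediction != true_label
    return mistake, (h1 if true_label == 1 else h0)

# Toy usage: the 5 threshold functions on a 4-point domain.
domain = [0, 1, 2, 3]
H = [{x: int(x >= t) for x in domain} for t in range(5)]
print(ldim(H, domain))  # -> 2
```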
Related papers
- Learning-Augmented Algorithms for Online Concave Packing and Convex Covering Problems [4.9826534303287335]
We present learning-augmented algorithmic frameworks for two fundamental optimization settings.
For online packing with concave objectives, we present a simple but overarching strategy that switches between the advice and the state-of-the-art online algorithm.
We show that our algorithms break impossibility results when the advice is accurate, while maintaining comparable performance with state-of-the-art classical online algorithms even when the advice is erroneous.
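As a rough illustration of the switching idea (not the paper's actual scheme), one generic rule of this flavor is sketched below; the rule, the tolerance parameter, and the assumption that both losses are observable are all hypothetical.

```python
def switch_between(advice, online_alg, losses, tolerance=2.0):
    """Hypothetical switch rule: follow the machine-learned advice while its
    running loss stays within a constant factor of the classical online
    algorithm's; otherwise fall back to the online algorithm. This toy model
    assumes both counterfactual losses can be tracked each round."""
    advice_loss = online_loss = 0.0
    for t, loss_of in enumerate(losses):  # loss_of: action -> loss at round t
        a, o = advice(t), online_alg(t)
        advice_loss += loss_of(a)
        online_loss += loss_of(o)
        yield a if advice_loss <= tolerance * online_loss else o
```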
arXiv Detail & Related papers (2024-11-13T04:27:25Z) - Learning to Cover: Online Learning and Optimization with Irreversible Decisions [50.5775508521174]
We find that regret grows sub-linearly at a rate $\Theta\left(m^{\frac{1}{2}} \cdot \frac{1}{1-2^{-T}}\right)$, thus converging exponentially fast to $\Theta(\sqrt{m})$.
These findings underscore the benefits of limited online learning and optimization, in that even a few rounds can provide significant benefits as compared to a no-learning baseline.
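Read literally, the horizon-dependent factor in this rate approaches $1$ exponentially fast in $T$, which is the sense in which the regret converges exponentially to $\Theta(\sqrt{m})$:

```latex
\[
  \frac{1}{1 - 2^{-T}} \;=\; 1 + \frac{2^{-T}}{1 - 2^{-T}} \;=\; 1 + O\!\left(2^{-T}\right),
  \qquad\text{so}\qquad
  \Theta\!\left(m^{\frac{1}{2}} \cdot \frac{1}{1 - 2^{-T}}\right)
  \;\longrightarrow\; \Theta\!\left(\sqrt{m}\right)
  \quad\text{as } T \to \infty .
\]
```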
arXiv Detail & Related papers (2024-06-20T23:00:25Z) - A Simple Learning-Augmented Algorithm for Online Packing with Concave Objectives [4.9826534303287335]
We introduce and analyze a simple learning-augmented algorithm for online packing problems with linear constraints and concave objectives.
We further raise the problem of understanding necessary and sufficient conditions for when such simple black-box solutions may be optimal.
arXiv Detail & Related papers (2024-06-05T18:39:28Z) - Discounted Adaptive Online Learning: Towards Better Regularization [5.5899168074961265]
We study online learning in adversarial nonstationary environments.
We propose an adaptive (i.e., instance optimal) algorithm that improves the widespread non-adaptive baseline.
We also consider the (Gibbs and Candes, 2021)-style online conformal prediction problem.
arXiv Detail & Related papers (2024-02-05T04:29:39Z) - Efficient Methods for Non-stationary Online Learning [67.3300478545554]
We present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from $\mathcal{O}(\log T)$ to $1$.
Our technique hinges on the reduction mechanism developed in parameter-free online learning and requires non-trivial twists on non-stationary online methods.
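For context, vanilla projected online gradient descent already spends exactly one projection per round; the difficulty is that standard non-stationary methods maintain $\mathcal{O}(\log T)$ base learners and hence $\mathcal{O}(\log T)$ projections. A minimal sketch of the one-projection-per-round baseline (illustrative only, not the paper's reduction):

```python
import numpy as np

def projected_ogd(grad, project, x0, eta, T):
    """Projected online gradient descent: one projection per round.
    `grad(t, x)` returns the loss gradient revealed at round t; `project`
    maps a point back into the feasible set. Illustrative names only."""
    x = np.asarray(x0, dtype=float)
    for t in range(T):
        x = project(x - eta * grad(t, x))  # the single projection this round
        yield x

# Example feasible set: the Euclidean unit ball.
project_ball = lambda v: v / max(1.0, np.linalg.norm(v))
```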
arXiv Detail & Related papers (2023-09-16T07:30:12Z) - Optimal Learners for Realizable Regression: PAC Learning and Online Learning [52.37726841759983]
We aim to characterize the statistical complexity of realizable regression both in the PAC learning setting and the online learning setting.
We first introduce a minimax instance optimal learner for realizable regression and propose a novel dimension that both qualitatively and quantitatively characterizes which classes of real-valued predictors are learnable.
In the context of online learning, we provide a dimension that characterizes the minimax instance optimal cumulative loss up to a constant factor and design an optimal online learner for realizable regression.
arXiv Detail & Related papers (2023-07-07T21:39:25Z) - Understanding Curriculum Learning in Policy Optimization for Online Combinatorial Optimization [66.35750142827898]
This paper presents the first systematic study on policy optimization methods for online CO problems.
We show that online CO problems can be naturally formulated as latent Markov Decision Processes (LMDPs), and prove convergence bounds for natural policy gradient (NPG).
Furthermore, our theory explains the benefit of curriculum learning: it can find a strong sampling policy and reduce the distribution shift.
arXiv Detail & Related papers (2022-02-11T03:17:15Z) - Adversaries in Online Learning Revisited: with applications in Robust Optimization and Adversarial training [55.30970087795483]
We revisit the concept of "adversary" in online learning, motivated by solving robust optimization and adversarial training problems.
We establish a general approach for a large variety of problem classes using imaginary play.
arXiv Detail & Related papers (2021-01-27T14:23:06Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested on four types of problems: compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
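A schematic of such a surrogate-in-the-loop setup (function names and the loop structure are hypothetical, not the paper's implementation):

```python
def self_directed_loop(fem_evaluate, fit_dnn, propose, init_designs, n_iters):
    """Hypothetical sketch: a DNN surrogate of the expensive FEM objective is
    refit on all evaluations gathered so far, then used to propose the next
    designs, so most of the search runs on the cheap surrogate, not on FEM."""
    data = [(d, fem_evaluate(d)) for d in init_designs]  # costly FEM calls
    for _ in range(n_iters):
        surrogate = fit_dnn(data)         # learn design -> objective value
        for d in propose(surrogate):      # search on the surrogate
            data.append((d, fem_evaluate(d)))
    return min(data, key=lambda pair: pair[1])  # best design found
```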
arXiv Detail & Related papers (2020-02-04T20:00:28Z)