A Theory of Optimistically Universal Online Learnability for General Concept Classes
- URL: http://arxiv.org/abs/2501.08551v1
- Date: Wed, 15 Jan 2025 03:20:16 GMT
- Title: A Theory of Optimistically Universal Online Learnability for General Concept Classes
- Authors: Steve Hanneke, Hongao Wang
- Abstract summary: We provide a full characterization of the concept classes that are optimistically universally online learnable with $\{0, 1\}$ labels.
The notion of optimistically universal online learning was defined in [Hanneke, 2021] in order to understand learnability under minimal assumptions.
- Score: 15.830099254570955
- Abstract: We provide a full characterization of the concept classes that are optimistically universally online learnable with $\{0, 1\}$ labels. The notion of optimistically universal online learning was defined in [Hanneke, 2021] in order to understand learnability under minimal assumptions. In this paper, following the philosophy behind that work, we investigate two questions, namely, for every concept class: (1) What are the minimal assumptions on the data process admitting online learnability? (2) Is there a learning algorithm which succeeds under every data process satisfying the minimal assumptions? Such an algorithm is said to be optimistically universal for the given concept class. We resolve both of these questions for all concept classes, and moreover, as part of our solution, we design general learning algorithms for each case. Finally, we extend these algorithms and results to the agnostic case, showing an equivalence between the minimal assumptions on the data process for learnability in the agnostic and realizable cases, for every concept class, as well as the equivalence of optimistically universal learnability.
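For orientation, here is a minimal sketch of the criterion behind question (1), in our own notation rather than the paper's: following [Hanneke, 2021], a process $\mathbb{X} = (X_t)_{t \ge 1}$ admits universal online learning of a concept class $\mathcal{C}$ if some online learner $\hat{f}_t$ satisfies, for every target $f^\star \in \mathcal{C}$, $$\lim_{n \to \infty} \frac{1}{n} \sum_{t=1}^{n} \mathbb{1}\left[\hat{f}_t(X_t) \ne f^\star(X_t)\right] = 0 \quad \text{almost surely};$$ a single learner achieving this under every process admitting learnability of $\mathcal{C}$ is said to be optimistically universal for $\mathcal{C}$.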
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback--where only positive examples are observed during training--is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z) - Coding for Intelligence from the Perspective of Category [66.14012258680992]
Coding targets compressing and reconstructing data, while intelligence is often regarded, at an abstract computational level, as centering on model learning and prediction.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the category theory view.
arXiv Detail & Related papers (2024-07-01T07:05:44Z) - Ticketed Learning-Unlearning Schemes [57.89421552780526]
We propose a new ticketed model for learning--unlearning.
We provide space-efficient ticketed learning--unlearning schemes for a broad family of concept classes.
arXiv Detail & Related papers (2023-06-27T18:54:40Z) - Online Learning and Disambiguations of Partial Concept Classes [0.7264378254137809]
In a recent article, Alon, Hanneke, Holzman, and Moran introduced a unifying framework to study the learnability of classes of partial concepts.
A natural question is whether the learnability of a partial concept class is always inherited from some extension of it to a class of total concepts; they showed this is not the case for PAC learning but left the problem open for the stronger notion of online learnability.
We resolve this problem by constructing a class of partial concepts that is online learnable, but no extension of it to a class of total concepts is online learnable.
arXiv Detail & Related papers (2023-03-30T17:46:50Z) - Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training [65.10741459705739]
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, including content-level and structure-level, which generate literally diverse question pairs with similar purposes.
Then, to fully exploit hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware rank strategy.
arXiv Detail & Related papers (2023-01-18T14:23:29Z) - Bayesian Learning for Neural Networks: an algorithmic survey [95.42181254494287]
This self-contained survey engages and introduces readers to the principles and algorithms of Bayesian Learning for Neural Networks.
It provides an introduction to the topic from an accessible, practical-algorithmic perspective.
arXiv Detail & Related papers (2022-10-05T14:36:27Z) - Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes [31.2676304636432]
We study the problem of multiclass classification with a bounded number $k$ of different labels, in the realizable setting.
We extend the traditional PAC model to a) distribution-dependent learning rates, and b) learning rates under data-dependent assumptions.
arXiv Detail & Related papers (2022-10-05T14:36:27Z) - The no-free-lunch theorems of supervised learning [0.0]
The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification.
We argue that many standard learning algorithms should rather be understood as model-dependent.
arXiv Detail & Related papers (2022-02-09T15:24:30Z) - Universal Online Learning: an Optimistically Universal Learning Rule [0.0]
We study the subject of universal online learning with non-i.i.d. processes for bounded losses.
We show that the k-nearest neighbor algorithm (kNN) is not optimistically universal and present a novel variant of 1NN; a sketch of the classical online 1NN baseline appears after this list.
arXiv Detail & Related papers (2022-01-16T02:13:47Z) - Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z) - On Learnability under General Stochastic Processes [20.22409095000365]
Statistical learning under general non-i.i.d. stochastic processes is less mature than its classical i.i.d. counterpart.
We provide two natural notions of learnability of a function class under a general process.
Our results hold for both binary classification and regression.
arXiv Detail & Related papers (2020-05-15T15:49:23Z)
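As a concrete companion to the kNN entry above, the following is a hypothetical sketch of the classical online 1-nearest-neighbor rule, the baseline that the optimistically universal learning rule paper starts from; the class name `Online1NN`, the Euclidean distance, and the toy stream are our own illustration, not the paper's novel 1NN variant.

```python
import math

class Online1NN:
    """Classical online 1-nearest-neighbor rule: predict the label of the
    nearest previously seen instance, then store the revealed label."""

    def __init__(self):
        self.points = []  # instances observed so far
        self.labels = []  # their revealed {0, 1} labels

    def predict(self, x):
        if not self.points:
            return 0  # arbitrary default before any data is seen
        # Euclidean distance is an illustrative choice; the theory is
        # usually stated over general separable metric spaces.
        i = min(range(len(self.points)),
                key=lambda j: math.dist(x, self.points[j]))
        return self.labels[i]

    def update(self, x, y):
        self.points.append(x)
        self.labels.append(y)

# The standard online protocol: predict, observe the true label, update.
learner = Online1NN()
mistakes = 0
stream = [((0.0, 0.0), 0), ((1.0, 1.0), 1), ((0.1, 0.2), 0), ((0.9, 1.1), 1)]
for x, y in stream:
    mistakes += int(learner.predict(x) != y)
    learner.update(x, y)
print(f"mistakes: {mistakes} / {len(stream)}")
```

Universal consistency asks that the fraction of mistakes vanish as the stream grows; per the summary above, kNN fails this optimistically-universal standard, while a suitably modified 1NN rule is presented as achieving it.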