Computable universal online learning
- URL: http://arxiv.org/abs/2510.18352v1
- Date: Tue, 21 Oct 2025 07:21:32 GMT
- Title: Computable universal online learning
- Authors: Dariusz Kalociński, Tomasz Steifer
- Abstract summary: We show that universal online learning does not imply computable universal online learning. We also consider a variant of proper universal online learning and show exactly when it is possible.
- Score: 1.7961752949636303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding when learning is possible is a fundamental task in the theory of machine learning. However, many characterizations known from the literature deal with abstract learning as a mathematical object and ignore the crucial question: when can learning be implemented as a computer program? We address this question for universal online learning, a generalist theoretical model of online binary classification, recently characterized by Bousquet et al. (STOC'21). In this model, there is no hypothesis fixed in advance; instead, Adversary -- playing the role of Nature -- can change their mind as long as local consistency with the given class of hypotheses is maintained. We require Learner to achieve a finite number of mistakes while using a strategy that can be implemented as a computer program. We show that universal online learning does not imply computable universal online learning, even if the class of hypotheses is relatively easy from a computability-theoretic perspective. We then study the agnostic variant of computable universal online learning and provide an exact characterization of classes that are learnable in this sense. We also consider a variant of proper universal online learning and show exactly when it is possible. Together, our results give a more realistic perspective on the existing theory of online binary classification and the related problem of inductive inference.
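The abstract describes an online protocol in which Learner predicts labels one point at a time, Adversary reveals the true label (staying consistent with the hypothesis class), and Learner must make only finitely many mistakes. A minimal sketch of this realizable online loop is below, using a toy consistent-hypothesis learner over a finite class; this is an illustrative assumption, not the paper's construction, which concerns infinite classes and computability of the strategy.

```python
# Toy sketch of realizable online learning with a mistake bound.
# Hypothetical example: a finite hypothesis class of threshold
# functions; the paper's universal setting is far more general.

def online_learn(hypotheses, stream):
    """Predict labels online, pruning hypotheses inconsistent so far.

    hypotheses: list of functions h(x) -> 0/1
    stream: iterable of (x, true_label) pairs
    Returns the total number of mistakes made.
    """
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        # Predict with the first hypothesis still consistent with history.
        prediction = version_space[0](x)
        if prediction != y:
            mistakes += 1
        # Discard every hypothesis contradicted by the revealed label.
        version_space = [h for h in version_space if h(x) == y]
    return mistakes

# Thresholds on {0,...,4}; the adversary answers as if the target
# threshold were 3, so the run is realizable.
hs = [lambda x, t=t: int(x >= t) for t in range(5)]
stream = [(x, int(x >= 3)) for x in [4, 0, 2, 3, 1]]
print(online_learn(hs, stream))  # prints 2
```

Each mistake eliminates at least the hypothesis used for the wrong prediction, so this learner makes at most |H| - 1 mistakes on any realizable stream. The paper asks when such a finite-mistake strategy can additionally be implemented as a computer program.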
Related papers
- AI Agents as Universal Task Solvers [94.49762121230042]
We show that the optimal speed-up that a universal solver can achieve using past data is tightly related to its algorithmic information. We argue that the key quantity to optimize when scaling reasoning models is time, whose critical role in learning has so far been considered only indirectly.
arXiv Detail & Related papers (2025-10-14T02:17:54Z)
- A Theory of Optimistically Universal Online Learnability for General Concept Classes [15.830099254570955]
We provide a full characterization of the concept classes that are optimistically universally online learnable with $0, 1$ labels. The notion of optimistically universal online learning was defined in [Hanneke, 2021] in order to understand learnability under minimal assumptions.
arXiv Detail & Related papers (2025-01-15T03:20:16Z)
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need of external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open to novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Learning principle and mathematical realization of the learning mechanism in the brain [0.0]
We call this the learning principle; it follows that all learning is equivalent to estimating the probability of the input data.
We show that conventional supervised learning is equivalent to estimating conditional probabilities, and we use this to make supervised learning more effective and more general.
We propose a new method of defining the values of the estimated probability using differentiation, and show that unsupervised learning can be performed on an arbitrary dataset without any prior knowledge.
arXiv Detail & Related papers (2023-11-22T12:08:01Z)
- Ticketed Learning-Unlearning Schemes [57.89421552780526]
We propose a new ticketed model for learning--unlearning.
We provide space-efficient ticketed learning--unlearning schemes for a broad family of concept classes.
arXiv Detail & Related papers (2023-06-27T18:54:40Z)
- Learnability with Time-Sharing Computational Resource Concerns [65.268245109828]
We present a theoretical framework that takes into account the influence of computational resources in learning theory.
This framework can be naturally applied to stream learning where the incoming data streams can be potentially endless.
It may also provide a theoretical perspective for the design of intelligent supercomputing operating systems.
arXiv Detail & Related papers (2023-05-03T15:54:23Z)
- The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning [80.1018596899899]
We argue that neural network models share a preference for low-complexity data, formalized using Kolmogorov complexity.
Our experiments show that pre-trained and even randomly initialized language models prefer to generate low-complexity sequences.
These observations justify the trend in deep learning of unifying seemingly disparate problems with an increasingly small set of machine learning models.
arXiv Detail & Related papers (2023-04-11T17:22:22Z)
- Universal Online Learning: an Optimistically Universal Learning Rule [0.0]
We study the subject of universal online learning with non-i.i.d. processes for bounded losses.
We show that the k-nearest neighbor algorithm (kNN) is not optimistically universal and present a novel variant of 1NN.
arXiv Detail & Related papers (2022-01-16T02:13:47Z)
- From Undecidability of Non-Triviality and Finiteness to Undecidability of Learnability [0.0]
We show that there is no general-purpose procedure for rigorously evaluating whether newly proposed models indeed successfully learn from data.
For PAC binary classification, uniform and universal online learning, and exact learning through teacher-learner interactions, learnability is in general undecidable.
There is no one-size-fits-all algorithm for deciding whether a machine learning model can be successful.
arXiv Detail & Related papers (2021-06-02T18:00:04Z)
- Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need in MOOC Forums [58.221459787471254]
Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility.
Due to large numbers of learners and their diverse backgrounds, it is taxing to offer real-time support.
With the large volume of posts and high workloads for MOOC instructors, it is unlikely that the instructors can identify all learners requiring intervention.
This paper is the first to explore Bayesian deep learning on learner text posts, using two methods: Monte Carlo Dropout and Variational Inference.
arXiv Detail & Related papers (2021-04-26T15:12:13Z)
- A Theory of Universal Learning [26.51949485387526]
We show that there are only three possible rates of universal learning.
We show that the learning curve of any given concept class decays at an exponential, a linear, or an arbitrarily slow rate.
arXiv Detail & Related papers (2020-11-09T15:10:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.