Recursively Enumerably Representable Classes and Computable Versions of the Fundamental Theorem of Statistical Learning
- URL: http://arxiv.org/abs/2511.02644v1
- Date: Tue, 04 Nov 2025 15:12:38 GMT
- Title: Recursively Enumerably Representable Classes and Computable Versions of the Fundamental Theorem of Statistical Learning
- Authors: David Kattermann, Lothar Sebastian Krapp
- Abstract summary: We study computable probably approximately correct (CPAC) learning, where learners are required to be computable functions.
Recent works recovered analogs of the Fundamental Theorem in the computable setting.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We study computable probably approximately correct (CPAC) learning, where learners are required to be computable functions. It had been previously observed that the Fundamental Theorem of Statistical Learning, which characterizes PAC learnability by finiteness of the Vapnik-Chervonenkis (VC-)dimension, no longer holds in this framework. Recent works recovered analogs of the Fundamental Theorem in the computable setting, for instance by introducing an effective VC-dimension. Guided by this, we investigate the connection between CPAC learning and recursively enumerably representable (RER) classes, whose members can be algorithmically listed. Our results show that the effective VC-dimensions can take arbitrary values above the traditional one, even for RER classes, which creates a whole family of (non-)examples for various notions of CPAC learning. Yet the two dimensions coincide for classes satisfying sufficiently strong notions of CPAC learning. We then observe that CPAC learnability can also be characterized via containment of RER classes that realize the same samples. Furthermore, it is shown that CPAC learnable classes satisfying a unique identification property are necessarily RER. Finally, we establish that agnostic learnability can be guaranteed for RER classes, by considering the relaxed notion of nonuniform CPAC learning.
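The classical VC-dimension referenced throughout the abstract can be computed by brute force for small finite classes over a finite domain. A minimal Python sketch (illustrative only; the function names and the threshold class are our own, not the paper's):

```python
from itertools import combinations

def shatters(hypotheses, sample):
    """Check whether the class realizes every labeling of `sample`."""
    realized = {tuple(h(x) for x in sample) for h in hypotheses}
    return len(realized) == 2 ** len(sample)

def vc_dimension(hypotheses, domain):
    """Largest d such that some d-point subset of `domain` is shattered."""
    d = 0
    for size in range(1, len(domain) + 1):
        if any(shatters(hypotheses, s) for s in combinations(domain, size)):
            d = size
        else:
            break
    return d

# Threshold classifiers on {0, 1, 2, 3}: h_t(x) = 1 iff x >= t.
domain = [0, 1, 2, 3]
thresholds = [lambda x, t=t: int(x >= t) for t in range(5)]
print(vc_dimension(thresholds, domain))  # thresholds have VC dimension 1
```

Note that monotone thresholds can realize labelings (0,...) and (1,...) on a single point, but never (1, 0) on an ordered pair, hence dimension 1.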
Related papers
- Learning Conditional Averages [52.361762722359366]
We introduce the problem of learning conditional averages in the PAC framework.
Instead of learning the target concept itself, the goal is to predict, for each instance, the average label over its neighborhood.
More generally, it extends PAC learning to a setting that captures learning tasks arising in several domains.
arXiv Detail & Related papers (2026-02-12T13:20:29Z)
- Sample Complexity of Agnostic Multiclass Classification: Natarajan Dimension Strikes Back [84.81507553557556]
We show that multiclass PAC sample complexity is governed by two distinct dimensions.
Unlike binary or online classification, multiclass learning inherently involves two structural parameters.
A key ingredient is a novel online procedure based on a self-adaptive multiplicative-weights algorithm performing a label-space reduction.
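The generic multiplicative-weights update underlying procedures like the one above can be sketched as follows (a textbook version over a finite set of experts; the paper's self-adaptive, label-space-reducing variant is more involved, and the names here are ours):

```python
import numpy as np

def multiplicative_weights(losses, eta=0.5):
    """Generic multiplicative-weights update over a finite set of experts.

    losses: (T, n) array with losses[t, i] in [0, 1] for expert i at round t.
    Returns the final normalized weight vector."""
    T, n = losses.shape
    w = np.ones(n)
    for t in range(T):
        w *= np.exp(-eta * losses[t])  # downweight experts that incurred loss
        w /= w.sum()                   # renormalize to a distribution
    return w

# Three candidate labels as experts; label 0 is consistently best.
losses = np.array([[0.0, 1.0, 1.0]] * 10)
w = multiplicative_weights(losses)
print(np.argmax(w))  # 0: the mass concentrates on the low-loss label
```

The exponential reweighting guarantees regret logarithmic in the number of experts, which is what makes such updates attractive for shrinking a large label space.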
arXiv Detail & Related papers (2025-11-16T15:47:47Z)
- A packing lemma for VCN${}_k$-dimension and learning high-dimensional data [1.6114012813668932]
We prove that non-agnostic high-arity PAC learnability implies a high-arity version of the Haussler packing property.
This is done by obtaining direct proofs that classic PAC learnability implies the classic Haussler packing property.
arXiv Detail & Related papers (2025-05-21T16:03:12Z)
- Towards Efficient Contrastive PAC Learning [6.209600119671225]
We study contrastive learning under the PAC learning framework.
In this paper, we consider contrastive learning of the fundamental concept of linear representations.
We establish guarantees based on Rademacher complexity, and connect it to PAC guarantees under certain contrastive large-margin conditions.
arXiv Detail & Related papers (2025-02-21T21:51:01Z)
- On the Computability of Multiclass PAC Learning [9.507869508188266]
In the recently introduced computable PAC (CPAC) learning framework, both learners and the functions they output are required to be computable.
We show that the corresponding computable dimensions for distinguishers characterize CPAC learning.
arXiv Detail & Related papers (2025-02-10T01:07:23Z)
- Computing the Vapnik Chervonenkis Dimension for Non-Discrete Settings [0.0]
We develop a method to approximately compute the VC dimension without constraints on the concept classes or their domain set.
Our approach is based on our finding that the Empirical Risk Minimization (ERM) learning paradigm can be used as a new tool to characterize the shattering property of a concept class.
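The ERM-based view of shattering described above can be made concrete: a sample is shattered exactly when, for every labeling of it, empirical risk minimization attains zero risk. A toy Python illustration (the interval class and function names are our own example, not the paper's method):

```python
from itertools import product

def erm_risk(hypotheses, sample, labels):
    """Minimum empirical 0-1 risk over the class (the ERM value)."""
    return min(
        sum(h(x) != y for x, y in zip(sample, labels)) / len(sample)
        for h in hypotheses
    )

def shattered_via_erm(hypotheses, sample):
    """A sample is shattered iff ERM attains zero risk on every labeling."""
    return all(
        erm_risk(hypotheses, sample, labels) == 0
        for labels in product([0, 1], repeat=len(sample))
    )

# Indicator functions of integer intervals [a, b] within {0, ..., 3}.
intervals = [lambda x, a=a, b=b: int(a <= x <= b)
             for a in range(4) for b in range(a, 4)]
print(shattered_via_erm(intervals, [1, 2]))    # True: 2 points are shattered
print(shattered_via_erm(intervals, [0, 1, 2]))  # False: (1, 0, 1) unrealizable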
arXiv Detail & Related papers (2023-08-19T14:57:24Z)
- Optimal Learners for Realizable Regression: PAC Learning and Online Learning [52.37726841759983]
We aim to characterize the statistical complexity of realizable regression both in the PAC learning setting and the online learning setting.
We first introduce a minimax instance optimal learner for realizable regression and propose a novel dimension that both qualitatively and quantitatively characterizes which classes of real-valued predictors are learnable.
In the context of online learning we provide a dimension that characterizes the minimax instance optimal cumulative loss up to a constant factor and design an optimal online learner for realizable regression.
arXiv Detail & Related papers (2023-07-07T21:39:25Z)
- Multiclass Boosting: Simple and Intuitive Weak Learning Criteria [72.71096438538254]
We give a simple and efficient boosting algorithm, that does not require realizability assumptions.
We present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning.
arXiv Detail & Related papers (2023-07-02T19:26:58Z)
- Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP [81.00800920928621]
We study representation learning in partially observable Markov Decision Processes (POMDPs).
We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU).
We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs.
arXiv Detail & Related papers (2023-06-21T16:04:03Z)
- Using Representation Expressiveness and Learnability to Evaluate Self-Supervised Learning Methods [61.49061000562676]
We introduce Cluster Learnability (CL) to assess learnability.
CL is measured in terms of the performance of a KNN trained to predict labels obtained by clustering the representations with K-means.
We find that CL better correlates with in-distribution model performance than other competing recent evaluation schemes.
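The CL metric can be sketched as leave-one-out 1-NN accuracy at predicting k-means labels (a simplified stand-in for the paper's KNN protocol; all names and the toy data are ours):

```python
import numpy as np

def kmeans_labels(X, k, iters=20, seed=0):
    """Plain k-means: return one cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def cluster_learnability(X, k=2):
    """Leave-one-out 1-NN accuracy at predicting the k-means labels."""
    labels = kmeans_labels(X, k)
    dist = ((X[:, None] - X[None]) ** 2).sum(-1)
    np.fill_diagonal(dist, np.inf)        # exclude the point itself
    preds = labels[np.argmin(dist, axis=1)]
    return (preds == labels).mean()

# Two well-separated blobs: the k-means labels are easy for a 1-NN to recover.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
print(cluster_learnability(X, k=2))  # 1.0 for cleanly separated clusters
```

Representations whose cluster structure a simple neighbor rule can recover score high; diffuse or entangled representations score low.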
arXiv Detail & Related papers (2022-06-02T19:05:13Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
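The basic linear-architecture $Q$-learning update, $w \leftarrow w + \alpha\,(r + \gamma \max_b w^\top\phi(s',b) - w^\top\phi(s,a))\,\phi(s,a)$, can be illustrated on a toy chain MDP (our own example with one-hot features; not the paper's exploration variant):

```python
import numpy as np

# A 5-state chain: action 1 moves right, action 0 moves left; reward 1 at the end.
N_STATES, N_ACTIONS, GAMMA, ALPHA = 5, 2, 0.9, 0.1

def phi(s, a):
    """One-hot feature vector for the (state, action) pair."""
    f = np.zeros(N_STATES * N_ACTIONS)
    f[s * N_ACTIONS + a] = 1.0
    return f

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == N_STATES - 1)

def q_learning(episodes=500, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(N_STATES * N_ACTIONS)
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            a = int(rng.integers(N_ACTIONS))     # explore uniformly
            s2, r = step(s, a)
            q_next = max(w @ phi(s2, b) for b in range(N_ACTIONS))
            td = r + GAMMA * q_next - w @ phi(s, a)
            w += ALPHA * td * phi(s, a)          # linear TD update
            s = s2
    return w

w = q_learning()
greedy = [int(np.argmax([w @ phi(s, a) for a in range(2)])) for s in range(5)]
print(greedy)  # moving right is greedy in every state
```

With one-hot features this reduces to tabular $Q$-learning; richer feature maps are where the approximation-error issues studied in the paper arise.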
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- On computable learning of continuous features [2.278415626131568]
We introduce definitions of computable PAC learning for binary classification over computable metric spaces.
We also give a presentation of a hypothesis class that does not admit any proper computable PAC learner with a computable sample function.
arXiv Detail & Related papers (2021-11-24T02:28:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.