Computable learning of natural hypothesis classes
- URL: http://arxiv.org/abs/2407.16663v1
- Date: Tue, 23 Jul 2024 17:26:38 GMT
- Title: Computable learning of natural hypothesis classes
- Authors: Matthew Harrison-Trainor, Syed Akbari
- Abstract summary: Examples have recently been given of hypothesis classes which are PAC learnable but not computably PAC learnable.
We use the on-a-cone machinery from computability theory to prove that, under mild assumptions such as that the hypothesis class is computably listable, any natural hypothesis class which is learnable must be computably learnable.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper is about the recent notion of computably probably approximately correct learning, which lies between statistical learning theory, where there is no computational requirement on the learner, and efficient PAC learning, where the learner must be polynomially bounded. Examples have recently been given of hypothesis classes which are PAC learnable but not computably PAC learnable, but these hypothesis classes are unnatural or non-canonical in the sense that they depend on a numbering of proofs, formulas, or programs. We use the on-a-cone machinery from computability theory to prove that, under mild assumptions such as that the hypothesis class is computably listable, any natural hypothesis class which is learnable must be computably learnable. Thus the counterexamples given previously are necessarily unnatural.
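For reference, one standard textbook formulation of (realizable) PAC learnability, which computable PAC learning refines by additionally requiring the learner $A$ to be a computable function (the symbols below are the usual ones from learning theory, not notation taken from the paper itself):

```latex
% A hypothesis class H over a domain X is PAC learnable if there exist a
% learner A and a sample-complexity bound m(eps, delta) such that:
\forall \epsilon, \delta \in (0,1)\;
\exists\, m(\epsilon,\delta)\;
\forall D \text{ over } X\;
\forall h^{*} \in \mathcal{H}:\quad
\Pr_{S \sim D^{m}}\!\Big[\operatorname{err}_{D,h^{*}}\big(A(S)\big) \le \epsilon\Big]
\ge 1 - \delta
% where err_{D,h*}(h) = Pr_{x ~ D}[ h(x) != h*(x) ] is the true error of h.
```

Computable PAC learning keeps this condition unchanged and asks, in addition, that $A$ (and in some variants $m$) be computable.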
Related papers
- Credal Learning Theory [4.64390130376307]
We lay the foundations for a 'credal' theory of learning, using convex sets of probabilities to model the variability in the data-generating distribution.
Bounds are derived for the case of finite hypothesis spaces as well as infinite model spaces, which directly generalize classical results.
arXiv Detail & Related papers (2024-02-01T19:25:58Z)
- Optimal Multi-Distribution Learning [88.3008613028333]
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions.
We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^2$.
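The worst-case objective above can be made concrete with a toy sketch: brute-force minimax over a hypothetical space of threshold classifiers and two toy "distributions" (this is illustrative only and is not the paper's algorithm, which targets near-optimal sample complexity):

```python
# Toy multi-distribution learning: pick the hypothesis minimizing the
# WORST-CASE empirical 0-1 risk across k datasets (here k = 2).

def risk(threshold, data):
    """Empirical 0-1 risk of the rule 'predict 1 iff x >= threshold'."""
    return sum((x >= threshold) != y for x, y in data) / len(data)

def worst_case_risk(threshold, datasets):
    """The quantity multi-distribution learning minimizes: max risk over all datasets."""
    return max(risk(threshold, d) for d in datasets)

def minimax_threshold(datasets, candidates):
    """Brute-force minimax over a finite candidate set."""
    return min(candidates, key=lambda t: worst_case_risk(t, datasets))

# Two toy datasets whose labels flip at 0.5 and 0.6 respectively,
# so no threshold is perfect on both simultaneously.
d1 = [(x / 10, int(x / 10 >= 0.5)) for x in range(10)]
d2 = [(x / 10, int(x / 10 >= 0.6)) for x in range(10)]
best = minimax_threshold([d1, d2], [c / 10 for c in range(11)])
```

The minimax solution balances the two datasets rather than fitting either one alone, which is the defining feature of the worst-case objective.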
arXiv Detail & Related papers (2023-12-08T16:06:29Z) - Multiclass Boosting: Simple and Intuitive Weak Learning Criteria [72.71096438538254]
We give a simple and efficient boosting algorithm that does not require realizability assumptions.
We present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning.
arXiv Detail & Related papers (2023-07-02T19:26:58Z)
- Learnability, Sample Complexity, and Hypothesis Class Complexity for Regression Models [10.66048003460524]
This work is inspired by the foundation of PAC and is motivated by the existing regression learning issues.
The proposed approach, denoted $\epsilon$-Confidence Approximately Correct ($\epsilon$CoAC), utilizes the Kullback-Leibler divergence (relative entropy).
It enables the learner to compare hypothesis classes of different complexity orders and choose among them the one with the minimum $\epsilon$.
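Since the criterion above is built on Kullback-Leibler divergence, a minimal computation of that quantity may help; the function and toy distributions below are illustrative only and are not taken from the paper:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) = sum_i p_i * log(p_i / q_i) for discrete distributions.

    Assumes q_i > 0 wherever p_i > 0; terms with p_i == 0 contribute 0.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
same = kl_divergence(p, p)       # 0.0: KL is zero iff the distributions coincide
gap = kl_divergence(p, q)        # strictly positive when they differ
```

Note that KL divergence is asymmetric: in general `kl_divergence(p, q) != kl_divergence(q, p)`, which is why criteria built on it must fix the direction of comparison.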
arXiv Detail & Related papers (2023-03-28T15:59:12Z)
- Comparative Learning: A Sample Complexity Theory for Two Hypothesis Classes [5.194264506657145]
We introduce comparative learning as a combination of the realizable and agnostic settings in PAC learning.
Even when both $S$ and $B$ have infinite VC dimensions, comparative learning can still have a small sample complexity.
We show that the sample complexity of comparative learning is characterized by the mutual VC dimension $\mathsf{VC}(S,B)$.
arXiv Detail & Related papers (2022-11-16T18:38:24Z)
- Learning versus Refutation in Noninteractive Local Differential Privacy [133.80204506727526]
We study two basic statistical tasks in non-interactive local differential privacy (LDP): learning and refutation.
Our main result is a complete characterization of the sample complexity of PAC learning for non-interactive LDP protocols.
arXiv Detail & Related papers (2022-10-26T03:19:24Z)
- A Theory of PAC Learnability of Partial Concept Classes [30.772106555607458]
We extend the theory of PAC learning in a way that allows modeling a rich variety of learning tasks.
We characterize PAC learnability of partial concept classes and reveal an algorithmic landscape which is fundamentally different from the classical one.
arXiv Detail & Related papers (2021-07-18T13:29:26Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
- Empirically Verifying Hypotheses Using Reinforcement Learning [58.09414653169534]
This paper formulates hypothesis verification as an RL problem.
We aim to build an agent that, given a hypothesis about the dynamics of the world, can take actions to generate observations which can help predict whether the hypothesis is true or false.
arXiv Detail & Related papers (2020-06-29T01:01:10Z)
- L2R2: Leveraging Ranking for Abductive Reasoning [65.40375542988416]
The abductive natural language inference task ($\alpha$NLI) is proposed to evaluate the abductive reasoning ability of a learning system.
A novel L2R2 approach is proposed under the learning-to-rank framework.
Experiments on the ART dataset reach state-of-the-art performance on the public leaderboard.
arXiv Detail & Related papers (2020-05-22T15:01:23Z)
- Learning programs by learning from failures [26.955785230358963]
We describe an inductive logic programming (ILP) approach called learning from failures.
In this approach, an ILP system decomposes the learning problem into three separate stages: generate, test, and constrain.
We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog.
arXiv Detail & Related papers (2020-05-05T14:55:07Z)
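The three-stage generate, test, and constrain loop described in the entry above can be sketched schematically. This is a toy with hypothetical numeric predicates, not Popper itself: the real system constructs logic programs and derives much stronger pruning constraints via answer set programming and Prolog:

```python
# Schematic generate-test-constrain loop in the style of learning from
# failures. Hypotheses here are simple named predicates over integers.

def learn(pos, neg, candidates):
    """Return the name of the first hypothesis consistent with the examples."""
    constraints = set()                    # hypotheses ruled out so far
    for name, h in candidates:             # GENERATE: enumerate candidates in order
        if name in constraints:
            continue
        if all(h(x) for x in pos) and not any(h(x) for x in neg):
            return name                    # TEST: consistent hypothesis found
        constraints.add(name)              # CONSTRAIN: record the failure
    return None                            # hypothesis space exhausted

# Toy hypothesis space (hypothetical; real Popper candidates are logic programs).
candidates = [
    ("odd",      lambda x: x % 2 == 1),
    ("even",     lambda x: x % 2 == 0),
    ("div_by_3", lambda x: x % 3 == 0),
]
result = learn(pos=[2, 4, 6], neg=[3, 5], candidates=candidates)
```

In real learning from failures, a failed test yields a constraint that prunes whole regions of the hypothesis space (all generalisations or specialisations of the failure), not just the single failed candidate as in this toy.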
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.