Find a witness or shatter: the landscape of computable PAC learning
- URL: http://arxiv.org/abs/2302.04731v1
- Date: Mon, 6 Feb 2023 02:52:36 GMT
- Title: Find a witness or shatter: the landscape of computable PAC learning
- Authors: Valentino Delle Rose, Alexander Kozachinskiy, Cristobal Rojas, Tomasz Steifer
- Abstract summary: This paper contributes to the study of CPAC learnability by solving three open questions from recent papers.
Firstly, we prove that every improperly CPAC learnable class is contained in a class which is properly CPAC learnable with polynomial sample complexity.
Secondly, we show that there exists a decidable class of hypotheses which is properly CPAC learnable, but only with uncomputably fast-growing sample complexity.
Thirdly, we construct a decidable class of finite Littlestone dimension which is not improperly CPAC learnable.
- Score: 63.26015736148707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper contributes to the study of CPAC learnability -- a computable
version of PAC learning -- by solving three open questions from recent papers.
Firstly, we prove that every improperly CPAC learnable class is contained in a
class which is properly CPAC learnable with polynomial sample complexity. This
confirms a conjecture by Agarwal et al. (COLT 2021). Secondly, we show that
there exists a decidable class of hypotheses which is properly CPAC learnable,
but only with uncomputably fast-growing sample complexity. This solves a
question from Sterkenburg (COLT 2022). Finally, we construct a decidable class
of finite Littlestone dimension which is not improperly CPAC learnable,
strengthening a recent result of Sterkenburg (2022) and answering a question
posed by Hasrati and Ben-David (ALT 2023). Together with previous work, our
results provide a complete landscape for the learnability problem in the CPAC
setting.
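For context, here is a hedged sketch of the definitions behind this landscape. It uses standard PAC terminology, follows Agarwal et al. for the computability layer, and elides details such as how hypotheses are represented.

```latex
% A class $\mathcal{H}$ of hypotheses $h\colon \mathbb{N} \to \{0,1\}$ is
% PAC learnable if there are a learner $A$ and a sample complexity
% $m(\varepsilon,\delta)$ such that, for every $\mathcal{H}$-realizable
% distribution $D$,
\[
  \Pr_{S \sim D^{m(\varepsilon,\delta)}}
    \bigl[ \mathrm{err}_D(A(S)) \le \varepsilon \bigr] \ge 1 - \delta .
\]
% Proper learning additionally requires $A(S) \in \mathcal{H}$; CPAC
% learning requires the learner $A$ to be computable. The paper's second
% result shows these layers can come apart: a class can be properly CPAC
% learnable even though no computable function upper-bounds
% $m(\varepsilon,\delta)$.
```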
Related papers
- On the Computability of Robust PAC Learning [9.507869508188266]
We introduce the problem of robust computable PAC (robust CPAC) learning.
We show that learnability in this setup is not implied by combining its components, namely computability and robust PAC learnability.
We prove that finiteness of the associated dimension is necessary, but not sufficient, for robust CPAC learnability.
arXiv Detail & Related papers (2024-06-14T16:20:04Z)
- Private PAC Learning May be Harder than Online Learning [14.180842831990999]
We show that any concept class of Littlestone dimension $d$ can be privately PAC learned using $\mathrm{poly}(d)$ samples.
This raises the natural question of whether there might be a generic conversion from online learners to private PAC learners.
arXiv Detail & Related papers (2024-02-16T22:44:52Z)
- When is Agnostic Reinforcement Learning Statistically Tractable? [76.1408672715773]
A new complexity measure, called the spanning capacity, depends solely on the set $\Pi$ and is independent of the MDP dynamics.
We show there exists a policy class $\Pi$ with a bounded spanning capacity that requires a superpolynomial number of samples to learn.
This reveals a surprising separation for learnability between generative access and online access models.
arXiv Detail & Related papers (2023-10-09T19:40:54Z)
- Multiclass Boosting: Simple and Intuitive Weak Learning Criteria [72.71096438538254]
We give a simple and efficient boosting algorithm that does not require realizability assumptions.
We present a new result on boosting for list learners, as well as provide a novel proof for the characterization of multiclass PAC learning.
arXiv Detail & Related papers (2023-07-02T19:26:58Z)
- On characterizations of learnability with computable learners [0.0]
We study computable PAC (CPAC) learning as introduced by Agarwal et al.
We give a characterization of a closely related notion of strong CPAC learning.
We consider undecidability of (computable) PAC learnability.
arXiv Detail & Related papers (2022-02-10T13:57:20Z)
- Semi-verified PAC Learning from the Crowd [7.594050968868919]
We study the problem of crowdsourced PAC learning of threshold functions.
We show that under the semi-verified model of Charikar et al., it is possible to PAC learn the underlying hypothesis class with a manageable number of label queries.
arXiv Detail & Related papers (2021-06-13T20:05:16Z)
- Sample-efficient proper PAC learning with approximate differential privacy [51.09425023771246]
We prove that the sample complexity of properly learning a class of Littlestone dimension $d$ with approximate differential privacy is $\tilde{O}(d^6)$, ignoring privacy and accuracy parameters.
This result answers a question of Bun et al. (FOCS 2020) by improving upon their upper bound of $2^{O(d)}$ on the sample complexity.
arXiv Detail & Related papers (2020-12-07T18:17:46Z)
- Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks [48.32532049640782]
We study the problem of learning one-hidden-layer ReLU networks with $k$ hidden units on $\mathbb{R}^d$ under Gaussian marginals.
For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{O}(\sqrt{\log d})$ (a toy sketch of this hypothesis class appears after this list).
arXiv Detail & Related papers (2020-06-22T17:53:54Z)
- Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints does not make a learning problem harder, in the sense that any PAC learnable class is also constrained learnable.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
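As a companion to the ReLU-networks entry above, here is a hedged sketch of the hypothesis class it studies: one-hidden-layer ReLU networks with positive output coefficients, evaluated on Gaussian inputs. All names and sizes below are illustrative, not taken from the paper.

```python
import numpy as np

def relu_network(x, W, a):
    """One-hidden-layer ReLU network: f(x) = sum_i a_i * max(<w_i, x>, 0).

    W -- (k, d) array whose rows are the hidden-unit weight vectors w_i.
    a -- (k,) array of output coefficients; the "positive coefficients"
         case restricts every a_i to be positive.
    """
    return np.maximum(W @ x, 0.0) @ a

# Gaussian marginals: inputs are drawn as x ~ N(0, I_d) and labeled by a
# ground-truth network; the learning task is to output a hypothesis with
# small error against such samples.
rng = np.random.default_rng(0)
d, k, n = 10, 3, 5                       # illustrative dimensions
W = rng.standard_normal((k, d))          # hidden-unit weights
a = np.abs(rng.standard_normal(k))       # positive coefficients
X = rng.standard_normal((n, d))          # n samples, each x ~ N(0, I_d)
y = np.array([relu_network(x, W, a) for x in X])
print(y)                                 # labels produced by the target network
```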
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.