On computable learning of continuous features
- URL: http://arxiv.org/abs/2111.14630v1
- Date: Wed, 24 Nov 2021 02:28:21 GMT
- Title: On computable learning of continuous features
- Authors: Nathanael Ackerman and Julian Asilis and Jieqi Di and Cameron Freer
and Jean-Baptiste Tristan
- Abstract summary: We introduce definitions of computable PAC learning for binary classification over computable metric spaces.
We also give a presentation of a hypothesis class that does not admit any proper computable PAC learner with computable sample function.
- Score: 2.278415626131568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce definitions of computable PAC learning for binary classification
over computable metric spaces. We provide sufficient conditions for learners
that are empirical risk minimizers (ERM) to be computable, and bound the strong
Weihrauch degree of an ERM learner under more general conditions. We also give
a presentation of a hypothesis class that does not admit any proper computable
PAC learner with computable sample function, despite the underlying class being
PAC learnable.
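The abstract's notion of a computable empirical risk minimizer can be illustrated with a minimal sketch. This is not the paper's construction: it assumes a finite class of threshold classifiers on [0, 1] (a hypothetical, trivially computable hypothesis class) and shows that ERM over such a class reduces to an exhaustive, computable search.

```python
import random

def empirical_risk(h, sample):
    """Fraction of labeled points that hypothesis h misclassifies."""
    return sum(h(x) != y for x, y in sample) / len(sample)

def erm_learner(hypotheses, sample):
    """Return a hypothesis with minimal empirical risk on the sample.

    Over a finite hypothesis class this minimization is an
    exhaustive search, so the learner is computable by construction.
    """
    return min(hypotheses, key=lambda h: empirical_risk(h, sample))

# A finite class of threshold classifiers on [0, 1]:
# h_t(x) = 1 iff x >= t, for thresholds t on a rational grid.
thresholds = [i / 10 for i in range(11)]
hypotheses = [lambda x, t=t: int(x >= t) for t in thresholds]

# A labeled sample realizable by h_{0.5}, i.e. labels are int(x >= 0.5).
random.seed(0)
sample = [(x, int(x >= 0.5)) for x in (random.random() for _ in range(200))]

best = erm_learner(hypotheses, sample)
print(empirical_risk(best, sample))  # 0.0: the sample is realizable by the class
```

The paper's negative result concerns exactly the setting this sketch sidesteps: over general computable metric spaces, the minimization above need not be computable, even when the class is PAC learnable.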
Related papers
- On the Computability of Robust PAC Learning [9.507869508188266]
We introduce the problem of robust computable PAC (robust CPAC) learning.
We show that learnability in this setup is not implied by the combination of its components.
We prove that its finiteness is necessary, but not sufficient for robust CPAC learnability.
arXiv Detail & Related papers (2024-06-14T16:20:04Z) - Collaborative Learning with Different Labeling Functions [7.228285747845779]
We study a variant of Collaborative PAC Learning, in which we aim to learn an accurate classifier for each of the $n$ data distributions.
We show that, when the data distributions satisfy a weaker realizability assumption, sample-efficient learning is still feasible.
arXiv Detail & Related papers (2024-02-16T04:32:22Z) - When is Agnostic Reinforcement Learning Statistically Tractable? [76.1408672715773]
A new complexity measure, called the spanning capacity, depends solely on the policy class $\Pi$ and is independent of the MDP dynamics.
We show there exists a policy class $\Pi$ with a bounded spanning capacity that requires a superpolynomial number of samples to learn.
This reveals a surprising separation for learnability between generative access and online access models.
arXiv Detail & Related papers (2023-10-09T19:40:54Z) - Optimal Learners for Realizable Regression: PAC Learning and Online Learning [52.37726841759983]
We aim to characterize the statistical complexity of realizable regression both in the PAC learning setting and the online learning setting.
We first introduce a minimax instance optimal learner for realizable regression and propose a novel dimension that both qualitatively and quantitatively characterizes which classes of real-valued predictors are learnable.
In the context of online learning we provide a dimension that characterizes the minimax instance-optimal cumulative loss up to a constant factor, and design an optimal online learner for realizable regression.
arXiv Detail & Related papers (2023-07-07T21:39:25Z) - Learning in POMDPs is Sample-Efficient with Hindsight Observability [36.66596305441365]
POMDPs capture a broad class of decision making problems, but hardness results suggest that learning is intractable even in simple settings due to the inherent partial observability.
In many realistic problems, more information is either revealed or can be computed during some point of the learning process.
We formulate hindsight observability as a POMDP setting in which the latent states are revealed to the learner, but only in hindsight and only during training.
arXiv Detail & Related papers (2023-01-31T18:54:36Z) - On Leave-One-Out Conditional Mutual Information For Generalization [122.2734338600665]
We derive information-theoretic generalization bounds for supervised learning algorithms based on a new measure of leave-one-out conditional mutual information (loo-CMI).
Contrary to other CMI bounds, our loo-CMI bounds can be computed easily and can be interpreted in connection to other notions such as classical leave-one-out cross-validation.
We empirically validate the quality of the bound by evaluating its predicted generalization gap in scenarios for deep learning.
arXiv Detail & Related papers (2022-07-01T17:58:29Z) - Measure Theoretic Approach to Nonuniform Learnability [16.467540842571328]
The characterization of nonuniform learnability is redefined using a measure-theoretic approach.
A new framework, the Generalize Measure Learnability (GML) framework, is introduced to implement this approach.
Many situations are presented, including countable hypothesis classes, in which the GML framework applies.
arXiv Detail & Related papers (2020-11-01T01:03:26Z) - Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints on a learner does not make a learning problem harder, in the sense that any PAC learnable class is also PAC learnable under constraints.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
arXiv Detail & Related papers (2020-06-09T19:59:29Z) - Can We Learn Heuristics For Graphical Model Inference Using Reinforcement Learning? [114.24881214319048]
We show that we can learn programs, i.e., policies, for solving inference in higher order Conditional Random Fields (CRFs) using reinforcement learning.
Our method solves inference tasks efficiently without imposing any constraints on the form of the potentials.
arXiv Detail & Related papers (2020-04-27T19:24:04Z) - A Theory of Usable Information Under Computational Constraints [103.5901638681034]
We propose a new framework for reasoning about information in complex systems.
Our foundation is based on a variational extension of Shannon's information theory.
We show that by incorporating computational constraints, $\mathcal{V}$-information can be reliably estimated from data.
arXiv Detail & Related papers (2020-02-25T06:09:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.