A Parameterized Theory of PAC Learning
- URL: http://arxiv.org/abs/2304.14058v1
- Date: Thu, 27 Apr 2023 09:39:30 GMT
- Authors: Cornelius Brand, Robert Ganian, Kirill Simonov
- Abstract summary: Probably Approximately Correct (i.e., PAC) learning is a core concept of sample complexity theory.
We develop a theory of parameterized PAC learning which allows us to shed new light on several recent PAC learning results that incorporated elements of parameterized complexity.
- Score: 19.686465068713467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probably Approximately Correct (i.e., PAC) learning is a core concept of
sample complexity theory, and efficient PAC learnability is often seen as a
natural counterpart to the class P in classical computational complexity. But
while the nascent theory of parameterized complexity has allowed us to push
beyond the P-NP "dichotomy" in classical computational complexity and
identify the exact boundaries of tractability for numerous problems, there is
no analogue in the domain of sample complexity that could push beyond efficient
PAC learnability.
As our core contribution, we fill this gap by developing a theory of
parameterized PAC learning which allows us to shed new light on several recent
PAC learning results that incorporated elements of parameterized complexity.
Within the theory, we identify not one but two notions of fixed-parameter
learnability that both form distinct counterparts to the class FPT -- the core
concept at the center of the parameterized complexity paradigm -- and develop
the machinery required to exclude fixed-parameter learnability. We then
showcase the applications of this theory to identify refined boundaries of
tractability for CNF and DNF learning as well as for a range of learning
problems on graphs.
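For orientation, efficient PAC learnability is conventionally characterized by sample-complexity bounds such as the classical realizable-case bound for a finite hypothesis class H: m >= (1/epsilon) * (ln|H| + ln(1/delta)) examples suffice. The following sketch computes this standard textbook bound; it is a generic illustration of PAC sample complexity, not a result or algorithm from the paper itself.

```python
import math

def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Classical realizable-case PAC bound for a finite hypothesis class:
    m >= (1/epsilon) * (ln|H| + ln(1/delta)) samples suffice to find,
    with probability >= 1 - delta, a hypothesis with error <= epsilon."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# Example: |H| = 2**20 (e.g. Boolean functions described by 20 bits),
# accuracy epsilon = 0.05, confidence delta = 0.05
print(pac_sample_bound(2**20, 0.05, 0.05))  # 338
```

The parameterized theory developed in the paper refines this picture by asking how the runtime and sample cost may depend on a structural parameter of the problem, in analogy to FPT in parameterized complexity.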
Related papers
- Separable Power of Classical and Quantum Learning Protocols Through the Lens of No-Free-Lunch Theorem [70.42372213666553]
The No-Free-Lunch (NFL) theorem quantifies problem- and data-independent generalization errors regardless of the optimization process.
We categorize a diverse array of quantum learning algorithms into three learning protocols designed for learning quantum dynamics under a specified observable.
Our derived NFL theorems demonstrate quadratic reductions in sample complexity across CLC-LPs, ReQu-LPs, and Qu-LPs.
We attribute this performance discrepancy to the unique capacity of quantum-related learning protocols to indirectly utilize information concerning the global phases of non-orthogonal quantum states.
arXiv Detail & Related papers (2024-05-12T09:05:13Z)
- A Theory of Unsupervised Speech Recognition [60.12287608968879]
Unsupervised speech recognition (ASR-U) is the problem of learning automatic speech recognition systems from unpaired speech-only and text-only corpora.
We propose a general theoretical framework to study the properties of ASR-U systems based on random matrix theory and the theory of neural tangent kernels.
arXiv Detail & Related papers (2023-06-09T08:12:27Z)
- Learnability with PAC Semantics for Multi-agent Beliefs [38.88111785113001]
The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence.
Valiant recognised that the challenge of learning should be integrated with deduction.
Although PAC semantics is weaker than classical entailment, it allows for a powerful model-theoretic framework for answering queries.
arXiv Detail & Related papers (2023-06-08T18:22:46Z)
- Balancing Explainability-Accuracy of Complex Models [8.402048778245165]
We introduce a new approach for complex models based on the correlation impact.
We propose approaches for both scenarios of independent features and dependent features.
We provide an upper bound of the complexity of our proposed approach for the dependent features.
arXiv Detail & Related papers (2023-05-23T14:20:38Z)
- Learnability, Sample Complexity, and Hypothesis Class Complexity for Regression Models [10.66048003460524]
This work is inspired by the foundation of PAC and is motivated by the existing regression learning issues.
The proposed approach, denoted epsilon-Confidence Approximately Correct (epsilon-CoAC), utilizes the Kullback-Leibler divergence (relative entropy).
It enables the learner to compare hypothesis classes of different complexity orders and choose among them the optimum with the minimum epsilon.
arXiv Detail & Related papers (2023-03-28T15:59:12Z)
- Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior: From Theory to Practice [54.03076395748459]
A central question in the meta-learning literature is how to regularize to ensure generalization to unseen tasks.
We present a generalization bound for meta-learning, which was first derived by Rothfuss et al.
We provide a theoretical analysis and empirical case study under which conditions and to what extent these guarantees for meta-learning improve upon PAC-Bayesian per-task learning bounds.
arXiv Detail & Related papers (2022-11-14T08:51:04Z)
- Quantum Parameterized Complexity [1.01129133945787]
We introduce the quantum analogues of a range of parameterized complexity classes.
This framework exposes a rich classification of the complexity of parameterized versions of QMA-hard problems.
arXiv Detail & Related papers (2022-03-15T15:34:38Z)
- Demystify Optimization and Generalization of Over-parameterized PAC-Bayesian Learning [20.295960197612743]
PAC-Bayesian is an analysis framework where the training error can be expressed as the weighted average of the hypotheses in the posterior distribution.
We show that when PAC-Bayes learning is applied, the convergence result corresponds to solving a kernel ridge regression.
We further characterize the uniform PAC-Bayesian generalization bound, which improves over the Rademacher complexity-based bound for non-probabilistic neural networks.
arXiv Detail & Related papers (2022-02-04T03:49:11Z)
- Quantum communication complexity beyond Bell nonlocality [87.70068711362255]
Efficient distributed computing offers a scalable strategy for solving resource-demanding tasks.
Quantum resources are well-suited to this task, offering clear strategies that can outperform classical counterparts.
We prove that a new class of communication complexity tasks can be associated to Bell-like inequalities.
arXiv Detail & Related papers (2021-06-11T18:00:09Z)
- Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints does not make a learning problem harder, in the sense that any PAC learnable class is also PAC-constrained learnable.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
- Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning [96.78504087416654]
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems, we investigate when this paradigm is provably efficient.
We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.
arXiv Detail & Related papers (2020-03-15T19:23:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.