On the Computability of Robust PAC Learning
- URL: http://arxiv.org/abs/2406.10161v1
- Date: Fri, 14 Jun 2024 16:20:04 GMT
- Title: On the Computability of Robust PAC Learning
- Authors: Pascale Gourdeau, Tosca Lechner, Ruth Urner
- Abstract summary: We introduce the problem of robust computable PAC (robust CPAC) learning.
We show that learnability in this setup is not implied by the combination of its components.
We introduce the computable robust shattering dimension and prove that its finiteness is necessary, but not sufficient, for robust CPAC learnability.
- Score: 9.507869508188266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We initiate the study of computability requirements for adversarially robust learning. Adversarially robust PAC-type learnability is by now an established field of research. However, the effects of computability requirements in PAC-type frameworks are only just starting to emerge. We introduce the problem of robust computable PAC (robust CPAC) learning and provide some simple sufficient conditions for this. We then show that learnability in this setup is not implied by the combination of its components: classes that are both CPAC and robustly PAC learnable are not necessarily robustly CPAC learnable. Furthermore, we show that the novel framework exhibits some surprising effects: for robust CPAC learnability it is not required that the robust loss is computably evaluable! Towards understanding characterizing properties, we introduce a novel dimension, the computable robust shattering dimension. We prove that its finiteness is necessary, but not sufficient for robust CPAC learnability. This might yield novel insights for the corresponding phenomenon in the context of robust PAC learnability, where insufficiency of the robust shattering dimension for learnability has been conjectured, but so far a resolution has remained elusive.
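For context, the robust loss referenced in the abstract is standardly defined as the worst-case loss over an adversary's perturbation set. A minimal sketch in conventional notation (the symbols below follow the robust PAC literature and are assumptions, not quoted from this paper):

```latex
% Hedged sketch: the standard adversarially robust loss and risk.
% U(x) denotes the set of admissible perturbations of an input x;
% this notation is assumed, not taken verbatim from the paper.
\[
  \ell^{\mathcal{U}}(h, (x, y)) = \sup_{z \in \mathcal{U}(x)} \mathbb{1}\left[ h(z) \neq y \right],
  \qquad
  R^{\mathcal{U}}_{\mathcal{D}}(h) = \mathbb{E}_{(x,y) \sim \mathcal{D}}\left[ \ell^{\mathcal{U}}(h, (x, y)) \right].
\]
```

The paper's surprising finding, restated in these terms, is that a computable learner can achieve low robust risk even when the supremum above is not itself computably evaluable.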
Related papers
- Weak Robust Compatibility Between Learning Algorithms and Counterfactual Explanation Generation Algorithms [1.3121410433987561]
Counterfactual explanation generation is a powerful method for Explainable Artificial Intelligence.
Previous literature has widely studied robustness based on perturbations of input instances.
We propose a more reasonable definition, Weak Robust Compatibility, from the perspective of explanation strength.
arXiv Detail & Related papers (2024-05-31T08:03:52Z)
- Optimal Learners for Realizable Regression: PAC Learning and Online Learning [52.37726841759983]
We aim to characterize the statistical complexity of realizable regression both in the PAC learning setting and the online learning setting.
We first introduce a minimax instance optimal learner for realizable regression and propose a novel dimension that both qualitatively and quantitatively characterizes which classes of real-valued predictors are learnable.
In the context of online learning, we provide a dimension that characterizes the minimax instance optimal cumulative loss up to a constant factor and design an optimal online learner for realizable regression.
arXiv Detail & Related papers (2023-07-07T21:39:25Z)
- Find a witness or shatter: the landscape of computable PAC learning [63.26015736148707]
This paper contributes to the study of CPAC learnability by solving three open questions from recent papers.
Firstly, we prove that every CPAC learnable class is contained in a class which is properly CPAC learnable with polynomial sample complexity.
Secondly, we show that there exists a decidable class of hypotheses which is properly learnable, but only with uncomputably fast growing sample complexity.
arXiv Detail & Related papers (2023-02-06T02:52:36Z)
- Certified Interpretability Robustness for Class Activation Mapping [77.58769591550225]
We present CORGI, short for Certifiably prOvable Robustness Guarantees for Interpretability mapping.
CORGI is an algorithm that takes in an input image and gives a certifiable lower bound for the robustness of its CAM interpretability map.
We show the effectiveness of CORGI via a case study on traffic sign data, certifying lower bounds on the minimum adversarial perturbation.
arXiv Detail & Related papers (2023-01-26T18:58:11Z)
- On Proper Learnability between Average- and Worst-case Robustness [17.11922027966447]
Montasser et al. showed that finite VC dimension is not sufficient for proper adversarially robust PAC learning.
We give a family of robust loss relaxations under which VC classes are properly PAC learnable.
We show that for an existing and natural relaxation of the worst-case robust loss, finite VC dimension is not sufficient for proper learning.
arXiv Detail & Related papers (2022-11-10T15:35:57Z)
- The Role of Coverage in Online Reinforcement Learning [72.01066664756986]
We show that the mere existence of a data distribution with good coverage can enable sample-efficient online RL.
Existing complexity measures for online RL, including Bellman rank and Bellman-Eluder dimension, fail to optimally capture coverability.
We propose a new complexity measure, the sequential extrapolation coefficient, to provide a unification.
arXiv Detail & Related papers (2022-10-09T03:50:05Z)
- A Characterization of Semi-Supervised Adversarially-Robust PAC Learnability [57.502573663108535]
We study the problem of learning a predictor that is adversarially robust to test-time attacks in the semi-supervised PAC model.
We show that there is a significant benefit in semi-supervised robust learning even in the worst-case distribution-free model.
arXiv Detail & Related papers (2022-02-11T03:01:45Z)
- On characterizations of learnability with computable learners [0.0]
We study computable PAC (CPAC) learning as introduced by Agarwal et al.
We give a characterization of a closely related notion of strong CPAC learning.
We consider undecidability of (computable) PAC learnability.
arXiv Detail & Related papers (2022-02-10T13:57:20Z)
- On computable learning of continuous features [2.278415626131568]
We introduce definitions of computable PAC learning for binary classification over computable metric spaces.
We also give a presentation of a hypothesis class that does not admit any proper computable PAC learner with computable sample function.
arXiv Detail & Related papers (2021-11-24T02:28:21Z)
- Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints does not make a learning problem harder, in the sense that any PAC learnable class is also PAC constrained learnable; the formulation is sketched below.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
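As background for the constrained learning entry above, a minimal sketch of the usual constrained statistical learning program (a standard formulation; the symbols $\ell_i$, $c_i$, and $m$ are notational assumptions, not quoted from that paper's abstract):

```latex
% Hedged sketch: PAC constrained learning posed as constrained risk
% minimization. The constraint losses \ell_1, ..., \ell_m can encode,
% e.g., fairness or robustness requirements; notation assumed.
\[
  \min_{h \in \mathcal{H}} \; \mathbb{E}_{(x,y) \sim \mathcal{D}}\left[ \ell_0(h(x), y) \right]
  \quad \text{subject to} \quad
  \mathbb{E}_{(x,y) \sim \mathcal{D}}\left[ \ell_i(h(x), y) \right] \leq c_i, \quad i = 1, \dots, m.
\]
```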
This list is automatically generated from the titles and abstracts of the papers on this site.