On Proper Learnability between Average- and Worst-case Robustness
- URL: http://arxiv.org/abs/2211.05656v5
- Date: Thu, 25 May 2023 13:48:50 GMT
- Title: On Proper Learnability between Average- and Worst-case Robustness
- Authors: Vinod Raman, Unique Subedi, Ambuj Tewari
- Abstract summary: Montasser et al. showed that finite VC dimension is not sufficient for proper adversarially robust PAC learning.
We give a family of robust loss relaxations under which VC classes are properly PAC learnable.
We show that for an existing and natural relaxation of the worst-case robust loss, finite VC dimension is not sufficient for proper learning.
- Score: 17.11922027966447
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, Montasser et al. [2019] showed that finite VC dimension is not
sufficient for proper adversarially robust PAC learning. In light of this
hardness, there is a growing effort to study what type of relaxations to the
adversarially robust PAC learning setup can enable proper learnability. In this
work, we initiate the study of proper learning under relaxations of the
worst-case robust loss. We give a family of robust loss relaxations under which
VC classes are properly PAC learnable with sample complexity close to what one
would require in the standard PAC learning setup. On the other hand, we show
that for an existing and natural relaxation of the worst-case robust loss,
finite VC dimension is not sufficient for proper learning. Lastly, we give new
generalization guarantees for the adversarially robust empirical risk
minimizer.
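The contrast between the worst-case robust loss and a relaxed variant can be illustrated with a minimal numerical sketch. Everything here is hypothetical (a linear classifier and a finite sampled perturbation set stand in for a general hypothesis and perturbation region), and the threshold-style relaxation below is only one simple instance; the family of relaxations studied in the paper is more general.

```python
import numpy as np

def predict(w, x):
    # Hypothetical linear classifier; stands in for an arbitrary hypothesis h.
    return np.sign(w @ x)

def worst_case_robust_loss(w, x, y, perturbations):
    # Worst-case robust 0-1 loss: 1 if ANY perturbation in the set
    # flips the prediction away from the label y.
    return max(int(predict(w, x + d) != y) for d in perturbations)

def relaxed_robust_loss(w, x, y, perturbations, rho=0.1):
    # A simple relaxation: penalize only if MORE than a rho-fraction
    # of the perturbations are misclassified.
    frac_wrong = np.mean([predict(w, x + d) != y for d in perturbations])
    return int(frac_wrong > rho)
```

With one adversarial perturbation among ten, the worst-case loss is already 1, while the relaxed loss at tolerance rho=0.1 is still 0, which is the sense in which relaxing the loss can make learning easier.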
Related papers
- On the Computability of Robust PAC Learning [9.507869508188266]
We introduce the problem of robust computable PAC (robust CPAC) learning.
We show that learnability in this setup is not implied by the combination of its components.
We prove that its finiteness is necessary, but not sufficient for robust CPAC learnability.
arXiv Detail & Related papers (2024-06-14T16:20:04Z)
- Seeing is not Believing: Robust Reinforcement Learning against Spurious Correlation [57.351098530477124]
We consider one critical type of robustness against spurious correlation, where different portions of the state do not have correlations induced by unobserved confounders.
A model that learns such useless or even harmful correlation could catastrophically fail when the confounder in the test case deviates from the training one.
Existing robust algorithms that assume simple and unstructured uncertainty sets are therefore inadequate to address this challenge.
arXiv Detail & Related papers (2023-07-15T23:53:37Z)
- Optimal Learners for Realizable Regression: PAC Learning and Online Learning [52.37726841759983]
We aim to characterize the statistical complexity of realizable regression both in the PAC learning setting and the online learning setting.
We first introduce a minimax instance optimal learner for realizable regression and propose a novel dimension that both qualitatively and quantitatively characterizes which classes of real-valued predictors are learnable.
In the context of online learning we provide a dimension that characterizes the minimax optimal instance optimal cumulative loss up to a constant factor and design an optimal online learner for realizable regression.
arXiv Detail & Related papers (2023-07-07T21:39:25Z)
- Adversarially Robust PAC Learnability of Real-Valued Functions [19.54399554114989]
We show that classes of finite fat-shattering dimension are learnable in the $\ell_p$ perturbation setting.
We introduce a novel agnostic sample compression scheme for real-valued functions, which may be of independent interest.
arXiv Detail & Related papers (2022-06-26T21:27:23Z)
- Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness [30.301460075475344]
Barely robust learning algorithms learn predictors that are adversarially robust only on a small fraction of the data distribution.
Our proposed notion of barely robust learning requires robustness with respect to a "larger" perturbation set.
arXiv Detail & Related papers (2022-02-11T22:07:36Z)
- A Characterization of Semi-Supervised Adversarially-Robust PAC Learnability [57.502573663108535]
We study the problem of learning an adversarially robust predictor to test time attacks in the semi-supervised PAC model.
We show that there is a significant benefit in semi-supervised robust learning even in the worst-case distribution-free model.
arXiv Detail & Related papers (2022-02-11T03:01:45Z)
- Probabilistically Robust Learning: Balancing Average- and Worst-case Performance [105.87195436925722]
We propose a framework called probabilistic robustness that bridges the gap between the accurate but brittle average case and the robust but conservative worst case.
From a theoretical point of view, this framework overcomes the trade-off between performance and sample complexity exhibited by worst-case and average-case learning.
arXiv Detail & Related papers (2022-02-02T17:01:38Z)
- Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
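The core mechanism of importance-sampling-weighted ERM can be sketched in a few lines. This is a toy illustration under simplifying assumptions (a finite hypothesis pool, 0-1 loss, and logged sampling propensities assumed known and bounded away from zero); the paper's algorithm and guarantees are more general.

```python
import numpy as np

def zero_one_loss(h, x, y):
    # Standard 0-1 loss for a classifier h.
    return int(h(x) != y)

def weighted_erm(hypotheses, data):
    # data: (x, y, p) triples, where p is the probability with which the
    # adaptive collection policy sampled this point. Dividing each loss by
    # its propensity p reweights the adaptively collected sample so the
    # empirical risk is unbiased for the target distribution.
    risks = [np.mean([zero_one_loss(h, x, y) / p for x, y, p in data])
             for h in hypotheses]
    return hypotheses[int(np.argmin(risks))]
```

The importance weights 1/p undo the bias introduced by adaptive data collection, which is what lets a model-agnostic ERM guarantee go through.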
arXiv Detail & Related papers (2021-06-03T09:50:13Z)
- Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing a constraint does not make a learning problem harder, in the sense that any PAC learnable class is also learnable under constraints.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
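The idea that constrained learning reduces to ERM over a restricted feasible set can be shown with a deliberately simple sketch. The hypothesis pool, risk, and constraint functions below are all hypothetical stand-ins; the paper's actual treatment works with general statistical constraints rather than a finite enumeration.

```python
def constrained_erm(hypotheses, risk, constraint, eps):
    # Keep only hypotheses whose constraint value is within slack eps,
    # then minimize empirical risk over that feasible subset.
    feasible = [h for h in hypotheses if constraint(h) <= eps]
    return min(feasible, key=risk) if feasible else None
```

In this toy form, a class that is PAC learnable remains learnable under the constraint simply because ERM is run over a subset of the same class, mirroring the abstract's claim that constraints do not make learning fundamentally harder.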
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.