Reducing Adversarially Robust Learning to Non-Robust PAC Learning
- URL: http://arxiv.org/abs/2010.12039v1
- Date: Thu, 22 Oct 2020 20:28:35 GMT
- Title: Reducing Adversarially Robust Learning to Non-Robust PAC Learning
- Authors: Omar Montasser, Steve Hanneke, Nathan Srebro
- Abstract summary: We give a reduction that can robustly learn any hypothesis class using any non-robust learner.
The number of calls to $\mathcal{A}$ depends logarithmically on the number of allowed adversarial perturbations per example.
- Score: 39.51923275855131
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of reducing adversarially robust learning to standard
PAC learning, i.e. the complexity of learning adversarially robust predictors
using access to only a black-box non-robust learner. We give a reduction that
can robustly learn any hypothesis class $\mathcal{C}$ using any non-robust
learner $\mathcal{A}$ for $\mathcal{C}$. The number of calls to $\mathcal{A}$
depends logarithmically on the number of allowed adversarial perturbations per
example, and we give a lower bound showing this is unavoidable.
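As a reading aid, here is a minimal, hedged Python sketch of the boosting-style idea behind such reductions: inflate the training set with all allowed perturbations, repeatedly call the black-box non-robust learner on reweighted draws from the inflated set, and output a majority vote. The names (`robust_learn`, `perturbations`, `learner`) and the specific weight update are illustrative assumptions, not the paper's exact algorithm or guarantees.

```python
import math
import random

def robust_learn(data, perturbations, learner, rounds=None):
    """Hedged sketch: reduce robust learning to a black-box non-robust learner.

    data: list of (x, y) pairs.
    perturbations(x): finite list of allowed perturbations of x (including x).
    learner(sample): black-box non-robust learner; returns a hypothesis
    h with h(z) -> predicted label.
    """
    # Inflate the training set with every allowed perturbation.
    inflated = [(z, y) for (x, y) in data for z in perturbations(x)]
    m = len(inflated)
    # Illustrative choice: O(log m) boosting rounds, i.e. logarithmic
    # in the number of perturbations per example.
    rounds = rounds or max(1, math.ceil(math.log2(m + 1)))
    weights = [1.0] * m
    ensemble = []
    for _ in range(rounds):
        # Call the non-robust learner on a sample drawn according to
        # the current weighting of the inflated set.
        sample = random.choices(inflated, weights=weights, k=m)
        h = learner(sample)
        ensemble.append(h)
        # Multiplicative-weights update: upweight points the newest
        # hypothesis misclassifies.
        weights = [w * 2.0 if h(z) != y else w
                   for w, (z, y) in zip(weights, inflated)]
    def majority_vote(x):
        votes = [h(x) for h in ensemble]
        return max(set(votes), key=votes.count)
    return majority_vote
```

In this sketch the number of calls to the learner scales with $\log m$, echoing the logarithmic dependence on the number of perturbations per example stated in the abstract.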
Related papers
- Do PAC-Learners Learn the Marginal Distribution? [24.80812483480747]
We study a variant of PAC-Learning in which the adversary is restricted to a known family of marginal distributions $\mathscr{P}$.
We show that TV-learning is equivalent to PAC-Learning.
arXiv Detail & Related papers (2023-02-13T11:42:58Z)
- On Optimal Learning Under Targeted Data Poisoning [48.907813854832206]
In this work we aim to characterize the smallest achievable error $\epsilon=\epsilon(\eta)$ by the learner in the presence of such an adversary.
Remarkably, we show that the upper bound can be attained by a deterministic learner.
arXiv Detail & Related papers (2022-10-06T06:49:48Z)
- Cryptographic Hardness of Learning Halfspaces with Massart Noise [59.8587499110224]
We study the complexity of PAC learning halfspaces in the presence of Massart noise.
We show that no polynomial-time Massart halfspace learner can achieve error better than $\Omega(\eta)$, even if the optimal 0-1 error is small.
arXiv Detail & Related papers (2022-07-28T17:50:53Z)
- Adversarial Robustness: What fools you makes you stronger [1.14219428942199]
We prove an exponential separation for the sample complexity between the PAC-learning model and a version of the Equivalence-Query-learning model.
We show that this separation has interesting implications for adversarial robustness.
We explore a vision of designing an adaptive defense that in the presence of an attacker computes a model that is provably robust.
arXiv Detail & Related papers (2021-02-10T15:00:24Z)
- The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise [67.27523616312428]
We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent PAC model.
We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem.
arXiv Detail & Related papers (2020-07-30T04:18:51Z)
- Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks [48.32532049640782]
We study the problem of learning one-hidden-layer ReLU networks with $k$ hidden units on $\mathbb{R}^d$ under Gaussian marginals.
For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{O}(\sqrt{\log d})$.
arXiv Detail & Related papers (2020-06-22T17:53:54Z)
- Taking a hint: How to leverage loss predictors in contextual bandits? [63.546913998407405]
We study learning in contextual bandits with the help of loss predictors.
We show that the optimal regret is $\mathcal{O}(\min\{\sqrt{T}, \sqrt{\mathcal{E}}\,T^{\frac{1}{3}}\})$ when $\mathcal{E}$ is known.
arXiv Detail & Related papers (2020-03-04T07:36:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content above (including all information) and is not responsible for any consequences of its use.