Sample Complexity Bounds for Robustly Learning Decision Lists against
Evasion Attacks
- URL: http://arxiv.org/abs/2205.06127v1
- Date: Thu, 12 May 2022 14:40:18 GMT
- Title: Sample Complexity Bounds for Robustly Learning Decision Lists against
Evasion Attacks
- Authors: Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska and James Worrell
- Abstract summary: A fundamental problem in adversarial machine learning is to quantify how much training data is needed in the presence of evasion attacks.
We work with probability distributions on the input data that satisfy a Lipschitz condition: nearby points have similar probability.
For every fixed $k$, the class of $k$-decision lists has polynomial sample complexity against a $\log(n)$-bounded adversary.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A fundamental problem in adversarial machine learning is to quantify how much
training data is needed in the presence of evasion attacks. In this paper we
address this issue within the framework of PAC learning, focusing on the class
of decision lists. Given that distributional assumptions are essential in the
adversarial setting, we work with probability distributions on the input data
that satisfy a Lipschitz condition: nearby points have similar probability. Our
key results illustrate that the adversary's budget (that is, the number of bits
it can perturb on each input) is a fundamental quantity in determining the
sample complexity of robust learning. Our first main result is a
sample-complexity lower bound: the class of monotone conjunctions (essentially
the simplest non-trivial hypothesis class on the Boolean hypercube) and any
superclass have sample complexity at least exponential in the adversary's
budget. Our second main result is a corresponding upper bound: for every fixed
$k$ the class of $k$-decision lists has polynomial sample complexity against a
$\log(n)$-bounded adversary. This sheds further light on the question of
whether an efficient PAC learning algorithm can always be used as an efficient
$\log(n)$-robust learning algorithm under the uniform distribution.
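
To make the setting concrete, below is a minimal Python sketch (not from the paper; the decision list, variable names, and parameters are all illustrative) of one common notion of robust loss on the Boolean hypercube: a point counts as an error if some perturbation of at most $\rho$ bits changes the prediction. Here the target labels are taken to be the list's own outputs, so standard risk is zero and any positive risk is due purely to the adversary's budget.

```python
# Illustrative sketch only: brute-force robust risk of a k-decision list
# on {0,1}^n against a rho-bounded bit-flip (evasion) adversary, estimated
# over samples from the uniform distribution. All names are hypothetical.

import itertools
import random

n = 8      # input dimension (small so brute force is feasible)
rho = 2    # adversary's budget: number of bits it may flip per input

# A k-decision list: ordered (term, label) pairs plus a default label.
# Each term is a conjunction of at most k literals, encoded as a list of
# (index, value) pairs meaning "bit `index` must equal `value`".
decision_list = [
    ([(0, 1), (3, 0)], 1),   # if x0 = 1 and x3 = 0, output 1
    ([(2, 1)], 0),           # else if x2 = 1, output 0
]
default_label = 1

def evaluate(x):
    """Return the decision list's label on input x (a tuple of bits)."""
    for term, label in decision_list:
        if all(x[i] == v for i, v in term):
            return label
    return default_label

def perturbations(x, budget):
    """Yield every point within Hamming distance `budget` of x."""
    idxs = range(len(x))
    for r in range(budget + 1):
        for flips in itertools.combinations(idxs, r):
            y = list(x)
            for i in flips:
                y[i] ^= 1
            yield tuple(y)

def robust_loss(x, y):
    """1 if some rho-bounded perturbation of x moves the prediction off y."""
    return int(any(evaluate(z) != y for z in perturbations(x, rho)))

# Estimate robust risk under the uniform distribution, labelling points
# by the list itself, so all loss is adversarial.
random.seed(0)
sample = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(500)]
risk = sum(robust_loss(x, evaluate(x)) for x in sample) / len(sample)
print(f"estimated {rho}-robust risk: {risk:.3f}")
```

Note that enumerating the Hamming ball costs $\sum_{r \le \rho} \binom{n}{r}$ evaluations per point, which is why this brute-force check is only feasible for small $n$ and $\rho$; it is meant to illustrate the role of the adversary's budget, not the paper's learning algorithms.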