Provably Adversarially Robust Nearest Prototype Classifiers
- URL: http://arxiv.org/abs/2207.07208v1
- Date: Thu, 14 Jul 2022 21:22:30 GMT
- Title: Provably Adversarially Robust Nearest Prototype Classifiers
- Authors: Václav Voráček and Matthias Hein
- Abstract summary: Nearest prototype classifiers (NPCs) assign to each input point the label of the nearest prototype with respect to a chosen distance metric.
Previous work could provide lower bounds on the minimal adversarial perturbation in the $\ell_p$-threat model when using the same $\ell_p$-distance for the NPCs.
In this paper we provide a complete discussion on the complexity when using $\ell_p$-distances for decision and $\ell_q$-threat models for certification.
- Score: 46.576144913096705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nearest prototype classifiers (NPCs) assign to each input point the label of
the nearest prototype with respect to a chosen distance metric. A direct
advantage of NPCs is that the decisions are interpretable. Previous work could
provide lower bounds on the minimal adversarial perturbation in the
$\ell_p$-threat model when using the same $\ell_p$-distance for the NPCs. In
this paper we provide a complete discussion on the complexity when using
$\ell_p$-distances for decision and $\ell_q$-threat models for certification
for $p,q \in \{1,2,\infty\}$. In particular we provide scalable algorithms for
the \emph{exact} computation of the minimal adversarial perturbation when using
$\ell_2$-distance and improved lower bounds in other cases. Using these
efficient improved lower bounds, we train our provably adversarially robust NPC
(PNPC) for MNIST, which has better $\ell_2$-robustness guarantees than neural
networks. Additionally, we show, to the best of our knowledge, the first
certification results w.r.t. the LPIPS perceptual metric, which has been argued
to be a more realistic threat model for image classification than
$\ell_p$-balls. On CIFAR10, our PNPC has higher certified robust accuracy than
the empirical robust accuracy reported in (Laidlaw et al., 2021). The code is
available in our repository.
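
The $\ell_2$/$\ell_2$ case has a clean geometric structure: the decision boundary between two prototypes is their bisecting hyperplane, so an input is misclassified exactly when it lies in a polyhedron where some wrong-class prototype is at least as close as every correct-class prototype, and the minimal adversarial perturbation is the distance to the nearest such polyhedron. The sketch below is not the authors' code or their scalable algorithm; it is a minimal illustrative formulation (class and method names are ours) that solves each projection directly as a small convex QP with SciPy.

```python
# Minimal sketch, assuming an l2-distance NPC; not the paper's implementation.
import numpy as np
from scipy.optimize import minimize


class NPC:
    """Nearest prototype classifier under the l2 distance."""

    def __init__(self, prototypes: np.ndarray, labels: np.ndarray):
        self.prototypes = prototypes  # shape (m, d)
        self.labels = labels          # shape (m,)

    def predict(self, x: np.ndarray) -> int:
        dists = np.linalg.norm(self.prototypes - x, axis=1)
        return int(self.labels[np.argmin(dists)])

    def exact_l2_radius(self, x: np.ndarray) -> float:
        """Minimal l2 perturbation that changes the prediction.

        For each wrong-class prototype q, project x onto the polyhedron
        {z : ||z - q|| <= ||z - p|| for every same-class prototype p}.
        The quadratic terms cancel, so the constraints are linear and each
        projection is a small convex QP.
        """
        c = self.predict(x)
        same = self.prototypes[self.labels == c]
        other = self.prototypes[self.labels != c]

        best = np.inf
        for q in other:
            # ||z - q||^2 <= ||z - p||^2   <=>   2 (p - q)^T z <= ||p||^2 - ||q||^2
            A = 2.0 * (same - q)                          # (k, d)
            b = (same ** 2).sum(axis=1) - (q ** 2).sum()  # (k,)
            res = minimize(
                fun=lambda z: 0.5 * np.sum((z - x) ** 2),
                jac=lambda z: z - x,
                x0=q.astype(float),                       # q itself is feasible
                constraints=[{"type": "ineq",
                              "fun": lambda z, A=A, b=b: b - A @ z}],
                method="SLSQP",
            )
            best = min(best, float(np.linalg.norm(res.x - x)))
        return best


# Toy usage: two prototypes on a line; the bisector sits at x1 = 1.
npc = NPC(np.array([[0.0, 0.0], [2.0, 0.0]]), np.array([0, 1]))
x = np.array([0.4, 0.0])
print(npc.predict(x), npc.exact_l2_radius(x))  # 0, ~0.6
```

For $p \neq 2$ the projection is no longer a Euclidean one, which is where the paper's complexity discussion and improved lower bounds for the remaining $p, q \in \{1, 2, \infty\}$ combinations come in.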
Related papers
- Fast Rates for Bandit PAC Multiclass Classification [73.17969992976501]
We study multiclass PAC learning with bandit feedback, where inputs are classified into one of $K$ possible labels and feedback is limited to whether or not the predicted labels are correct.
Our main contribution is in designing a novel learning algorithm for the agnostic $(\varepsilon,\delta)$-PAC version of the problem.
arXiv Detail & Related papers (2024-06-18T08:54:04Z) - Near Sample-Optimal Reduction-based Policy Learning for Average Reward
MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde{O}(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of bias of any optimal policy, $\varepsilon$ is the accuracy and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes (MDPs) in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - On the complexity of All $\varepsilon$-Best Arms Identification [2.1485350418225244]
We consider the problem of identifying all the $\varepsilon$-optimal arms in a finite multi-armed bandit with Gaussian rewards.
We propose a Track-and-Stop algorithm that identifies the set of $\varepsilon$-good arms w.h.p. and enjoys optimality (when $\delta$ goes to zero) in terms of the expected sample complexity.
arXiv Detail & Related papers (2022-02-13T10:54:52Z) - Towards Certifying $\ell_\infty$ Robustness using Neural Networks with
$\ell_\infty$-dist Neurons [27.815886593870076]
We develop a principled neural network that inherently resists $\ell_\infty$ perturbations.
We consistently achieve state-of-the-art performance on commonly used datasets.
arXiv Detail & Related papers (2021-02-10T10:03:58Z) - Explicit Best Arm Identification in Linear Bandits Using No-Regret
Learners [17.224805430291177]
We study the problem of best arm identification in linearly parameterised multi-armed bandits.
We propose an explicitly implementable and provably order-optimal sample-complexity algorithm to solve this problem.
arXiv Detail & Related papers (2020-06-13T05:00:01Z) - Toward Adversarial Robustness via Semi-supervised Robust Training [93.36310070269643]
Adversarial examples have been shown to be a severe threat to deep neural networks (DNNs).
We propose a novel defense method, Robust Training (RT), by jointly minimizing two separated risks ($R_{stand}$ and $R_{rob}$).
arXiv Detail & Related papers (2020-03-16T02:14:08Z) - Curse of Dimensionality on Randomized Smoothing for Certifiable
Robustness [151.67113334248464]
We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
arXiv Detail & Related papers (2020-02-08T22:02:14Z)