GeoDA: a geometric framework for black-box adversarial attacks
- URL: http://arxiv.org/abs/2003.06468v1
- Date: Fri, 13 Mar 2020 20:03:01 GMT
- Title: GeoDA: a geometric framework for black-box adversarial attacks
- Authors: Ali Rahmati, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, and
Huaiyu Dai
- Abstract summary: We propose a framework to generate adversarial examples in one of the most challenging black-box settings.
Our framework is based on the observation that the decision boundary of deep networks usually has a small mean curvature in the vicinity of data samples.
- Score: 79.52980486689287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples are known as carefully perturbed images fooling image
classifiers. We propose a geometric framework to generate adversarial examples
in one of the most challenging black-box settings where the adversary can only
generate a small number of queries, each of them returning the top-$1$ label of
the classifier. Our framework is based on the observation that the decision
boundary of deep networks usually has a small mean curvature in the vicinity of
data samples. We propose an effective iterative algorithm to generate
query-efficient black-box perturbations with small $\ell_p$ norms for $p \ge
1$, which is confirmed via experimental evaluations on state-of-the-art natural
image classifiers. Moreover, for $p=2$, we theoretically show that our
algorithm actually converges to the minimal $\ell_2$-perturbation when the
curvature of the decision boundary is bounded. We also obtain the optimal
distribution of the queries over the iterations of the algorithm. Finally,
experimental results confirm that our principled black-box attack algorithm
performs better than state-of-the-art algorithms as it generates smaller
perturbations with a reduced number of queries.
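The core observation above, that the decision boundary is locally almost flat, suggests that its normal can be estimated purely from top-1 label queries. The following Python sketch illustrates this idea on a toy linear stand-in for the black-box classifier; it is a hypothetical illustration of the principle (all names are invented), not the authors' algorithm. A boundary point is first located by label-only bisection, then random probes weighted by their returned label are averaged to recover the normal direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box classifier: the attacker sees only the top-1 label.
# A linear score stands in for the deep network here; GeoDA only assumes
# the real boundary is *locally* flat (small mean curvature), which a
# hyperplane satisfies exactly.
W_TRUE = rng.normal(size=16)

def top1_label(x):
    return 1 if W_TRUE @ x >= 0 else -1

def find_boundary_point(x_pos, x_neg, iters=40):
    """Bisect between two differently-labeled points using labels only."""
    for _ in range(iters):
        mid = 0.5 * (x_pos + x_neg)
        if top1_label(mid) == 1:
            x_pos = mid
        else:
            x_neg = mid
    return 0.5 * (x_pos + x_neg)

def estimate_normal(x_b, n_queries=500, sigma=0.5):
    """Estimate the boundary normal at x_b from top-1 queries.

    If the boundary is locally a hyperplane, averaging the random probes
    weighted by their returned labels recovers (up to scale) its normal.
    """
    z = sigma * rng.normal(size=(n_queries, x_b.size))
    labels = np.array([top1_label(x_b + zi) for zi in z])
    g = (labels[:, None] * z).mean(axis=0)
    return g / np.linalg.norm(g)

x0 = rng.normal(size=16)
# For a linear boundary through the origin, -x0 has the opposite label.
if top1_label(x0) == 1:
    x_b = find_boundary_point(x0, -x0)
else:
    x_b = find_boundary_point(-x0, x0)

n_hat = estimate_normal(x_b)
alignment = abs(n_hat @ W_TRUE) / np.linalg.norm(W_TRUE)
```

In the paper's setting, this normal estimate drives an iterative step toward the minimal perturbation, with the query budget split across iterations; the sketch only shows the label-based normal estimation that the small-curvature assumption makes possible.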
Related papers
- Differentially-Private Hierarchical Clustering with Provable
Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: any $\epsilon$-DP algorithm must exhibit $O(|V|^2/\epsilon)$ additive error.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
arXiv Detail & Related papers (2023-01-31T19:14:30Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - IWA: Integrated Gradient based White-box Attacks for Fooling Deep Neural
Networks [4.739554342067529]
We propose two integrated gradient based white-box adversarial example generation algorithms (IWA): IFPA and IUA.
We verify the effectiveness of the proposed algorithms on both structured and unstructured datasets, and we compare them with five baseline generation algorithms.
arXiv Detail & Related papers (2021-02-03T16:10:42Z) - Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient distributed optimization, in which machines can independently compute gradients and cooperate to jointly optimize over their local data.
Our algorithm is based on a new concentration technique, and its sample complexity is near-optimal.
It is very practical: it improves upon the performance of all prior methods even when no Byzantine machines are present.
arXiv Detail & Related papers (2020-12-28T17:19:32Z) - An Empirical Study of Derivative-Free-Optimization Algorithms for
Targeted Black-Box Attacks in Deep Neural Networks [8.368543987898732]
This paper considers four pre-existing state-of-the-art DFO-based algorithms along with the introduction of a new algorithm built on BOBYQA.
We compare these algorithms in a variety of settings according to the fraction of images that they successfully misclassify.
Experiments disclose how the likelihood of finding an adversarial example depends on both the algorithm used and the setting of the attack.
arXiv Detail & Related papers (2020-12-03T13:32:20Z) - Adversarial Examples for $k$-Nearest Neighbor Classifiers Based on
Higher-Order Voronoi Diagrams [69.4411417775822]
Adversarial examples are a widely studied phenomenon in machine learning models.
We propose an algorithm for evaluating the adversarial robustness of $k$-nearest neighbor classification.
arXiv Detail & Related papers (2020-11-19T08:49:10Z) - An Empirical Process Approach to the Union Bound: Practical Algorithms
for Combinatorial and Linear Bandits [34.06611065493047]
This paper proposes near-optimal algorithms for the pure-exploration linear bandit problem in the fixed confidence and fixed budget settings.
We provide an algorithm whose sample complexity scales with the geometry of the instance and avoids an explicit union bound over the number of arms.
We also propose the first algorithm for linear bandits in the fixed budget setting.
arXiv Detail & Related papers (2020-06-21T00:56:33Z) - Computationally efficient sparse clustering [67.95910835079825]
We provide a finite sample analysis of a new clustering algorithm based on PCA.
We show that it achieves the minimax optimal misclustering rate in the regime $\|\theta\| \to \infty$.
arXiv Detail & Related papers (2020-05-21T17:51:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.