RayS: A Ray Searching Method for Hard-label Adversarial Attack
- URL: http://arxiv.org/abs/2006.12792v2
- Date: Sat, 5 Sep 2020 18:17:34 GMT
- Title: RayS: A Ray Searching Method for Hard-label Adversarial Attack
- Authors: Jinghui Chen and Quanquan Gu
- Abstract summary: We present the Ray Searching attack (RayS), which greatly improves the hard-label attack effectiveness as well as efficiency.
RayS attack can also be used as a sanity check for possible "falsely robust" models.
- Score: 99.72117609513589
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are vulnerable to adversarial attacks. Among different
attack settings, the most challenging yet the most practical one is the
hard-label setting where the attacker only has access to the hard-label output
(prediction label) of the target model. Previous attempts are neither effective
enough in terms of attack success rate nor efficient enough in terms of query
complexity under the widely used $L_\infty$ norm threat model. In this paper,
we present the Ray Searching attack (RayS), which greatly improves the
hard-label attack effectiveness as well as efficiency. Unlike previous works,
we reformulate the continuous problem of finding the closest decision boundary
into a discrete problem that does not require any zeroth-order gradient
estimation. In the meantime, all unnecessary searches are eliminated via a fast
check step. This significantly reduces the number of queries needed for our
hard-label attack. Moreover, interestingly, we found that the proposed RayS
attack can also be used as a sanity check for possible "falsely robust" models.
On several recently proposed defenses that claim to achieve the
state-of-the-art robust accuracy, our attack method demonstrates that the
current white-box/black-box attacks could still give a false sense of security
and the robust accuracy drop between the most popular PGD attack and RayS
attack could be as large as $28\%$. We believe that our proposed RayS attack
could help identify falsely robust models that beat most white-box/black-box
attacks.
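
To make the abstract's reformulation concrete, below is a minimal, illustrative sketch of a RayS-style search. This is not the authors' reference implementation: it assumes a hard-label oracle `is_adversarial(x)` that returns True when the target model misclassifies `x` (flattened to a NumPy vector), and the hierarchical block-flipping schedule, initial radius, and query accounting are simplifying assumptions. The two ingredients named in the abstract are visible: the search runs over discrete sign directions rather than zeroth-order gradient estimates, and a fast check skips the binary search whenever a flipped direction cannot improve on the current best radius.

```python
import numpy as np

# Minimal, illustrative sketch of a RayS-style hard-label search.
# NOT the authors' reference implementation: the oracle `is_adversarial`,
# the block-flipping schedule, and the query accounting are simplified
# assumptions made for illustration only.

def boundary_radius(x, d_unit, r_hi, is_adversarial, tol=1e-3):
    """Binary-search the decision-boundary radius along the unit ray d_unit."""
    r_lo = 0.0
    while r_hi - r_lo > tol:
        r_mid = 0.5 * (r_lo + r_hi)
        if is_adversarial(x + r_mid * d_unit):
            r_hi = r_mid  # boundary lies closer than r_mid
        else:
            r_lo = r_mid
    return r_hi

def rays_sketch(x, is_adversarial, max_queries=10_000, r_init=10.0):
    """Search over sign directions d in {-1, +1}^n for the closest boundary."""
    n = x.size
    d = np.ones(n)
    r_best = r_init
    level, block, queries = 0, 0, 0
    while queries < max_queries:
        # Flip one contiguous block of signs at the current hierarchy level.
        num_blocks = 2 ** level
        idx = np.array_split(np.arange(n), num_blocks)[block]
        d_new = d.copy()
        d_new[idx] *= -1.0
        d_unit = d_new / np.linalg.norm(d_new)
        # Fast check: skip the binary search unless the flipped direction
        # already crosses the boundary within the current best radius.
        queries += 1
        if is_adversarial(x + r_best * d_unit):
            r_best = boundary_radius(x, d_unit, r_best, is_adversarial)
            d = d_new
        block += 1
        if block == num_blocks:
            block, level = 0, level + 1
    d_unit = d / np.linalg.norm(d)
    # Best point found (may not be adversarial if r_init was too small).
    return x + r_best * d_unit
```

Because the fast check costs a single query, directions that cannot shrink the current radius are discarded cheaply; in this sketch that is where most of the query savings come from.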
Related papers
- BruSLeAttack: A Query-Efficient Score-Based Black-Box Sparse Adversarial Attack [22.408968332454062]
We study the unique, less well-understood problem of generating sparse adversarial samples simply by observing the score-based replies to model queries.
We develop BruSLeAttack, a new, faster (more query-efficient) algorithm for the problem.
Our work facilitates faster evaluation of model vulnerabilities and raises our vigilance on the safety, security and reliability of deployed systems.
arXiv Detail & Related papers (2024-04-08T08:59:26Z)
- Hard-label based Small Query Black-box Adversarial Attack [2.041108289731398]
We propose a new practical setting of hard-label-based attack with an optimisation process guided by a pretrained surrogate model.
We find that the proposed method achieves an approximately 5 times higher attack success rate than the benchmarks.
arXiv Detail & Related papers (2024-03-09T21:26:22Z)
- Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack [96.50202709922698]
A practical evaluation method should be convenient (i.e., parameter-free), efficient (i.e., fewer iterations) and reliable.
We propose a parameter-free Adaptive Auto Attack (A$^3$) evaluation method which addresses the efficiency and reliability in a test-time-training fashion.
arXiv Detail & Related papers (2022-03-10T04:53:54Z)
- Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection [89.08832589750003]
We propose a Parallel Rectangle Flip Attack (PRFA) via random search to avoid sub-optimal detection near the attacked region.
Our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.
arXiv Detail & Related papers (2022-01-22T06:00:17Z)
- Small Input Noise is Enough to Defend Against Query-based Black-box Attacks [23.712389625037442]
In this paper, we show how Small Noise Defense (SND) can defend against query-based black-box attacks (a minimal illustrative sketch appears after this list).
Even small additive input noise can neutralize most query-based attacks.
Despite its strong defense ability, SND almost maintains the original clean accuracy and computational speed.
arXiv Detail & Related papers (2021-01-13T01:45:59Z)
- Composite Adversarial Attacks [57.293211764569996]
Adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z)
- Simple and Efficient Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes [80.9350052404617]
We propose a simple and efficient Bayesian Optimization (BO) based approach for developing black-box adversarial attacks.
Issues with BO's performance in high dimensions are avoided by searching for adversarial examples in a structured low-dimensional subspace.
Our proposed approach consistently achieves 2x to 10x higher attack success rate while requiring 10x to 20x fewer queries.
arXiv Detail & Related papers (2020-07-13T04:34:57Z)
- Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data [96.92837098305898]
Black-box attacks aim to craft adversarial perturbations by querying input-output pairs of machine learning models.
Black-box attacks often suffer from the issue of query inefficiency due to the high dimensionality of the input space.
We propose a novel technique called the spanning attack, which constrains adversarial perturbations in a low-dimensional subspace via spanning an auxiliary unlabeled dataset.
arXiv Detail & Related papers (2020-05-11T05:57:15Z)
- Action-Manipulation Attacks Against Stochastic Bandits: Attacks and Defense [45.408568528354216]
We introduce a new class of attack named action-manipulation attack.
In this attack, an adversary can change the action signal selected by the user.
To defend against this class of attacks, we introduce a novel algorithm that is robust to action-manipulation attacks.
arXiv Detail & Related papers (2020-02-19T04:09:15Z)
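
As referenced in the Small Input Noise entry above, that defense idea admits a very small sketch. The wrapper below is an illustrative assumption (the model interface and the noise scale `sigma` are placeholders chosen for clarity), not the paper's implementation: it adds a fresh, small random perturbation to every query, which leaves clean accuracy largely intact while making the answers seen by a query-based attacker slightly inconsistent.

```python
import numpy as np

# Illustrative sketch of a small-input-noise defense (SND-style wrapper).
# The `model` interface and the noise scale `sigma` are assumptions for
# illustration, not the defense's reference implementation.

class NoisyClassifier:
    def __init__(self, model, sigma=0.01):
        self.model = model  # callable: input(s) -> predicted label(s)
        self.sigma = sigma  # small noise scale; too large hurts clean accuracy

    def predict(self, x):
        # Add fresh, small Gaussian noise to every query so that repeated
        # queries around the same point return slightly inconsistent answers,
        # degrading query-based gradient/boundary estimation.
        noise = self.sigma * np.random.randn(*np.shape(x))
        return self.model(x + noise)

# Example usage (hypothetical `model`): defended = NoisyClassifier(model)
# then serve defended.predict(x) instead of model(x).
```

Because the noise is resampled per query, an attacker estimating gradients or boundary distances from repeated nearby queries receives non-reproducible answers, while a small `sigma` keeps predictions on clean inputs essentially unchanged.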