CorrAttack: Black-box Adversarial Attack with Structured Search
- URL: http://arxiv.org/abs/2010.01250v1
- Date: Sat, 3 Oct 2020 01:44:16 GMT
- Title: CorrAttack: Black-box Adversarial Attack with Structured Search
- Authors: Zhichao Huang, Yaowei Huang, Tong Zhang
- Abstract summary: We present a new method for score-based adversarial attack, where the attacker queries the loss-oracle of the target model.
Our method employs a parameterized search space with a structure that captures the relationship of the gradient of the loss function.
- Score: 20.30669137726607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new method for score-based adversarial attack, where the
attacker queries the loss-oracle of the target model. Our method employs a
parameterized search space with a structure that captures the relationship of
the gradient of the loss function. We show that searching over the structured
space can be approximated by a time-varying contextual bandits problem, where
the attacker takes feature of the associated arm to make modifications of the
input, and receives an immediate reward as the reduction of the loss function.
The time-varying contextual bandits problem can then be solved by a Bayesian
optimization procedure, which can take advantage of the features of the
structured action space. The experiments on ImageNet and the Google Cloud
Vision API demonstrate that the proposed method achieves state-of-the-art
success rates and query efficiency for both undefended and defended models.
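To make the search procedure described above concrete, here is a minimal, self-contained sketch of the idea: block-wise perturbations are treated as arms described by feature vectors, a Gaussian-process surrogate is fit to the observed (feature, reward) pairs, and a UCB acquisition picks the next block to perturb. It is only an illustration under assumed toy settings (image size, block size, kernel, acquisition rule, and a placeholder loss oracle), not the authors' implementation.

```python
# Minimal illustrative sketch (not the authors' implementation) of the idea in
# the abstract: block-wise perturbations are arms of a contextual bandit, each
# arm is described by a feature vector (here just the block's location), and a
# Gaussian-process surrogate with a UCB acquisition decides which block to
# perturb next. Image size, block size, kernel, budget, and the loss oracle
# below are all placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)

H = W = 32          # hypothetical image size
BLOCK = 8           # side length of each square block (one arm per block)
EPS = 0.05          # L_inf perturbation budget
QUERIES = 50        # toy query budget

def loss_oracle(x):
    """Stand-in for the target model's attack loss; a real attack queries the model."""
    return float(np.sum(np.cos(5 * x)))

def rbf_kernel(A, B, length=0.3):
    d = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d / (2 * length ** 2))

def gp_posterior(X, y, Xq, noise=1e-3):
    """Exact GP regression: posterior mean/std of the reward at features Xq."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xq, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.sqrt(np.clip(var, 1e-9, None))

# One arm per block; the arm's feature is its normalized (row, col) position.
arms = [(i, j) for i in range(0, H, BLOCK) for j in range(0, W, BLOCK)]
feats = np.array([[i / H, j / W] for (i, j) in arms])

x = rng.uniform(0, 1, size=(H, W))   # "clean" input
delta = np.zeros_like(x)             # accumulated perturbation
best_loss = loss_oracle(x)

X_obs, y_obs = [], []                # (arm feature, observed reward) history
for t in range(QUERIES):
    if len(X_obs) < 3:               # seed the surrogate with a few random arms
        k = int(rng.integers(len(arms)))
    else:                            # Bayesian optimization step over arm features
        mean, std = gp_posterior(np.array(X_obs), np.array(y_obs), feats)
        k = int(np.argmax(mean + 2.0 * std))
    i, j = arms[k]
    trial = delta.copy()
    trial[i:i + BLOCK, j:j + BLOCK] += EPS * rng.choice([-1.0, 1.0])
    trial = np.clip(trial, -EPS, EPS)
    new_loss = loss_oracle(np.clip(x + trial, 0.0, 1.0))
    reward = best_loss - new_loss    # immediate reward = reduction of the loss
    if reward > 0:                   # keep the move only if the loss went down
        delta, best_loss = trial, new_loss
    # The bandit is time-varying: as delta changes, old observations go stale.
    # A fuller implementation would discount or refresh them; we keep them all.
    X_obs.append(feats[k]); y_obs.append(reward)

print(f"toy attack loss after {QUERIES} queries: {best_loss:.4f}")
```

Swapping the toy loss oracle for a margin loss queried from a real model, and discounting stale observations to respect the time-varying nature of the bandit, would bring this sketch closer to a practical structured search.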
Related papers
- BruSLeAttack: A Query-Efficient Score-Based Black-Box Sparse Adversarial Attack [22.408968332454062]
We study the unique, less well-understood problem of generating sparse adversarial samples simply by observing the score-based replies to model queries.
We develop BruSLeAttack, a new, faster (more query-efficient) algorithm for this problem.
Our work facilitates faster evaluation of model vulnerabilities and raises our vigilance on the safety, security and reliability of deployed systems.
arXiv Detail & Related papers (2024-04-08T08:59:26Z)
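For intuition about the sparse, score-only setting in the entry above, the sketch below greedily flips at most K pixels, keeping only edits that lower a placeholder score oracle. It is a deliberately naive baseline under assumed sizes and budgets, not the BruSLeAttack algorithm.

```python
# Toy sketch of the sparse, score-based setting: the attacker may change at
# most K pixels and only observes a scalar score per query. Plain greedy
# random search under made-up sizes and a toy oracle, not BruSLeAttack itself.
import numpy as np

rng = np.random.default_rng(0)
H = W = 28
K = 20                                   # sparsity budget: at most K changed pixels
image = rng.uniform(0, 1, size=(H, W))

def score_oracle(img):
    """Stand-in for the model's score for the true class (attacker wants it low)."""
    return float(img.sum())

adv = image.copy()
changed = set()
best = score_oracle(adv)
for _ in range(300):                     # query budget
    i, j = int(rng.integers(H)), int(rng.integers(W))
    if (i, j) not in changed and len(changed) >= K:
        continue                         # respect the L0 budget
    candidate = adv.copy()
    candidate[i, j] = rng.choice([0.0, 1.0])   # push the pixel to an extreme value
    s = score_oracle(candidate)
    if s < best:                         # keep only score-improving sparse edits
        adv, best = candidate, s
        changed.add((i, j))

print(f"changed {len(changed)} pixels, score {best:.3f}")
```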
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks [13.19708582519833]
Adversarial patch is an important form of real-world adversarial attack that brings serious risks to the robustness of deep neural networks.
Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content.
We propose a novel method to simultaneously optimize the position and perturbation for an adversarial patch, and thus obtain a high attack success rate in the black-box setting.
arXiv Detail & Related papers (2022-12-26T02:48:37Z)
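As a rough illustration of what simultaneously searching a patch's position and content can look like in a purely query-based setting, the toy loop below samples a location and a patch jointly and keeps the best-scoring candidate. Sizes, the query budget, and the score oracle are placeholder assumptions, and the naive random search stands in for the far more query-efficient optimizer of the paper above.

```python
# Toy sketch: jointly search a patch's (row, col) position and its pixel
# content using only a black-box score oracle. All sizes, budgets, and the
# oracle itself are hypothetical placeholders, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
H, W, P = 64, 64, 8                     # image size and patch side length
image = rng.uniform(0, 1, size=(H, W))

def score_oracle(img):
    """Stand-in for the model's confidence in the true class (attacker wants it low)."""
    return float(np.mean(img ** 2))

def apply_patch(img, top, left, patch):
    out = img.copy()
    out[top:top + P, left:left + P] = patch
    return out

best = (None, None, None)
best_score = score_oracle(image)
for _ in range(200):                    # query budget
    # Sample a candidate position and candidate patch content together.
    top = int(rng.integers(0, H - P + 1))
    left = int(rng.integers(0, W - P + 1))
    patch = rng.uniform(0, 1, size=(P, P))
    s = score_oracle(apply_patch(image, top, left, patch))
    if s < best_score:                  # keep the jointly best (position, content) pair
        best, best_score = (top, left, patch), s

print("best score found:", round(best_score, 4))
```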
- RamBoAttack: A Robust Query Efficient Deep Neural Network Decision Exploit [9.93052896330371]
We develop a robust query efficient attack capable of avoiding entrapment in a local minimum and misdirection from noisy gradients.
The RamBoAttack is more robust to the different sample inputs available to an adversary and the targeted class.
arXiv Detail & Related papers (2021-12-10T01:25:24Z)
- Geometrically Adaptive Dictionary Attack on Face Recognition [23.712389625037442]
We propose a strategy for query-efficient black-box attacks on face recognition.
Our core idea is to create an adversarial perturbation in the UV texture map and project it onto the face in the image.
We show overwhelming performance improvement in the experiments on the LFW and CPLFW datasets.
arXiv Detail & Related papers (2021-11-08T10:26:28Z)
- Automated Decision-based Adversarial Attacks [48.01183253407982]
We consider the practical and challenging decision-based black-box adversarial setting.
Under this setting, the attacker can only acquire the final classification labels by querying the target model.
We propose to automatically discover decision-based adversarial attack algorithms.
arXiv Detail & Related papers (2021-05-09T13:15:10Z)
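To illustrate the decision-based query model described in the entry above (labels only, no scores), here is a toy label-only search: starting from any misclassified point, binary search along the line back to the original shrinks the perturbation while keeping the label flipped. The linear toy classifier and dimensions are assumptions, and this hand-written loop is not the automated attack discovery proposed in that paper.

```python
# Toy sketch of the decision-based setting: the attacker sees only the
# predicted label, never a score. The linear "model" and sizes are placeholders.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=50)                       # toy linear classifier weights

def predict_label(x):
    """Label-only oracle: the attacker can observe nothing but this output."""
    return int(w @ x > 0)

x_orig = rng.normal(size=50)
y_orig = predict_label(x_orig)

# Find any starting point with a different label (random restarts).
x_adv = rng.normal(size=50)
while predict_label(x_adv) == y_orig:
    x_adv = rng.normal(size=50)

# Binary search between x_orig and x_adv, always keeping the adversarial side.
lo, hi = 0.0, 1.0                             # interpolation coefficients toward x_adv
for _ in range(30):
    mid = (lo + hi) / 2
    x_mid = (1 - mid) * x_orig + mid * x_adv
    if predict_label(x_mid) == y_orig:
        lo = mid                              # still the original label: move toward x_adv
    else:
        hi = mid                              # label flipped: tighten toward x_orig
x_adv = (1 - hi) * x_orig + hi * x_adv

print("perturbation norm:", round(float(np.linalg.norm(x_adv - x_orig)), 4))
```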
- Online Model Selection: a Rested Bandit Formulation [49.69377391589057]
We introduce and analyze a best arm identification problem in the rested bandit setting.
We define a novel notion of regret for this problem, where we compare to the policy that always plays the arm having the smallest expected loss at the end of the game.
Unlike known model selection efforts in the recent bandit literature, our algorithm exploits the specific structure of the problem to learn the unknown parameters of the expected loss function.
arXiv Detail & Related papers (2020-12-07T08:23:08Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.