Parallel Rectangle Flip Attack: A Query-based Black-box Attack against
Object Detection
- URL: http://arxiv.org/abs/2201.08970v1
- Date: Sat, 22 Jan 2022 06:00:17 GMT
- Title: Parallel Rectangle Flip Attack: A Query-based Black-box Attack against
Object Detection
- Authors: Siyuan Liang, Baoyuan Wu, Yanbo Fan, Xingxing Wei, Xiaochun Cao
- Abstract summary: We propose a Parallel Rectangle Flip Attack (PRFA) via random search to avoid sub-optimal detection near the attacked region.
Our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.
- Score: 89.08832589750003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection has been widely used in many safety-critical tasks, such as
autonomous driving. However, its vulnerability to adversarial examples has not
been sufficiently studied, especially under the practical scenario of black-box
attacks, where the attacker can only access the query feedback of predicted
bounding-boxes and top-1 scores returned by the attacked model. Compared with
black-box attacks on image classification, black-box attacks on detection face
two main challenges. First, even if one bounding-box is
successfully attacked, another sub-optimal bounding-box may be detected near
the attacked one. Second, there are multiple bounding-boxes, leading
to a very high attack cost. To address these challenges, we propose a Parallel
Rectangle Flip Attack (PRFA) via random search; Fig. 1 of the paper illustrates
the difference between our method and other attacks. Specifically, we
generate perturbations in each rectangle patch to avoid sub-optimal detection
near the attacked region. In addition, exploiting the observation that adversarial
perturbations are mainly located around objects' contours and critical points under
white-box attacks, the search space of attacked rectangles is reduced to
improve the attack efficiency. Moreover, we develop a parallel mechanism of
attacking multiple rectangles simultaneously to further accelerate the attack
process. Extensive experiments demonstrate that our method can effectively and
efficiently attack various popular object detectors, including anchor-based and
anchor-free, and generate transferable adversarial examples.
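As a rough illustration of the random-search idea described in the abstract, the sketch below shows a hypothetical rectangle-flip black-box attack: flip the sign of an L-infinity perturbation inside several randomly placed rectangles in parallel, and keep the flip only if the detector's queried score improves. The score function, parameter names, and the omission of the contour-based search-space restriction are all assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def rectangle_flip_attack(image, score_fn, eps=8 / 255, n_rects=4,
                          max_queries=1000, rect_size=16, rng=None):
    """Hypothetical sketch of a parallel rectangle-flip random search.

    score_fn(image) -> scalar attack loss to MINIMIZE (e.g. the sum of
    top-1 detection scores returned by the black-box detector). All
    names here are illustrative, not the authors' implementation.
    """
    rng = np.random.default_rng(rng)
    h, w, _ = image.shape
    # Start from a random sign perturbation on the L_inf ball of radius eps.
    delta = eps * rng.choice([-1.0, 1.0], size=image.shape)
    best = score_fn(np.clip(image + delta, 0.0, 1.0))
    queries = 1
    while queries < max_queries:
        cand = delta.copy()
        # Flip the perturbation sign inside several rectangles in parallel.
        # PRFA additionally restricts rectangles to regions near object
        # contours and critical points, which this sketch omits for brevity.
        for _ in range(n_rects):
            y = rng.integers(0, max(1, h - rect_size))
            x = rng.integers(0, max(1, w - rect_size))
            cand[y:y + rect_size, x:x + rect_size, :] *= -1.0
        loss = score_fn(np.clip(image + cand, 0.0, 1.0))
        queries += 1
        if loss < best:  # keep the flipped rectangles only if they help
            best, delta = loss, cand
    return np.clip(image + delta, 0.0, 1.0)
```

Because sign flips preserve the perturbation's magnitude, every candidate stays inside the eps-ball, so no projection step is needed between queries.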
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- CGBA: Curvature-aware Geometric Black-box Attack [39.63633212337113]
Decision-based black-box attacks often necessitate a large number of queries to craft an adversarial example.
We propose a novel query-efficient curvature-aware geometric decision-based black-box attack (CGBA)
We develop a new query-efficient variant, CGBA-H, that is adapted for the targeted attack.
arXiv Detail & Related papers (2023-08-06T17:18:04Z)
- Ensemble-based Blackbox Attacks on Dense Prediction [16.267479602370543]
We show that a carefully designed ensemble can create effective attacks for a number of victim models.
In particular, we show that normalization of the weights for individual models plays a critical role in the success of the attacks.
Our proposed method can also generate a single perturbation that can fool multiple blackbox detection and segmentation models simultaneously.
arXiv Detail & Related papers (2023-03-25T00:08:03Z)
- T-SEA: Transfer-based Self-Ensemble Attack on Object Detection [9.794192858806905]
We propose a single-model transfer-based black-box attack on object detection, utilizing only one model to achieve a high-transferability adversarial attack on multiple black-box detectors.
We analogize patch optimization with regular model optimization, proposing a series of self-ensemble approaches on the input data, the attacked model, and the adversarial patch.
arXiv Detail & Related papers (2022-11-16T10:27:06Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- Local Black-box Adversarial Attacks: A Query Efficient Approach [64.98246858117476]
Adversarial attacks have threatened the application of deep neural networks in security-sensitive scenarios.
We propose a novel framework to perturb the discriminative areas of clean examples only within limited queries in black-box attacks.
We conduct extensive experiments to show that our framework can significantly improve query efficiency in black-box attacks while maintaining a high attack success rate.
arXiv Detail & Related papers (2021-01-04T15:32:16Z)
- RayS: A Ray Searching Method for Hard-label Adversarial Attack [99.72117609513589]
We present the Ray Searching attack (RayS), which greatly improves the hard-label attack effectiveness as well as efficiency.
RayS attack can also be used as a sanity check for possible "falsely robust" models.
arXiv Detail & Related papers (2020-06-23T07:01:50Z)
- Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data [96.92837098305898]
Black-box attacks aim to craft adversarial perturbations by querying input-output pairs of machine learning models.
Black-box attacks often suffer from the issue of query inefficiency due to the high dimensionality of the input space.
We propose a novel technique called the spanning attack, which constrains adversarial perturbations in a low-dimensional subspace via spanning an auxiliary unlabeled dataset.
arXiv Detail & Related papers (2020-05-11T05:57:15Z)
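The spanning-attack idea in the last entry above lends itself to a short sketch: constrain adversarial perturbations to the low-dimensional subspace spanned by an auxiliary unlabeled dataset. Below is a minimal, hypothetical illustration of such a subspace projection; the function name, the SVD-based basis construction, and all parameters are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def spanning_projection(aux_images, k=16):
    """Hypothetical sketch: build an orthonormal basis from an auxiliary
    unlabeled set and return a function that projects perturbations onto
    the spanned low-dimensional subspace."""
    X = aux_images.reshape(len(aux_images), -1)  # (n, d) flattened images
    X = X - X.mean(axis=0)                       # center the auxiliary set
    # Top-k right singular vectors span the low-dimensional search subspace.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:k]                               # (k, d), orthonormal rows

    def project(delta):
        v = delta.reshape(-1)
        coeffs = basis @ v                       # coordinates in the subspace
        return (basis.T @ coeffs).reshape(delta.shape)

    return project
```

A query-based attack would then search over the k subspace coefficients instead of the full d-dimensional pixel space, which is the source of the claimed query savings.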
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.