Zero-Query Transfer Attacks on Context-Aware Object Detectors
- URL: http://arxiv.org/abs/2203.15230v1
- Date: Tue, 29 Mar 2022 04:33:06 GMT
- Title: Zero-Query Transfer Attacks on Context-Aware Object Detectors
- Authors: Zikui Cai, Shantanu Rane, Alejandro E. Brito, Chengyu Song, Srikanth
V. Krishnamurthy, Amit K. Roy-Chowdhury, M. Salman Asif
- Abstract summary: Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
- Score: 95.18656036716972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks perturb images such that a deep neural network produces
incorrect classification results. A promising approach to defend against
adversarial attacks on natural multi-object scenes is to impose a
context-consistency check, wherein, if the detected objects are not consistent
with an appropriately defined context, then an attack is suspected. Stronger
attacks are needed to fool such context-aware detectors. We present the first
approach for generating context-consistent adversarial attacks that can evade
the context-consistency check of black-box object detectors operating on
complex, natural scenes. Unlike many black-box attacks that perform repeated
attempts and open themselves to detection, we assume a "zero-query" setting,
where the attacker has no knowledge of the classification decisions of the
victim system. First, we derive multiple attack plans that assign incorrect
labels to victim objects in a context-consistent manner. Then we design and use
a novel data structure that we call the perturbation success probability
matrix, which enables us to filter the attack plans and choose the one most
likely to succeed. This final attack plan is implemented using a
perturbation-bounded adversarial attack algorithm. We compare our zero-query
attack against a few-query scheme that repeatedly checks if the victim system
is fooled. We also compare against state-of-the-art context-agnostic attacks.
Against a context-aware defense, the fooling rate of our zero-query approach is
significantly higher than context-agnostic approaches and higher than that
achievable with up to three rounds of the few-query scheme.
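To make the plan-selection step concrete, here is a minimal sketch in Python of filtering candidate attack plans with a perturbation success probability (PSP) matrix. The toy label set, the `psp` values, and the independence assumption across objects are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

labels = ["car", "person", "bicycle", "stop sign"]
L = len(labels)

# psp[i, j]: assumed probability, estimated offline on surrogate detectors,
# that an object of class i can be perturbed into class j within the budget.
rng = np.random.default_rng(0)
psp = rng.uniform(0.1, 0.9, size=(L, L))
np.fill_diagonal(psp, 1.0)  # leaving a label unchanged always "succeeds"

# Victim objects and candidate context-consistent attack plans (each plan
# assigns a target label to every victim object).
victims = [labels.index("car"), labels.index("person")]
plans = [
    (labels.index("bicycle"), labels.index("bicycle")),
    (labels.index("stop sign"), labels.index("car")),
]

def plan_success_prob(plan):
    # Treat per-object perturbations as independent, so a plan's success
    # probability is the product of the PSP entries it uses.
    return float(np.prod([psp[s, t] for s, t in zip(victims, plan)]))

best = max(plans, key=plan_success_prob)
print("chosen plan:", [labels[t] for t in best],
      f"p ~ {plan_success_prob(best):.3f}")
```

The selected plan would then be realized with a perturbation-bounded attack, e.g. a PGD-style optimization against surrogate detectors, and transferred to the black-box victim without issuing any queries.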
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks; a toy sketch of the query-similarity idea it builds on follows below.
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
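As a rough illustration of why query-based attacks are detectable at all, the sketch below flags a client whose consecutive queries have unusually similar embeddings. The linear stand-in encoder and the 0.95 threshold are assumptions for illustration; ACPT itself uses a contrastively prompt-tuned CLIP image encoder.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
encoder = torch.nn.Linear(3 * 224 * 224, 512)  # stand-in for an image encoder

def embed(imgs: torch.Tensor) -> torch.Tensor:
    # L2-normalized embeddings so dot products are cosine similarities.
    return F.normalize(encoder(imgs.flatten(1)), dim=-1)

def looks_like_attack(imgs: torch.Tensor, threshold: float = 0.95) -> bool:
    # Iterative black-box attacks submit near-duplicate images, so most
    # consecutive query pairs exceed a benign similarity threshold.
    z = embed(imgs)
    sims = (z[:-1] * z[1:]).sum(dim=-1)
    return bool((sims > threshold).float().mean() > 0.5)

queries = torch.randn(6, 3, 224, 224)
queries[1:] = queries[0] + 0.01 * torch.randn_like(queries[1:])  # near-duplicates
print(looks_like_attack(queries))  # True for this synthetic attack trace
```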
- GLOW: Global Layout Aware Attacks for Object Detection [27.46902978168904]
Adversarial attacks aim to perturb images such that a predictor outputs incorrect results.
We present the first approach that copes with various attack requests by generating global layout-aware adversarial attacks.
In experiments, we design multiple types of attack requests and validate our ideas on the MS COCO validation set.
arXiv Detail & Related papers (2023-02-27T22:01:34Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks for object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors into fabricating extra false objects with specific target labels; a toy version of this objective is sketched below.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
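A minimal sketch of what a fabrication objective might look like, using a toy convolutional "detector" that emits per-location class logits; the network, budgets, and PGD-style loop are assumptions for illustration, not the paper's attack.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, target = 10, 3
# Toy "detector": a conv backbone plus a 1x1 conv of per-location class logits.
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, num_classes, 1),
)

img = torch.rand(1, 3, 64, 64)
delta = torch.zeros_like(img, requires_grad=True)
eps, alpha = 8 / 255, 2 / 255
wanted = torch.full((1, 64, 64), target)  # target label at every location

for _ in range(10):  # PGD-style steps that raise target-class confidence
    logits = net(img + delta)                 # (1, C, H, W)
    loss = F.cross_entropy(logits, wanted)    # low loss = confident false objects
    loss.backward()
    with torch.no_grad():
        delta -= alpha * delta.grad.sign()    # descend the fabrication loss
        delta.clamp_(-eps, eps)               # stay within the L-inf budget
        delta.grad.zero_()

print("mean target-class prob:",
      net(img + delta).softmax(1)[0, target].mean().item())
```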
- Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection [89.08832589750003]
We propose a Parallel Rectangle Flip Attack (PRFA) via random search to avoid sub-optimal detection near the attacked region.
Our method can effectively and efficiently attack various popular object detectors, both anchor-based and anchor-free, and generate transferable adversarial examples; a toy random-search version is sketched below.
arXiv Detail & Related papers (2022-01-22T06:00:17Z)
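A toy, score-based rendition of the rectangle-flip idea, assuming a hypothetical `detector_score` query interface; PRFA's parallel search and prior-guided sampling are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_score(img: np.ndarray) -> float:
    # Stand-in for "max detection confidence on the victim object".
    return float(np.abs(img).mean())

def rectangle_flip_attack(img, eps=8 / 255, steps=200, rect=32):
    h, w = img.shape[:2]
    delta = eps * rng.choice([-1.0, 1.0], size=img.shape)  # init perturbation
    best = detector_score(np.clip(img + delta, 0, 1))
    for _ in range(steps):
        y, x = rng.integers(0, h - rect), rng.integers(0, w - rect)
        cand = delta.copy()
        cand[y:y + rect, x:x + rect] *= -1.0  # flip sign inside the rectangle
        score = detector_score(np.clip(img + cand, 0, 1))
        if score < best:  # keep the flip only if detection weakens
            delta, best = cand, score
    return np.clip(img + delta, 0, 1)

adv = rectangle_flip_attack(rng.uniform(size=(128, 128, 3)))
print("final score:", detector_score(adv))
```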
- ADC: Adversarial attacks against object Detection that evade Context consistency checks [55.8459119462263]
We show that even context consistency checks can be brittle to properly crafted adversarial examples.
We propose an adaptive framework to generate examples that subvert such defenses.
Our results suggest that how to robustly model context and check its consistency is still an open problem.
arXiv Detail & Related papers (2021-10-24T00:25:09Z)
- BAARD: Blocking Adversarial Examples by Testing for Applicability, Reliability and Decidability [12.079529913120593]
Adversarial defenses protect machine learning models from adversarial attacks, but are often tailored to one type of model or attack.
We take inspiration from the concept of Applicability Domain in cheminformatics.
We propose a simple yet robust triple-stage data-driven framework that checks the input globally and locally; a toy version of the three stages is sketched below.
arXiv Detail & Related papers (2021-05-02T15:24:33Z)
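A toy rendering of a triple-stage check on synthetic clustered features; the exact statistics and thresholds here are illustrative assumptions, not BAARD's.

```python
import numpy as np

rng = np.random.default_rng(0)
centers = 4.0 * rng.normal(size=(3, 16))               # three class centers
train_y = rng.integers(0, 3, size=500)                 # benign labels
train_x = centers[train_y] + rng.normal(size=(500, 16))
K = 10

lo, hi = train_x.min(axis=0), train_x.max(axis=0)      # global feature range

def knn(x):
    d = np.linalg.norm(train_x - x, axis=1)
    idx = np.argsort(d)[:K]
    return d[idx], train_y[idx]

# Calibrate a reliability threshold from benign nearest-neighbor distances.
tau = np.percentile([knn(x)[0].mean() for x in train_x[:100]], 95)

def accept(x, pred):
    if np.any(x < lo) or np.any(x > hi):  # applicability: inside global range
        return False
    d, y = knn(x)
    if d.mean() > tau:                    # reliability: close to benign data
        return False
    return (y == pred).mean() >= 0.5      # decidability: neighbors agree

print(accept(train_x[0], int(train_y[0])))  # True for a benign sample
```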
- Composite Adversarial Attacks [57.293211764569996]
An adversarial attack is a technique for deceiving machine learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms; a toy version of this search is sketched below.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z)
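A toy version of the composition search, with stand-in "attacks" and a stand-in scoring function; CAA's actual search space, attack policies, and budget handling are more elaborate.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def fgsm_like(x):     # stand-in single-step perturbation
    return np.clip(x + 0.03 * np.sign(rng.normal(size=x.shape)), 0, 1)

def noise_like(x):    # stand-in additive-noise attack
    return np.clip(x + 0.02 * rng.normal(size=x.shape), 0, 1)

def spatial_like(x):  # stand-in "spatial" attack: shift pixels by one row
    return np.roll(x, shift=1, axis=0)

def fooling_score(x_adv):  # stand-in for evaluating against a defended model
    return float(np.abs(x_adv - 0.5).mean())

ops = {"fgsm": fgsm_like, "noise": noise_like, "spatial": spatial_like}
x = rng.uniform(size=(32, 32, 3))

# Exhaustively score every length-2 composition and keep the best; CAA
# searches this kind of space automatically instead of enumerating it.
best = max(itertools.product(ops, repeat=2),
           key=lambda seq: fooling_score(ops[seq[1]](ops[seq[0]](x))))
print("best composition:", " -> ".join(best))
```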