GLOW: Global Layout Aware Attacks for Object Detection
- URL: http://arxiv.org/abs/2302.14166v1
- Date: Mon, 27 Feb 2023 22:01:34 GMT
- Title: GLOW: Global Layout Aware Attacks for Object Detection
- Authors: Jun Bao, Buyu Liu, Jianping Fan and Jun Yu
- Abstract summary: Adversarial attacks aim to perturb images such that a predictor outputs incorrect results.
We present the first approach that copes with various attack requests by generating global layout-aware adversarial attacks.
In experiments, we design multiple types of attack requests and validate our ideas on the MS COCO validation set.
- Score: 27.46902978168904
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Adversarial attacks aim to perturb images such that a predictor outputs
incorrect results. Because structured attacks remain under-explored, imposing
consistency checks on natural multi-object scenes is a promising and practical
defense against conventional adversarial attacks. More capable attacks, to this
end, should be able to fool defenses that apply such consistency checks. Therefore,
we present GLOW, the first approach that copes with various attack requests by
generating global layout-aware adversarial attacks where both categorical and
geometric layout constraints are explicitly established. Specifically, we focus
on the object detection task: given a victim image, GLOW first localizes victim
objects according to the target labels, and then generates multiple attack
plans together with their context-consistency scores. Our proposed GLOW, on
the one hand, is capable of handling various types of requests, including
single or multiple victim objects, with or without specified victim objects. On
the other hand, it produces a consistency score for each attack plan,
reflecting the overall contextual consistency, in which both semantic category and
global scene layout are considered. In experiments, we design multiple types of
attack requests and validate our ideas on the MS COCO validation set. Extensive
experimental results demonstrate that we achieve about a 40$\%$ average
relative improvement over state-of-the-art methods on the conventional
single-object attack request; moreover, our method significantly outperforms
SOTAs on more generic attack requests by at least 30$\%$; finally, our method
produces superior performance under the challenging zero-query black-box
setting, or 30$\%$ better than SOTAs. Our code, model and attack requests will
be made available.
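The abstract sketches a two-stage pipeline: localize the victim objects named in a request, then enumerate candidate attack plans and rank them by a context-consistency score that weighs both semantic category and global scene layout. Since the paper's code is not yet released, the following is a minimal illustrative sketch of that plan-ranking idea; the names (`Detection`, `semantic_consistency`, `layout_consistency`, `rank_attack_plans`) and the toy scoring rules are assumptions for exposition, not GLOW's actual method.

```python
# Illustrative sketch only: hypothetical plan enumeration and
# context-consistency scoring in the spirit of the GLOW abstract.
from dataclasses import dataclass
from itertools import combinations, product
from typing import Dict, List, Tuple

@dataclass
class Detection:
    label: str
    box: Tuple[float, float, float, float]  # (x1, y1, x2, y2)

# Toy stand-in for a learned label co-occurrence prior (assumption):
# label pairs that rarely appear together in natural scenes.
IMPLAUSIBLE_PAIRS = {frozenset({"surfboard", "oven"}),
                     frozenset({"zebra", "laptop"})}

def semantic_consistency(labels: List[str]) -> float:
    """Penalize label sets containing implausible co-occurrences."""
    bad = sum(1 for pair in combinations(set(labels), 2)
              if frozenset(pair) in IMPLAUSIBLE_PAIRS)
    return 1.0 / (1.0 + bad)

def layout_consistency(dets: List[Detection]) -> float:
    """Toy geometric prior: mildly penalize heavy box overlap."""
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)
    overlaps = [iou(a.box, b.box)
                for i, a in enumerate(dets) for b in dets[i + 1:]]
    return 1.0 - (sum(overlaps) / len(overlaps) if overlaps else 0.0)

def rank_attack_plans(scene: List[Detection], victim_idxs: List[int],
                      target_labels: List[str]) -> List[Tuple[Dict[int, str], float]]:
    """Enumerate per-victim target-label assignments and score each plan."""
    plans = []
    for assignment in product(target_labels, repeat=len(victim_idxs)):
        attacked = [Detection(d.label, d.box) for d in scene]
        for idx, new_label in zip(victim_idxs, assignment):
            attacked[idx].label = new_label
        score = (semantic_consistency([d.label for d in attacked])
                 * layout_consistency(attacked))
        plans.append((dict(zip(victim_idxs, assignment)), score))
    return sorted(plans, key=lambda p: p[1], reverse=True)

if __name__ == "__main__":
    scene = [Detection("person", (10, 10, 60, 120)),
             Detection("boat", (80, 40, 200, 110)),
             Detection("surfboard", (70, 100, 150, 130))]
    # Request: relabel object 1 ("boat"). The contextually consistent
    # target ("kayak") ranks above the implausible one ("oven").
    for plan, score in rank_attack_plans(scene, [1], ["oven", "kayak"]):
        print(plan, round(score, 3))
```

In GLOW itself these toy priors would presumably be replaced by models learned from real co-occurrence statistics and scene layouts; the sketch only illustrates how a single consistency score per plan can rank alternative attack requests.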
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a $>99\%$ detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples [26.37278338032268]
Adversarial examples are typically optimized with gradient-based attacks.
Each is shown to outperform its predecessors using different experimental setups.
This leads to overly optimistic and even biased evaluations.
arXiv Detail & Related papers (2024-04-30T11:19:05Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- PRAT: PRofiling Adversarial aTtacks [52.693011665938734]
We introduce the novel problem of PRofiling Adversarial aTtacks (PRAT).
Given an adversarial example, the objective of PRAT is to identify the attack used to generate it.
We use the Adversarial Identification Dataset (AID) to devise a novel framework for the PRAT objective.
arXiv Detail & Related papers (2023-09-20T07:42:51Z)
- To Make Yourself Invisible with Adversarial Semantic Contours [47.755808439588094]
Adversarial Semantic Contour (ASC) is an estimate of a Bayesian formulation of a sparse attack with a deceived prior on the object contour.
We show that ASC can corrupt the prediction of 9 modern detectors with different architectures.
We conclude with the caution that contours are a common weakness of object detectors across architectures.
arXiv Detail & Related papers (2023-03-01T07:22:39Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks for object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)