Towards Generic and Controllable Attacks Against Object Detection
- URL: http://arxiv.org/abs/2307.12342v1
- Date: Sun, 23 Jul 2023 14:37:13 GMT
- Title: Towards Generic and Controllable Attacks Against Object Detection
- Authors: Guopeng Li, Yue Xu, Jian Ding, Gui-Song Xia
- Abstract summary: Existing adversarial attacks against Object Detectors (ODs) suffer from two inherent limitations.
We propose a generic white-box attack, LGP, to blind mainstream object detectors with controllable perturbations.
Experimentally, the proposed LGP successfully attacked sixteen state-of-the-art object detectors on MS-COCO and DOTA datasets.
- Score: 35.12702394150046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing adversarial attacks against Object Detectors (ODs) suffer from two
inherent limitations. Firstly, ODs have complicated meta-structure designs,
hence most advanced attacks for ODs concentrate on attacking specific
detector-intrinsic structures, which makes it hard for them to work on other
detectors and motivates us to design a generic attack against ODs. Secondly,
most works against ODs craft Adversarial Examples (AEs) by generalizing
image-level attacks from classification to detection, which introduces redundant
computation and perturbations in semantically meaningless areas (e.g.,
backgrounds) and highlights the need for controllable attacks against
ODs. To this end, we propose a generic white-box attack, LGP (local
perturbations with adaptively global attacks), to blind mainstream object
detectors with controllable perturbations. For a detector-agnostic attack, LGP
tracks high-quality proposals and optimizes three heterogeneous losses
simultaneously. In this way, we can fool the crucial components of ODs with a
part of their outputs without the limitations of specific structures. Regarding
controllability, we establish an object-wise constraint that exploits
foreground-background separation adaptively to induce the attachment of
perturbations to foregrounds. Experimentally, the proposed LGP successfully
attacked sixteen state-of-the-art object detectors on MS-COCO and DOTA
datasets, with promising imperceptibility and transferability. Code is
publicly available at https://github.com/liguopeng0923/LGP.git
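Below is a minimal, hedged PyTorch-style sketch of the overall attack structure described above: perturbations are restricted to foreground regions, a set of high-quality proposals is tracked, and three heterogeneous losses (classification, localization, confidence) are optimized jointly. The detector interface (a callable returning per-proposal boxes, class logits, and objectness), the concrete loss forms, and all hyperparameters are illustrative assumptions rather than the authors' released implementation.
```python
# Minimal sketch of a generic, object-wise constrained white-box attack in the
# spirit of LGP. The detector interface, loss forms, and hyperparameters are
# illustrative assumptions, not the authors' released implementation.
import torch
import torch.nn.functional as F

def lgp_style_attack(detector, image, fg_mask, steps=50, lr=0.01, topk=100):
    """
    detector: callable, detector(adv) -> (boxes [N,4] xyxy, cls_logits [N,C], obj [N])
              (assumed wrapper interface so the loop stays detector-agnostic)
    image:    (1, 3, H, W) tensor in [0, 1]
    fg_mask:  (1, 1, H, W) tensor, 1 on foreground objects, 0 on background
    """
    delta = torch.zeros_like(image, requires_grad=True)       # learnable perturbation
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        # object-wise constraint: attach perturbations to foregrounds only
        adv = (image + delta * fg_mask).clamp(0.0, 1.0)
        boxes, cls_logits, obj = detector(adv)

        # track a subset of high-quality proposals instead of all raw outputs
        k = min(topk, obj.numel())
        idx = obj.topk(k).indices

        # 1) classification: push logits away from the currently predicted class
        l_cls = -F.cross_entropy(cls_logits[idx], cls_logits[idx].argmax(dim=1))
        # 2) localization: compress the predicted boxes toward degenerate size
        wh = (boxes[idx, 2:] - boxes[idx, :2]).clamp(min=0.0)
        l_box = wh.prod(dim=1).mean() / float(image.shape[-2] * image.shape[-1])
        # 3) confidence: suppress objectness of the tracked proposals
        l_obj = obj[idx].mean()
        # imperceptibility: keep the foreground-only perturbation small
        l_pert = (delta * fg_mask).abs().mean()

        loss = l_cls + l_box + l_obj + 10.0 * l_pert
        opt.zero_grad()
        loss.backward()
        opt.step()

    return (image + delta.detach() * fg_mask).clamp(0.0, 1.0)
```
In LGP proper, the loss balancing and the foreground-background separation are derived adaptively from the detector's own outputs rather than supplied as a fixed mask; see the repository above for the actual method.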
Related papers
- Hi-ALPS -- An Experimental Robustness Quantification of Six LiDAR-based Object Detection Systems for Autonomous Driving [49.64902130083662] (2025-03-21)
3D object detection systems (ODs) play a key role in the driving decisions of autonomous vehicles.
Adversarial examples are small, sometimes sophisticated perturbations of the input data that falsify the prediction of the OD.
We quantify the robustness of six state-of-the-art 3D ODs under different types of perturbations.
- AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection [9.539021752700823] (2025-03-09)
AnywhereDoor is a multi-target backdoor attack for object detection.
It allows adversaries to make objects disappear, fabricate new ones or mislabel them, either across all object classes or specific ones.
It improves attack success rates by 26% compared to adaptations of existing methods for such flexible control.
- NumbOD: A Spatial-Frequency Fusion Attack Against Object Detectors [30.532420461413487] (2024-12-22)
We propose NumbOD, a spatial-frequency fusion attack against various object detectors (ODs).
We first design a dual-track attack target selection strategy to select high-quality bounding boxes from OD outputs for targeting.
We employ directional perturbations to shift and compress predicted boxes and change classification results to deceive ODs.
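As a rough, hedged illustration of the "shift and compress" idea (not NumbOD's actual formulation), predicted boxes can be regressed toward shifted and shrunken pseudo-targets, so that minimizing the loss drags detections away from the true objects; the box format, offsets, and shrink factor below are assumptions.
```python
# Hedged sketch: regress predicted boxes toward shifted-and-shrunken pseudo-targets
# so detections drift off the real objects. Box format, offsets, and the shrink
# factor are assumptions for illustration only.
import torch
import torch.nn.functional as F

def shift_and_compress_loss(boxes, dx=30.0, dy=30.0, shrink=0.5):
    """boxes: (N, 4) predicted boxes in xyxy pixel coordinates."""
    cx = (boxes[:, 0] + boxes[:, 2]) * 0.5 + dx                 # shifted centers
    cy = (boxes[:, 1] + boxes[:, 3]) * 0.5 + dy
    w = (boxes[:, 2] - boxes[:, 0]).clamp(min=1.0) * shrink     # compressed sizes
    h = (boxes[:, 3] - boxes[:, 1]).clamp(min=1.0) * shrink
    pseudo = torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=1)
    # pull the current predictions toward the off-object pseudo-targets
    return F.smooth_l1_loss(boxes, pseudo.detach())
```
NumbOD additionally fuses spatial- and frequency-domain perturbations and uses its dual-track strategy to select which boxes to attack; neither component is modeled in this sketch.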
- Seamless Detection: Unifying Salient Object Detection and Camouflaged Object Detection [73.85890512959861] (2024-12-22)
We propose a task-agnostic framework to unify Salient Object Detection (SOD) and Camouflaged Object Detection (COD).
We design a simple yet effective contextual decoder involving the interval-layer and global context, which achieves an inference speed of 67 fps.
Experiments on public SOD and COD datasets demonstrate the superiority of our proposed framework in both supervised and unsupervised settings.
- Detector Collapse: Physical-World Backdooring Object Detection to Catastrophic Overload or Blindness in Autonomous Driving [17.637155085620634] (2024-04-17)
Detector Collapse (DC) is a brand-new backdoor attack paradigm tailored for object detection.
DC is designed to instantly incapacitate detectors, i.e., to severely impair the detector's performance and culminate in a denial of service.
We introduce a novel poisoning strategy exploiting natural objects, enabling DC to act as a practical backdoor in real-world environments.
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047] (2024-04-02)
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
- Attacking Important Pixels for Anchor-free Detectors [47.524554948433995] (2023-01-26)
Existing adversarial attacks on object detection focus on attacking anchor-based detectors.
We propose the first adversarial attack dedicated to anchor-free detectors.
Our proposed methods achieve state-of-the-art attack performance and transferability on both object detection and human pose estimation tasks.
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503] (2022-12-13)
Adversarial attacks on object detection include targeted and untargeted attacks.
The new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
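As a hedged sketch of this fabrication mode (not the paper's implementation), low-confidence proposals from a generic detector can be pushed toward a chosen target class while their objectness is raised; the interface and weighting are assumptions.
```python
# Hedged sketch of an object-fabrication objective: make a detector "hallucinate"
# extra objects of a chosen class. Interface and weighting are assumptions; the
# paper's actual formulation may differ substantially.
import torch
import torch.nn.functional as F

def fabrication_loss(cls_logits, obj, target_class, num_fab=20):
    """
    cls_logits: (N, C) per-proposal class logits
    obj:        (N,) per-proposal objectness / confidence scores
    Minimizing the returned scalar encourages false objects of `target_class`.
    """
    k = min(num_fab, obj.numel())
    # pick the proposals the detector currently believes are background
    bg_idx = obj.topk(k, largest=False).indices
    targets = torch.full((k,), target_class, dtype=torch.long, device=cls_logits.device)
    # push those proposals toward the target label and raise their confidence
    l_label = F.cross_entropy(cls_logits[bg_idx], targets)
    l_conf = -obj[bg_idx].mean()
    return l_label + l_conf
```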
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692] (2022-07-20)
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
- Discriminator-Free Generative Adversarial Attack [87.71852388383242] (2021-07-20)
Generative adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
- Transferable Adversarial Examples for Anchor Free Object Detection [44.7397139463144] (2021-06-03)
We present the first adversarial attack on anchor-free object detectors.
We leverage high-level semantic information to efficiently generate transferable adversarial examples.
Our proposed method achieves state-of-the-art performance and transferability.
- Relevance Attack on Detectors [24.318876747711055] (2020-08-16)
This paper focuses on high-transferable adversarial attacks on detectors, which are hard to attack in a black-box manner.
We are the first to suggest that the relevance map from interpreters for detectors is such a property.
Based on it, we design a Relevance Attack on Detectors (RAD), which achieves state-of-the-art transferability.
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053] (2020-06-08)
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.