Relevance Attack on Detectors
- URL: http://arxiv.org/abs/2008.06822v4
- Date: Tue, 23 Nov 2021 14:15:34 GMT
- Title: Relevance Attack on Detectors
- Authors: Sizhe Chen, Fan He, Xiaolin Huang, Kun Zhang
- Abstract summary: This paper focuses on highly transferable adversarial attacks on detectors, which are hard to attack in a black-box manner.
We are the first to suggest that the relevance map from interpreters for detectors is such a common property.
Based on it, we design a Relevance Attack on Detectors (RAD), which achieves state-of-the-art transferability.
- Score: 24.318876747711055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on highly transferable adversarial attacks on detectors, which are hard to attack in a black-box manner because of their multiple-output characteristics and the diversity across architectures. To pursue high attack transferability, one plausible way is to find a common property across detectors, which facilitates the discovery of common weaknesses. We are the first to suggest that the relevance map from interpreters for detectors is such a property. Based on it, we design a Relevance Attack on Detectors (RAD), which achieves state-of-the-art transferability, exceeding existing results by over 20%. On MS COCO, the detection mAPs of all 8 black-box architectures are more than halved and the segmentation mAPs are also significantly affected. Given the great transferability of RAD, we generate the first adversarial dataset for object detection and instance segmentation, i.e., Adversarial Objects in COntext (AOCO), which helps to quickly evaluate and improve the robustness of detectors.
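The abstract describes RAD only at a high level: perturb the input so that the relevance map an interpreter produces for the detector is distorted, and the damage transfers across architectures. The sketch below illustrates that general idea with stand-in choices that are not from the paper: a plain input-gradient saliency map as the "interpreter", torchvision's Faster R-CNN as the surrogate, and an L_inf PGD loop that concentrates its perturbation on high-relevance pixels while suppressing detection confidence. RAD's actual interpreter and loss are not specified in the abstract, so treat this as a rough illustration rather than the paper's algorithm.
```python
# A minimal, hedged sketch of a relevance-guided attack in the spirit of RAD.
# Assumptions not taken from the paper: gradient saliency as the interpreter,
# Faster R-CNN as the surrogate, and a masked L_inf PGD loop as the attack.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detection_scores(img):
    """Concatenated confidence scores of all detections for a 3xHxW image in [0, 1]."""
    return torch.cat([out["scores"] for out in model([img])])

def relevance_map(img):
    """Stand-in interpreter: |d(sum of scores) / d(input)|, summed over channels."""
    img = img.clone().requires_grad_(True)
    grad, = torch.autograd.grad(detection_scores(img).sum(), img)
    return grad.abs().sum(dim=0)  # H x W saliency

def relevance_guided_attack(img, eps=8 / 255, alpha=2 / 255, steps=10):
    """L_inf PGD restricted to the top-10% most relevant pixels of the clean image."""
    rel = relevance_map(img)
    mask = (rel > rel.quantile(0.90)).float()   # where the surrogate "looks"
    x = img.clone()
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        grad, = torch.autograd.grad(detection_scores(x).sum(), x)
        x = x - alpha * grad.sign() * mask      # lower confidence, only on relevant pixels
        x = img + (x - img).clamp(-eps, eps)    # project back into the eps-ball
        x = x.clamp(0.0, 1.0)
    return x.detach()
```
On a COCO image loaded as a [0, 1] float tensor, `relevance_guided_attack(img)` returns a perturbed image whose detections on the surrogate should degrade; whether the degradation transfers as strongly as RAD's reported numbers depends on the actual interpreter and loss, which this sketch does not reproduce.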
Related papers
- Enhancing Infrared Small Target Detection Robustness with Bi-Level Adversarial Framework [61.34862133870934]
We propose a bi-level adversarial framework to promote the robustness of detection in the presence of distinct corruptions.
Our scheme improves IoU by 21.96% across a wide array of corruptions and by 4.97% on the general benchmark.
arXiv Detail & Related papers (2023-09-03T06:35:07Z)
- Illicit item detection in X-ray images for security applications [7.519872646378835]
Automated detection of contraband items in X-ray images can significantly increase public safety.
Modern computer vision algorithms relying on Deep Neural Networks (DNNs) have proven capable of undertaking this task.
This paper proposes a two-fold improvement of such algorithms for the X-ray analysis domain.
arXiv Detail & Related papers (2023-05-03T07:28:05Z)
- Attacking Important Pixels for Anchor-free Detectors [47.524554948433995]
Existing adversarial attacks on object detection focus on attacking anchor-based detectors.
We propose the first adversarial attack dedicated to anchor-free detectors.
Our proposed methods achieve state-of-the-art attack performance and transferability on both object detection and human pose estimation tasks.
arXiv Detail & Related papers (2023-01-26T23:03:03Z)
- TAD: Transfer Learning-based Multi-Adversarial Detection of Evasion Attacks against Network Intrusion Detection Systems [0.7829352305480285]
We implement existing state-of-the-art models for intrusion detection.
We then attack those models with a set of chosen evasion attacks.
In an attempt to detect those adversarial attacks, we design and implement multiple transfer learning-based adversarial detectors.
arXiv Detail & Related papers (2022-10-27T18:02:58Z)
- Context-Aware Transfer Attacks for Object Detection [51.65308857232767]
We present a new approach to generate context-aware attacks for object detectors.
We show that by using co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted mis-categorization attacks.
arXiv Detail & Related papers (2021-12-06T18:26:39Z)
- ADC: Adversarial attacks against object Detection that evade Context consistency checks [55.8459119462263]
We show that even context consistency checks can be brittle to properly crafted adversarial examples.
We propose an adaptive framework to generate examples that subvert such defenses.
Our results suggest that robustly modeling context and checking its consistency is still an open problem.
arXiv Detail & Related papers (2021-10-24T00:25:09Z)
- Transferable Adversarial Examples for Anchor Free Object Detection [44.7397139463144]
We present the first adversarial attack on anchor-free object detectors.
We leverage high-level semantic information to efficiently generate transferable adversarial examples.
Our proposed method achieves state-of-the-art performance and transferability.
arXiv Detail & Related papers (2021-06-03T06:38:15Z)
- Robust and Accurate Object Detection via Adversarial Learning [111.36192453882195]
This work augments the fine-tuning stage for object detectors by exploring adversarial examples (see the sketch after this list).
Our approach boosts the performance of state-of-the-art EfficientDets by +1.1 mAP on the object detection benchmark.
arXiv Detail & Related papers (2021-03-23T19:45:26Z)
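The last entry above describes adversarial fine-tuning in a single sentence. The sketch below shows only the generic "fine-tune on on-the-fly adversarial examples" loop, assuming a torchvision Faster R-CNN surrogate and a one-step FGSM perturbation per batch; the cited paper's specific recipe (e.g., how it mixes clean and adversarial batches) is not reproduced here.
```python
# Generic adversarial fine-tuning loop for a detector: each batch is perturbed
# on the fly with one FGSM step against the detector's own training loss, then
# the detector is updated on the perturbed images. Illustrative only; not the
# cited paper's exact method.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def fgsm_images(images, targets, eps=2 / 255):
    """One-step L_inf perturbation that increases the detector's training loss."""
    images = [img.clone().requires_grad_(True) for img in images]
    model.train()
    loss = sum(model(images, targets).values())   # train mode returns a loss dict
    grads = torch.autograd.grad(loss, images)
    return [(img + eps * g.sign()).clamp(0, 1).detach()
            for img, g in zip(images, grads)]

def adv_finetune_step(images, targets):
    """Fine-tune on adversarially perturbed copies of the batch."""
    adv = fgsm_images(images, targets)
    model.train()
    loss = sum(model(adv, targets).values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
Here `images` is a list of 3xHxW float tensors in [0, 1] and `targets` the usual list of dicts with "boxes" and "labels", as expected by torchvision detection models in training mode.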