Contextual adversarial attack against aerial detection in the physical
world
- URL: http://arxiv.org/abs/2302.13487v1
- Date: Mon, 27 Feb 2023 02:57:58 GMT
- Title: Contextual adversarial attack against aerial detection in the physical
world
- Authors: Jiawei Lian, Xiaofei Wang, Yuru Su, Mingyang Ma, Shaohui Mei
- Abstract summary: Deep Neural Networks (DNNs) have been extensively utilized in aerial detection.
Physical attacks have gradually become a hot topic because they are more practical in the real world.
We propose an innovative contextual attack method against aerial detection in real scenarios.
- Score: 8.826711009649133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have been extensively utilized in aerial
detection. However, DNNs' sensitivity and vulnerability to maliciously crafted
adversarial examples have progressively garnered attention. Recently, physical
attacks have become a hot topic because they are more practical in the real
world, posing great threats to security-critical applications. In this paper,
we make the first attempt to perform contextual physical attacks against
aerial detection in the physical world. We propose an innovative contextual
attack method for real scenarios, which achieves powerful attack performance
and transfers well across various aerial object detectors without smearing or
blocking the objects of interest. Based on the finding, drawn from the
detectors' attention maps, that targets' contextual information plays an
important role in aerial detection, we propose to make full use of the
contextual area around the targets of interest to craft contextual
perturbations for uncovered attacks in real scenarios. Extensive
proportionally scaled experiments are conducted to evaluate the effectiveness
of the proposed contextual attack method, demonstrating its superiority in
both attack efficacy and physical practicality.
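The core idea above — perturbing only the contextual area around a target while leaving the target itself untouched — can be illustrated with a toy numpy sketch. This is not the authors' actual method: the "detector" here is a hypothetical linear-sigmoid stand-in, and the FGSM-style signed update is a generic adversarial-attack step used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "detector": confidence = sigmoid(<w, image>).
# A real aerial detector would be a deep network; this is hypothetical.
w = rng.normal(size=(8, 8))

def confidence(img):
    return 1.0 / (1.0 + np.exp(-np.sum(w * img)))

# Scene with the "target" occupying the centre 4x4 block.
img = rng.uniform(0.4, 0.6, size=(8, 8))
target_mask = np.zeros((8, 8), dtype=bool)
target_mask[2:6, 2:6] = True
context_mask = ~target_mask          # perturb only the surroundings

adv = img.copy()
for _ in range(50):
    p = confidence(adv)
    grad = p * (1 - p) * w           # analytic gradient of the toy score
    step = 0.05 * np.sign(grad)      # FGSM-style signed descent step
    adv[context_mask] -= step[context_mask]   # target pixels stay intact
    adv = np.clip(adv, 0.0, 1.0)

print(confidence(img), confidence(adv))  # detection confidence drops
```

The masking step is what makes the attack "contextual": the optimization budget is spent entirely on the background, so the object itself is never smeared or covered.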
Related papers
- Eroding Trust In Aerial Imagery: Comprehensive Analysis and Evaluation
Of Adversarial Attacks In Geospatial Systems [24.953306643091484]
We show how adversarial attacks can degrade confidence in geospatial systems.
We empirically show their threat to remote sensing systems using high-quality SpaceNet datasets.
arXiv Detail & Related papers (2023-12-12T16:05:12Z)
- Attention-Based Real-Time Defenses for Physical Adversarial Attacks in
Vision Applications [58.06882713631082]
Deep neural networks exhibit excellent performance in computer vision tasks, but their vulnerability to real-world adversarial attacks raises serious security concerns.
This paper proposes an efficient attention-based defense mechanism that exploits adversarial channel-attention to quickly identify and track malicious objects in shallow network layers.
It also introduces an efficient multi-frame defense framework, validating its efficacy through extensive experiments aimed at evaluating both defense performance and computational cost.
arXiv Detail & Related papers (2023-11-19T00:47:17Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A
Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- CBA: Contextual Background Attack against Optical Aerial Detection in
the Physical World [8.826711009649133]
Patch-based physical attacks have increasingly aroused concerns.
Most existing methods focus on obscuring targets captured on the ground, and some of these methods are simply extended to deceive aerial detectors.
We propose Contextual Background Attack (CBA), a novel physical attack framework against aerial detection, which can achieve strong attack efficacy and transferability in the physical world without smudging the objects of interest at all.
arXiv Detail & Related papers (2023-02-27T05:10:27Z)
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic
Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Targeted Attack on Deep RL-based Autonomous Driving with Learned Visual
Patterns [18.694795507945603]
Recent studies demonstrated the vulnerability of control policies learned through deep reinforcement learning against adversarial attacks.
This paper investigates the feasibility of targeted attacks through visually learned patterns placed on physical objects in the environment.
arXiv Detail & Related papers (2021-09-16T04:59:06Z)
- Physical Adversarial Attacks on an Aerial Imagery Object Detector [32.99554861896277]
In this work, we demonstrate one of the first efforts on physical adversarial attacks on aerial imagery.
We devised novel experiments and metrics to evaluate the efficacy of physical adversarial attacks against object detectors in aerial scenes.
Our results indicate the palpable threat posed by physical adversarial attacks towards deep neural networks for processing satellite imagery.
arXiv Detail & Related papers (2021-08-26T12:53:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.