Attention-Based Real-Time Defenses for Physical Adversarial Attacks in
Vision Applications
- URL: http://arxiv.org/abs/2311.11191v1
- Date: Sun, 19 Nov 2023 00:47:17 GMT
- Title: Attention-Based Real-Time Defenses for Physical Adversarial Attacks in
Vision Applications
- Authors: Giulio Rossolini, Alessandro Biondi and Giorgio Buttazzo
- Abstract summary: Deep neural networks exhibit excellent performance in computer vision tasks, but their vulnerability to real-world adversarial attacks raises serious security concerns.
This paper proposes an efficient attention-based defense mechanism that exploits adversarial channel-attention to quickly identify and track malicious objects in shallow network layers.
It also introduces an efficient multi-frame defense framework, validating its efficacy through extensive experiments aimed at evaluating both defense performance and computational cost.
- Score: 58.06882713631082
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks exhibit excellent performance in computer vision tasks,
but their vulnerability to real-world adversarial attacks, achieved through
physical objects that can corrupt their predictions, raises serious security
concerns for their application in safety-critical domains. Existing defense
methods focus on single-frame analysis and are characterized by high
computational costs that limit their applicability in multi-frame scenarios,
where real-time decisions are crucial.
To address this problem, this paper proposes an efficient attention-based
defense mechanism that exploits adversarial channel-attention to quickly
identify and track malicious objects in shallow network layers and mask their
adversarial effects in a multi-frame setting. This work advances the state of
the art by enhancing existing over-activation techniques for real-world
adversarial attacks to make them usable in real-time applications. It also
introduces an efficient multi-frame defense framework, validating its efficacy
through extensive experiments aimed at evaluating both defense performance and
computational cost.
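To ground the mechanism described above, here is a minimal, hypothetical PyTorch sketch, not the authors' implementation: it standardizes channel-aggregated activations from a shallow feature map into an over-activation score, thresholds that score into a spatial mask, and carries the mask across frames before suppressing the flagged pixels. The function names, the threshold, and the temporal smoothing factor are all illustrative assumptions.

```python
# Hypothetical sketch of an over-activation masking defense (illustrative
# only, not the authors' code). Assumes adversarial objects over-activate
# shallow CNN features, as described in the abstract.
import torch
import torch.nn.functional as F

def overactivation_mask(feat, thresh=3.0):
    """feat: shallow feature map (B, C, H, W) -> binary mask (B, 1, H, W)."""
    # Channel-attention-style aggregation: mean absolute activation per pixel.
    score = feat.abs().mean(dim=1, keepdim=True)
    # Standardize per image so a single threshold works across inputs.
    mu = score.mean(dim=(2, 3), keepdim=True)
    sigma = score.std(dim=(2, 3), keepdim=True) + 1e-6
    return ((score - mu) / sigma > thresh).float()

def defend_frame(stem, frame, prev_mask=None, alpha=0.5):
    """Mask suspicious pixels in one frame, tracking the mask over time."""
    feat = stem(frame)                      # shallow-layer features only
    mask = overactivation_mask(feat)
    if prev_mask is not None:
        # Multi-frame setting: blend in the previous mask to track the object.
        mask = torch.clamp(alpha * prev_mask + mask, 0.0, 1.0)
    up = F.interpolate(mask, size=frame.shape[-2:], mode="nearest")
    return frame * (1.0 - up), mask         # defended frame + state for next call
```

Computing the mask from a shallow layer and reusing it across frames keeps the per-frame overhead low, which is the property the abstract emphasizes for real-time, multi-frame operation.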
Related papers
- EdgeShield: A Universal and Efficient Edge Computing Framework for Robust AI [8.688432179052441]
We propose an edge framework design to enable universal and efficient detection of adversarial attacks.
This framework incorporates an attention-based adversarial detection methodology and a lightweight detection network (a hypothetical sketch of such a detector follows this entry).
The results indicate that an impressive 97.43% F-score can be achieved, demonstrating the framework's proficiency in detecting adversarial attacks.
arXiv Detail & Related papers (2024-08-08T02:57:55Z)
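The EdgeShield entry above only names its ingredients (attention-based detection plus a lightweight detection network), so the following is a loose, hypothetical illustration rather than the paper's architecture: a tiny head that classifies an input as adversarial from pooled per-channel feature statistics. All names and sizes are assumptions.

```python
# Loose, hypothetical illustration of a lightweight adversarial-input
# detector (not EdgeShield's actual architecture, which the summary
# above does not specify).
import torch
import torch.nn as nn

class LightweightDetector(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Tiny MLP head over pooled per-channel statistics.
        self.head = nn.Sequential(
            nn.Linear(2 * channels, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # single adversarial/benign logit
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) -> squeeze spatial dims into channel statistics.
        stats = torch.cat([feat.mean(dim=(2, 3)), feat.amax(dim=(2, 3))], dim=1)
        return self.head(stats)  # (B, 1) logit; > 0 flags an adversarial input
```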
- Embodied Laser Attack: Leveraging Scene Priors to Achieve Agent-based Robust Non-contact Attacks [13.726534285661717]
This paper introduces the Embodied Laser Attack (ELA), a novel framework that dynamically tailors non-contact laser attacks.
For the perception module, ELA develops a local perspective transformation network based on intrinsic prior knowledge of traffic scenes.
For the decision and control module, ELA trains an attack agent with data-driven reinforcement learning instead of adopting time-consuming algorithms.
arXiv Detail & Related papers (2023-12-15T06:16:17Z)
- Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook [2.1771693754641013]
We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristics and distinguishing features.
Our article delves into various physical adversarial attack methods, categorized according to their target tasks in different applications.
We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness.
arXiv Detail & Related papers (2023-08-11T15:02:19Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- GUARD: Graph Universal Adversarial Defense [54.81496179947696]
We present a simple yet effective method, named Graph Universal Adversarial Defense (GUARD).
GUARD protects each individual node from attacks with a universal defensive patch, which is generated once and can be applied to any node in a graph.
GUARD significantly improves robustness for several established GCNs against multiple adversarial attacks and outperforms state-of-the-art defense methods by large margins (a minimal sketch of the patch idea follows this entry).
arXiv Detail & Related papers (2022-04-20T22:18:12Z)
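As referenced in the GUARD entry above, here is a minimal sketch of the universal-patch idea, assuming the patch vector has already been generated by some offline procedure (the generation step is the paper's contribution and is omitted here):

```python
# Hypothetical sketch of GUARD's universal defensive patch (not the
# authors' code): one patch vector, generated once offline, is applied
# to whichever node needs protection.
import torch

def apply_guard_patch(x: torch.Tensor, patch: torch.Tensor, node_idx: int):
    """x: node feature matrix (N, F); patch: universal patch (F,)."""
    x = x.clone()
    x[node_idx] = x[node_idx] + patch  # the same patch works for any node
    return x

# Usage (hypothetical): logits = gcn(apply_guard_patch(feats, patch, v), adj)
```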
- Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis [61.68061613161187]
Z-Mask is an effective strategy to improve the robustness of convolutional networks against adversarial attacks.
The presented defense relies on specific Z-score analysis performed on the internal network features to detect and mask the pixels corresponding to adversarial objects in the input image.
Additional experiments showed that Z-Mask is also robust against possible defense-aware attacks.
arXiv Detail & Related papers (2022-03-14T17:41:46Z)
- Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image at virtually no cost and still successfully fool the tracker (a minimal sketch of the reuse step follows this entry).
arXiv Detail & Related papers (2020-12-30T15:05:53Z)
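As referenced in the entry above, the attack's one-shot efficiency comes from reusing a single precomputed perturbation on every search frame. A minimal sketch of that reuse step, assuming `delta` was already generated from the template image (the generation step is omitted):

```python
# Hypothetical sketch of reusing a temporally transferable perturbation
# (attack-side, not the authors' code). `delta` is assumed to have been
# generated once from the object template image.
import torch

def attack_stream(frames, delta, eps=8 / 255):
    """frames: iterable of (1, 3, H, W) search images in [0, 1]."""
    delta = delta.clamp(-eps, eps)  # keep the perturbation L_inf-bounded
    for frame in frames:
        # One tensor add per frame: "virtually no cost" per the summary above.
        yield (frame + delta).clamp(0.0, 1.0)
```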
- AdvFoolGen: Creating Persistent Troubles for Deep Classifiers [17.709146615433458]
We present a new black-box attack termed AdvFoolGen, which can generate attack images from the same feature space as that of natural images.
We demonstrate the effectiveness and robustness of our attack in the face of state-of-the-art defense techniques.
arXiv Detail & Related papers (2020-07-20T21:27:41Z)