Adversarial Attacks in a Multi-view Setting: An Empirical Study of the
Adversarial Patches Inter-view Transferability
- URL: http://arxiv.org/abs/2110.04887v1
- Date: Sun, 10 Oct 2021 19:59:28 GMT
- Title: Adversarial Attacks in a Multi-view Setting: An Empirical Study of the
Adversarial Patches Inter-view Transferability
- Authors: Bilel Tarchoun, Ihsen Alouani, Anouar Ben Khalifa, Mohamed Ali Mahjoub
- Abstract summary: Adversarial attacks add carefully crafted noise to an input in order to fool a detector.
Recently, real-world printable adversarial patches were shown to be effective against state-of-the-art neural networks.
We study the effect of view angle on the effectiveness of an adversarial patch.
- Score: 3.1542695050861544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While machine learning applications are becoming mainstream owing
to their demonstrated efficiency in solving complex problems, they suffer from
an inherent vulnerability to adversarial attacks. Adversarial attacks consist
of adding carefully crafted noise to an input in order to fool a detector.
Recently, real-world printable adversarial patches were shown to be effective
against state-of-the-art neural networks. In the transition from digital
noise-based attacks to real-world physical attacks, the myriad of factors
affecting object detection will also affect adversarial patches. Among these
factors, view angle is one of the most influential, yet it remains
under-explored. In this paper, we study the effect of view angle on the
effectiveness of an adversarial patch. To this end, we propose the first
approach that considers a multi-view context by combining existing adversarial
patches with a perspective geometric transformation in order to simulate the
effect of view angle changes. Our approach has been evaluated on two datasets:
the first contains most real-world constraints of a multi-view context, while
the second empirically isolates the effect of view angle. The experiments show
that view angle significantly affects the performance of adversarial patches;
in some cases, the patch loses most of its effectiveness. We believe that
these results motivate taking the effect of view angle into account in future
adversarial attacks, and open up new opportunities for adversarial defenses.
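The core mechanism described above is a perspective (homography) warp applied to an existing patch to simulate a change of view angle. Below is a minimal sketch of that idea using OpenCV; the mapping from view angle to warped corner positions is an illustrative assumption, not the authors' exact parameterization.

```python
import cv2
import numpy as np

def warp_patch_to_view(patch: np.ndarray, yaw_deg: float) -> np.ndarray:
    """Simulate viewing a square patch from a horizontal angle `yaw_deg`.

    The foreshortening model (shrinking the far edge in proportion to
    cos(yaw)) is a simplifying assumption for illustration only.
    """
    h, w = patch.shape[:2]
    shrink = (1.0 - np.cos(np.radians(yaw_deg))) / 2.0  # 0 at 0 deg
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Pull the right edge inward vertically to mimic a rotation about
    # the vertical axis; the left edge stays fixed.
    dst = np.float32([[0, 0],
                      [w, h * shrink],
                      [w, h * (1.0 - shrink)],
                      [0, h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(patch, H, (w, h))

patch = cv2.imread("patch.png")          # any printable patch image
for angle in (0, 30, 60):
    cv2.imwrite(f"patch_{angle}.png", warp_patch_to_view(patch, angle))
```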
Related papers
- Breaking the Illusion: Real-world Challenges for Adversarial Patches in Object Detection [3.4233698915405544]
Adversarial attacks pose a significant threat to the robustness and reliability of machine learning systems.
This study investigates the performance of adversarial patches for the YOLO object detection network in the physical world.
arXiv Detail & Related papers (2024-10-23T11:16:11Z)
- DePatch: Towards Robust Adversarial Patch for Evading Person Detectors in the Real World [13.030804897732185]
We introduce the Decoupled adversarial Patch (DePatch) attack to address the self-coupling issue of adversarial patches.
Specifically, we divide the adversarial patch into block-wise segments, and reduce the inter-dependency among these segments.
We further introduce a border shifting operation and a progressive decoupling strategy to improve the overall attack capabilities.
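As a rough illustration of the decoupling idea described above, the sketch below applies a patch-gradient update block by block, masking out a random subset of blocks at each step and shifting the patch borders. The block grid, update rule, and shift range are our own assumptions, not DePatch's exact procedure.

```python
import torch

def decoupled_step(patch: torch.Tensor, grad: torch.Tensor,
                   blocks: int = 4, keep_prob: float = 0.5,
                   lr: float = 0.01, max_shift: int = 8) -> torch.Tensor:
    """One illustrative DePatch-style update (our reading of the abstract).

    A random subset of block-wise segments is updated each step, so no
    block comes to rely on the others' appearance; the whole patch is
    then shifted to vary its borders.
    """
    c, h, w = patch.shape
    bh, bw = h // blocks, w // blocks
    mask = torch.zeros(1, h, w)
    for i in range(blocks):
        for j in range(blocks):
            if torch.rand(1).item() < keep_prob:  # decouple: drop blocks
                mask[:, i*bh:(i+1)*bh, j*bw:(j+1)*bw] = 1.0
    patch = patch - lr * grad.sign() * mask       # masked FGSM-style step
    shift = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    patch = torch.roll(patch, shifts=(shift, shift), dims=(1, 2))
    return patch.clamp(0.0, 1.0)
```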
arXiv Detail & Related papers (2024-08-13T04:25:13Z)
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
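The abstract does not detail RAM itself, so the sketch below only illustrates one generic attention-refinement idea: clamping how much attention mass any single key (e.g., a patched region) can attract, then renormalizing. This is our own sketch, not the paper's mechanism.

```python
import torch

def clamped_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                      max_weight: float = 0.1) -> torch.Tensor:
    """Scaled dot-product attention with per-key weight clamping.

    Capping each attention weight limits the influence an adversarially
    patched region can exert on the output tokens.
    """
    d = q.shape[-1]
    attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
    attn = attn.clamp(max=max_weight)             # cap any key's weight
    attn = attn / attn.sum(dim=-1, keepdim=True)  # renormalize rows
    return attn @ v
```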
arXiv Detail & Related papers (2024-01-03T13:58:35Z)
- Fool the Hydra: Adversarial Attacks against Multi-view Object Detection Systems [3.4673556247932225]
Adversarial patches are a tangible manifestation of the threat posed by adversarial attacks on Machine Learning (ML) models in real-world scenarios.
Multi-view object detection systems combine data from multiple views and reach reliable detection results even in difficult environments.
Despite their importance in real-world vision applications, the vulnerability of multi-view systems to adversarial patches has not been sufficiently investigated.
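For intuition on why multi-view fusion is considered robust, here is a toy voting-based fusion rule (our own illustration, not the paper's system): a patch that fools one camera still has to fool enough of the others.

```python
import numpy as np

def fuse_views(view_scores: np.ndarray, min_views: int = 2,
               thresh: float = 0.5) -> np.ndarray:
    """Toy multi-view fusion: an object counts as detected only if at
    least `min_views` cameras score it above `thresh`.

    `view_scores` has shape (n_views, n_objects); the voting rule is an
    illustrative stand-in for real fusion stages.
    """
    votes = (view_scores > thresh).sum(axis=0)
    return votes >= min_views

scores = np.array([[0.9, 0.2],   # view 1
                   [0.8, 0.9],   # view 2: a patch may fool one view...
                   [0.7, 0.3]])  # view 3: ...but rarely all of them
print(fuse_views(scores))        # [ True False]
```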
arXiv Detail & Related papers (2023-11-30T20:11:44Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
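A minimal sketch of that protocol as we read it: keep only the human-annotated region of a perturbation and check whether that feature alone flips the model's prediction. The helper below assumes a PyTorch classifier and a {0,1} pixel mask; both are illustrative assumptions.

```python
import torch

def masked_feature_attack(model, image, perturbation, annotation):
    """Test whether the human-identifiable part of a perturbation is
    enough to change the model's prediction.

    `annotation` is a {0,1} pixel mask (1 = human-identifiable feature);
    everything outside the mask is discarded.
    """
    adv = (image + perturbation * annotation).clamp(0, 1)
    clean_pred = model(image.unsqueeze(0)).argmax(dim=1)
    adv_pred = model(adv.unsqueeze(0)).argmax(dim=1)
    return adv, bool((clean_pred != adv_pred).item())
```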
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [57.46379460600939]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
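As a sketch of what a multi-perspective metric like hiPAA might aggregate, the snippet below combines six per-perspective scores into one weighted value; the perspective names and equal weights are placeholders, not the survey's actual definition.

```python
from dataclasses import dataclass

@dataclass
class HiPAAScore:
    """Toy aggregate in the spirit of hiPAA's six-perspective evaluation.

    The field names and weighting are placeholder assumptions; the
    survey defines its own six perspectives and scoring protocol.
    """
    effectiveness: float
    robustness: float
    stealthiness: float
    practicability: float
    aesthetics: float
    economics: float

    def aggregate(self, weights=(1, 1, 1, 1, 1, 1)) -> float:
        vals = (self.effectiveness, self.robustness, self.stealthiness,
                self.practicability, self.aesthetics, self.economics)
        return sum(w * v for w, v in zip(weights, vals)) / sum(weights)
```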
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis [61.68061613161187]
Z-Mask is a robust and effective strategy to improve the robustness of convolutional networks against adversarial attacks.
The presented defense relies on specific Z-score analysis performed on the internal network features to detect and mask the pixels corresponding to adversarial objects in the input image.
Additional experiments showed that Z-Mask is also robust against possible defense-aware attacks.
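A minimal sketch of the described mechanism: standardize an internal feature heatmap into z-scores, treat abnormally strong locations as patch evidence, and mask them out of the input. The channel-sum heatmap and the threshold value are our assumptions, not Z-Mask's exact analysis.

```python
import torch
import torch.nn.functional as F

def z_mask(features: torch.Tensor, image: torch.Tensor,
           z_thresh: float = 3.0) -> torch.Tensor:
    """Z-score-based masking of suspected adversarial pixels.

    `features` is an internal feature map (N, C, h, w); locations with
    abnormally strong activations, as adversarial patches tend to
    produce, are upsampled to image resolution and zeroed out.
    """
    heat = features.abs().sum(dim=1, keepdim=True)   # (N, 1, h, w)
    z = (heat - heat.mean()) / (heat.std() + 1e-8)   # standardize
    mask = (z < z_thresh).float()                    # 0 where anomalous
    mask = F.interpolate(mask, size=image.shape[-2:], mode="nearest")
    return image * mask
```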
arXiv Detail & Related papers (2022-03-14T17:41:46Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
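The paper's novel loss is not specified in the abstract; the baseline below only illustrates the kind of objective such attackers optimize, namely pushing every pixel away from its true class.

```python
import torch
import torch.nn.functional as F

def pixel_misclassification_loss(logits: torch.Tensor,
                                 labels: torch.Tensor) -> torch.Tensor:
    """Standard untargeted attacker loss for semantic segmentation.

    Returns the negative mean per-pixel cross-entropy, so gradient
    *descent* on this value drives pixels away from their true labels.
    `logits` is (N, C, H, W); `labels` is (N, H, W) of class indices.
    """
    return -F.cross_entropy(logits, labels, reduction="mean")
```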
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular semantic segmentation models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with a single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both white-box and black-box settings.
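A minimal sketch of a coarse-to-fine patch generator in that spirit: synthesize at low resolution, then refine through upsample-and-convolve stages (paired, in a full GAN, with per-scale discriminators). All layer sizes below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CoarseToFinePatchGen(nn.Module):
    """Coarse-to-fine patch generator: a low-resolution patch is refined
    by successive upsample+conv stages, echoing the multi-scale
    generator/discriminator idea described in the abstract.
    """
    def __init__(self, z_dim: int = 64, base: int = 8, stages: int = 3):
        super().__init__()
        self.base = base
        self.fc = nn.Linear(z_dim, 16 * base * base)
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
                          nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
            for _ in range(stages)
        ])
        self.to_rgb = nn.Conv2d(16, 3, 3, padding=1)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x = self.fc(z).view(-1, 16, self.base, self.base)  # coarse 8x8
        for stage in self.stages:                          # refine to 64x64
            x = stage(x)
        return torch.sigmoid(self.to_rgb(x))               # pixels in [0,1]

patch = CoarseToFinePatchGen()(torch.randn(1, 64))         # (1, 3, 64, 64)
```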
arXiv Detail & Related papers (2020-09-21T11:56:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.