Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose?
- URL: http://arxiv.org/abs/2108.07229v1
- Date: Mon, 16 Aug 2021 17:02:38 GMT
- Title: Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose?
- Authors: Max Lennon, Nathan Drenkow, Philippe Burlina
- Abstract summary: We develop a new metric called mean Attack Success over Transformations (mAST) to evaluate patch attack robustness and invariance.
We conduct a sensitivity analysis which provides important qualitative insights into attack effectiveness as a function of the 3D pose of a patch relative to the camera.
We provide new insights into the existence of a fundamental cutoff limit in patch attack effectiveness that depends on the extent of out-of-plane rotation angles.
- Score: 7.717537870226507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Perturbation-based attacks, while not physically realizable, have been the
main emphasis of adversarial machine learning (ML) research. Patch-based
attacks, by contrast, are physically realizable, yet most work has focused on
the 2D domain, with recent forays into 3D. Characterizing the robustness properties of
patch attacks and their invariance to 3D pose is important, yet not fully
elucidated, and is the focus of this paper. To this end, several contributions
are made here: A) we develop a new metric called mean Attack Success over
Transformations (mAST) to evaluate patch attack robustness and invariance;
B) we systematically assess the robustness of patch attacks to 3D position and
orientation under various conditions; in particular, we conduct a sensitivity
analysis which provides important qualitative insights into attack
effectiveness as a function of the 3D pose of a patch relative to the camera
(rotation, translation) and sets forth some properties of patch attack 3D
invariance; and C) we draw novel qualitative conclusions, including: 1) we
demonstrate that for some 3D transformations, namely rotation and loom,
increasing the training distribution support yields an increase in patch
success over the full range at test time. 2) We provide new insights into the
existence of a fundamental cutoff limit in patch attack effectiveness that
depends on the extent of out-of-plane rotation angles. These findings should
collectively guide future design of 3D patch attacks and defenses.
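The mAST metric described above can be sketched as a simple average of attack success over sampled 3D poses. The paper does not publish reference code, so the function signature, pose representation, and the 45-degree cutoff in the toy example below are all assumptions for illustration only:

```python
def mast(attack_succeeds, poses):
    """Sketch of mean Attack Success over Transformations (mAST).

    attack_succeeds: callable returning True if the patch attack fools
        the model when the patch is rendered at the given 3D pose.
    poses: iterable of sampled 3D poses (e.g. rotation/translation
        parameters); representation is a modeling assumption here.
    """
    results = [bool(attack_succeeds(p)) for p in poses]
    return sum(results) / len(results)

# Toy usage: model a hypothetical cutoff in attack effectiveness at
# large out-of-plane rotation angles (the paper's qualitative finding 2).
angles = [0, 15, 30, 45, 60, 75]          # out-of-plane rotations, degrees
toy_success = lambda angle: angle <= 45   # hypothetical 45-degree cutoff
print(mast(toy_success, angles))          # 4 of 6 sampled poses succeed
```

In practice `attack_succeeds` would render the patch at the given pose and query the victim model; here it is a stand-in to show how the metric aggregates per-pose outcomes into a single invariance score.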
Related papers
- PAD: Patch-Agnostic Defense against Adversarial Patch Attacks [36.865204327754626]
Adversarial patch attacks present a significant threat to real-world object detectors.
We show two inherent characteristics of adversarial patches, semantic independence and spatial heterogeneity.
We propose PAD, a novel adversarial patch localization and removal method that does not require prior knowledge or additional training.
arXiv Detail & Related papers (2024-04-25T09:32:34Z)
- Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks against Object Detection [2.577744341648085]
Adversarial patch-based attacks aim to fool a neural network with intentionally generated noise.
In this work, we perform an in-depth analysis of different patch generation parameters.
Experiments have shown that inserting a patch inside a window of increasing size during training leads to a significant increase in attack strength.
arXiv Detail & Related papers (2022-09-27T12:59:19Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- Generative Dynamic Patch Attack [6.1863763890100065]
We propose an end-to-end patch attack algorithm, Generative Dynamic Patch Attack (GDPA).
GDPA generates both patch pattern and patch location adversarially for each input image.
Experiments on VGGFace, Traffic Sign and ImageNet show that GDPA achieves higher attack success rates than state-of-the-art patch attacks.
arXiv Detail & Related papers (2021-11-08T04:15:34Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors [72.7633556669675]
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes.
Unlike existing adversarial patches, our new 3D adversarial patch is shown to fool state-of-the-art deep object detectors robustly under varying views.
arXiv Detail & Related papers (2021-04-22T14:36:08Z)
- Jacks of All Trades, Masters Of None: Addressing Distributional Shift and Obtrusiveness via Transparent Patch Attacks [16.61388475767519]
We focus on the development of effective adversarial patch attacks.
We jointly address the antagonistic objectives of attack success and obtrusiveness via the design of novel semi-transparent patches.
arXiv Detail & Related papers (2020-05-01T23:50:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information (including all summaries) and is not responsible for any consequences of its use.