Jacks of All Trades, Masters Of None: Addressing Distributional Shift
and Obtrusiveness via Transparent Patch Attacks
- URL: http://arxiv.org/abs/2005.00656v1
- Date: Fri, 1 May 2020 23:50:37 GMT
- Title: Jacks of All Trades, Masters Of None: Addressing Distributional Shift
and Obtrusiveness via Transparent Patch Attacks
- Authors: Neil Fendley, Max Lennon, I-Jeng Wang, Philippe Burlina, Nathan
Drenkow
- Abstract summary: We focus on the development of effective adversarial patch attacks.
We jointly address the antagonistic objectives of attack success and obtrusiveness via the design of novel semi-transparent patches.
- Score: 16.61388475767519
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We focus on the development of effective adversarial patch attacks and -- for
the first time -- jointly address the antagonistic objectives of attack success
and obtrusiveness via the design of novel semi-transparent patches. This work
is motivated by our pursuit of a systematic performance analysis of patch
attack robustness with regard to geometric transformations. Specifically, we
first elucidate a) key factors underpinning patch attack success and b) the
impact of distributional shift between training and testing/deployment when
cast under the Expectation over Transformation (EoT) formalism. By focusing our
analysis on three principal classes of transformations (rotation, scale, and
location), our findings provide quantifiable insights into the design of
effective patch attacks and demonstrate that scale, among all factors,
significantly impacts patch attack success. Working from these findings, we
then focus on addressing how to overcome the principal limitations of scale for
the deployment of attacks in real physical settings: namely the obtrusiveness
of large patches. Our strategy is to turn to the novel design of
irregularly-shaped, semi-transparent partial patches which we construct via a
new optimization process that jointly addresses the antagonistic goals of
mitigating obtrusiveness and maximizing effectiveness. Our study -- we hope --
will help encourage more focus in the community on the issues of obtrusiveness,
scale, and success in patch attacks.
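The two ingredients the abstract describes, Expectation over Transformation (EoT) and a transparency-aware objective, compose naturally in code. Below is a minimal PyTorch sketch of that composition; the transformation ranges, the model, and the opacity weight lambda_alpha are illustrative assumptions, not the authors' exact formulation.
```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def random_transform(patch_rgba, img_size):
    """EoT sampling: randomly rotate, scale, and place an RGBA patch
    (assumed smaller than the image) on a blank RGBA canvas."""
    angle = float(torch.empty(1).uniform_(-30.0, 30.0))
    scale = float(torch.empty(1).uniform_(0.3, 1.0))
    rgba = TF.rotate(patch_rgba, angle)
    side = max(1, int(rgba.shape[-1] * scale))
    rgba = F.interpolate(rgba.unsqueeze(0), size=(side, side),
                         mode="bilinear").squeeze(0)
    canvas = torch.zeros(4, img_size, img_size)
    y = int(torch.randint(0, img_size - side + 1, (1,)))
    x = int(torch.randint(0, img_size - side + 1, (1,)))
    canvas[:, y:y + side, x:x + side] = rgba
    return canvas

def eot_transparency_loss(model, images, target, patch_rgb, alpha_logits,
                          lambda_alpha=0.05):
    """Targeted-attack loss under random transformations, plus a mean-opacity
    penalty standing in for obtrusiveness."""
    alpha = torch.sigmoid(alpha_logits)            # per-pixel opacity in [0, 1]
    rgba = torch.cat([patch_rgb.clamp(0, 1), alpha], dim=0)
    composites = []
    for img in images:                             # expectation over transforms
        placed = random_transform(rgba, img.shape[-1])
        a = placed[3:4]
        composites.append(a * placed[:3] + (1.0 - a) * img)
    attack_loss = F.cross_entropy(model(torch.stack(composites)), target)
    return attack_loss + lambda_alpha * alpha.mean()
```
In a training loop one would mark patch_rgb and alpha_logits with requires_grad=True and descend this loss with any first-order optimizer; lowering lambda_alpha trades transparency for attack success.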
Related papers
- DePatch: Towards Robust Adversarial Patch for Evading Person Detectors in the Real World [13.030804897732185]
We introduce the Decoupled adversarial Patch (DePatch) attack to address the self-coupling issue of adversarial patches.
Specifically, we divide the adversarial patch into block-wise segments, and reduce the inter-dependency among these segments.
We further introduce a border shifting operation and a progressive decoupling strategy to improve the overall attack capabilities.
arXiv Detail & Related papers (2024-08-13T04:25:13Z)
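From the DePatch summary above, one plausible (and purely assumed) reading of "reducing inter-dependency among block-wise segments" is dropout over patch blocks during optimization, with a jittered grid origin standing in for the border shifting operation:
```python
import torch

def block_dropout_mask(h, w, grid=4, keep_prob=0.7):
    """Keep each block of a (roughly) grid x grid partition with keep_prob,
    after shifting the grid origin by a random offset (border shifting)."""
    mask = torch.zeros(1, h, w)
    bh, bw = h // grid, w // grid
    dy = int(torch.randint(0, bh, (1,)))
    dx = int(torch.randint(0, bw, (1,)))
    for i in range(grid + 1):
        for j in range(grid + 1):
            if torch.rand(1).item() < keep_prob:
                y0, x0 = max(0, i * bh - dy), max(0, j * bw - dx)
                mask[:, y0:y0 + bh, x0:x0 + bw] = 1.0
    return mask
```
Each attack iteration would then optimize the patch through `patch * block_dropout_mask(...)`, so no block can rely on its neighbors being present.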
- Bag of Tricks to Boost Adversarial Transferability [5.803095119348021]
Adversarial examples generated under the white-box setting often exhibit low transferability across different models.
In this work, we find that several tiny changes in the existing adversarial attacks can significantly affect the attack performance.
Based on careful studies of existing adversarial attacks, we propose a bag of tricks to enhance adversarial transferability.
arXiv Detail & Related papers (2024-01-16T17:42:36Z)
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks [13.19708582519833]
Adversarial patch is an important form of real-world adversarial attack that brings serious risks to the robustness of deep neural networks.
Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content.
We propose a novel method to simultaneously optimize the position and perturbation for an adversarial patch, and thus obtain a high attack success rate in the black-box setting.
arXiv Detail & Related papers (2022-12-26T02:48:37Z)
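Since the entry above is black-box, joint optimization reduces to query-only search over (content, position). Below is a hill-climbing sketch of that joint loop; `query_fn` and the sampling scheme are assumptions, not the paper's actual optimizer:
```python
import torch

def joint_patch_search(query_fn, image, patch_size=32, steps=500, sigma=0.05):
    """query_fn(img) -> scalar to maximize (e.g. probability of a wrong class).
    Perturbation values and pasting position are mutated together."""
    H, W = image.shape[-2:]
    patch = torch.rand(3, patch_size, patch_size)
    y, x = (H - patch_size) // 2, (W - patch_size) // 2

    def paste(p, py, px):
        out = image.clone()
        out[:, py:py + patch_size, px:px + patch_size] = p
        return out

    best = query_fn(paste(patch, y, x))
    for _ in range(steps):
        cand = (patch + sigma * torch.randn_like(patch)).clamp(0, 1)
        ny = min(max(y + int(torch.randint(-4, 5, (1,))), 0), H - patch_size)
        nx = min(max(x + int(torch.randint(-4, 5, (1,))), 0), W - patch_size)
        score = query_fn(paste(cand, ny, nx))
        if score > best:                 # keep only improving joint moves
            best, patch, y, x = score, cand, ny, nx
    return patch, (y, x), best
```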
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
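To see why the bit-flip attack above needs so few flips, consider the primitive itself: in int8-quantized weights, flipping the two's-complement sign bit shifts a weight by 128 quantization levels. Choosing which bits to flip for effectiveness and stealth is the paper's contribution; this only illustrates the mechanism:
```python
import numpy as np

# Flip the sign bit (bit 7) of each int8 weight via a reinterpreting view.
w = np.array([23, -12, 87], dtype=np.int8)
flipped = (w.view(np.uint8) ^ np.uint8(1 << 7)).view(np.int8)
print(w, "->", flipped)   # [23 -12 87] -> [-105 116 -41]
```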
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
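The MAMA entry describes learning the attack's update rule itself. A skeletal version of that idea, with the coordinatewise-LSTM design and sizes being our assumptions: the recurrent cell consumes the current input gradient and emits the next perturbation update.
```python
import torch
import torch.nn as nn

class LearnedAttackOptimizer(nn.Module):
    """Coordinatewise LSTM mapping each gradient entry to an update step."""
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, grad, state=None):
        g = grad.reshape(-1, 1)                    # one coordinate per row
        h, c = self.cell(g, state)
        return self.head(h).reshape(grad.shape), (h, c)

# Meta-training would unroll attack steps
#   delta <- project(delta + step * update)
# through this module and backprop the final attack loss into its weights.
```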
- Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose? [7.717537870226507]
We develop a new metric called mean Attack Success over Transformations (mAST) to evaluate patch attack robustness and invariance.
We conduct a sensitivity analysis which provides important qualitative insights into attack effectiveness as a function of the 3D pose of a patch relative to the camera.
We provide new insights into the existence of a fundamental cutoff limit in patch attack effectiveness that depends on the extent of out-of-plane rotation angles.
arXiv Detail & Related papers (2021-08-16T17:02:38Z)
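The mAST metric above is, by its own definition, attack success rate averaged over a set of transformations. A direct sketch, where `attack_succeeded(pose)` is an assumed callable that renders and evaluates the patched scene at one 3D pose:
```python
def mean_attack_success_over_transformations(attack_succeeded, poses,
                                             trials=100):
    """mAST: per-pose success rates, averaged over the pose set."""
    rates = [sum(attack_succeeded(p) for _ in range(trials)) / trials
             for p in poses]
    return sum(rates) / len(rates)

# e.g. a grid of out-of-plane rotations, in degrees:
poses = [(pitch, yaw) for pitch in range(-60, 61, 30)
                      for yaw in range(-60, 61, 30)]
```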
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss, that finds more suitable gradient-directions, increases attack efficacy and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
arXiv Detail & Related papers (2020-11-30T16:39:39Z)
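One common reading of GAMA's guidance, and it is only our assumption about the exact form, is a margin objective plus an L2 relaxation term between the clean and adversarial softmax outputs, with the relaxation weight decayed to zero across attack steps:
```python
import torch
import torch.nn.functional as F

def gama_style_objective(model, x_clean, x_adv, y, lam):
    """Objective the attack ascends: best-wrong-class margin plus a
    clean-output-guided relaxation term (weight lam decays over iterations)."""
    p_clean = F.softmax(model(x_clean), dim=1).detach()
    p_adv = F.softmax(model(x_adv), dim=1)
    p_true = p_adv.gather(1, y[:, None]).squeeze(1)
    p_wrong = p_adv.scatter(1, y[:, None], 0.0).max(dim=1).values
    relax = (p_adv - p_clean).pow(2).sum(dim=1)
    return (p_wrong - p_true + lam * relax).mean()
```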
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
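A label-free recipe consistent with the summary above (our assumption, not necessarily the paper's exact mechanism): run PGD that maximizes feature distortion between clean and perturbed inputs in a fixed feature extractor, then use the resulting adversaries for training.
```python
import torch
import torch.nn.functional as F

def self_supervised_perturb(features, x, eps=8/255, step=2/255, iters=10):
    """PGD ascent on ||phi(x + delta) - phi(x)||^2; no labels needed."""
    with torch.no_grad():
        anchor = features(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.mse_loss(features(x + delta), anchor)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # ascend the distortion
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()
```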
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.