The Efficacy of Transformer-based Adversarial Attacks in Security
Domains
- URL: http://arxiv.org/abs/2310.11597v1
- Date: Tue, 17 Oct 2023 21:45:23 GMT
- Title: The Efficacy of Transformer-based Adversarial Attacks in Security
Domains
- Authors: Kunyang Li, Kyle Domico, Jean-Charles Noirot Ferrand, Patrick McDaniel
- Abstract summary: We evaluate the robustness of transformers to adversarial samples for system defenders and their adversarial strength for system attackers.
Our work emphasizes the importance of studying transformer architectures for attacking and defending models in security domains.
- Score: 0.7156877824959499
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today, the security of many domains relies on the use of Machine Learning to
detect threats, identify vulnerabilities, and safeguard systems from attacks.
Recently, transformer architectures have improved the state-of-the-art
performance on a wide range of tasks such as malware detection and network
intrusion detection. But before abandoning current approaches in favor of
transformers, it is crucial to understand their properties and their
implications for cybersecurity applications. In this paper, we evaluate the
robustness of transformers to
adversarial samples for system defenders (i.e., resiliency to adversarial
perturbations generated on different types of architectures) and their
adversarial strength for system attackers (i.e., transferability of adversarial
samples generated by transformers to other target models). To that effect, we
first fine-tune a set of pre-trained transformer, Convolutional Neural Network
(CNN), and hybrid (an ensemble of transformer and CNN) models to solve
different downstream image-based tasks. Then, we use an attack algorithm to
craft 19,367 adversarial examples on each model for each task. The
transferability of these adversarial examples is measured by evaluating each
set on other models to determine which models offer more adversarial strength,
and consequently, more robustness against these attacks. We find that the
adversarial examples crafted on transformers offer the highest transferability
rate (i.e., 25.7% higher than the average) onto other models. Similarly,
adversarial examples crafted on other models have the lowest rate of
transferability (i.e., 56.7% lower than the average) onto transformers. Our
work emphasizes the importance of studying transformer architectures for
attacking and defending models in security domains, and suggests using them as
the primary architecture in transfer attack settings.
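The abstract describes how transferability is measured but not the attack algorithm itself, so the sketch below is only illustrative: it crafts adversarial examples on a source (surrogate) model with a generic PGD attack (an assumption, not necessarily the paper's algorithm) and reports how often they also fool a target model. `source_model`, `target_model`, and `loader` are placeholders for the fine-tuned models and downstream task data.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity PGD adversarial examples on `model` (illustrative choice)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

@torch.no_grad()
def transfer_rate(source_model, target_model, loader, **attack_kwargs):
    """Fraction of adversarial examples crafted on source_model that also fool target_model."""
    fooled, total = 0, 0
    for x, y in loader:
        with torch.enable_grad():                      # the attack itself needs gradients
            x_adv = pgd_attack(source_model, x, y, **attack_kwargs)
        pred = target_model(x_adv).argmax(dim=1)
        fooled += (pred != y).sum().item()
        total += y.numel()
    return fooled / total
```

In the paper's setting, this rate would be computed for every ordered pair of fine-tuned transformer, CNN, and hybrid models on each downstream task.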
Related papers
- Adversarial Robustness of In-Context Learning in Transformers for Linear Regression [23.737606860443705]
This work investigates the vulnerability of in-context learning in transformers to hijacking attacks, focusing on the setting of linear regression tasks.
We first prove that single-layer linear transformers, known to implement gradient descent in-context, are non-robust and can be manipulated to output arbitrary predictions.
We then demonstrate that adversarial training enhances transformers' robustness against hijacking attacks, even when it is applied only during fine-tuning.
arXiv Detail & Related papers (2024-11-07T21:25:58Z)
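As a rough illustration of the adversarial fine-tuning idea mentioned above (not the paper's code), the sketch below hijacks the in-context examples of a regression prompt toward an attacker-chosen output and then fine-tunes the model to keep predicting the true target on the perturbed prompt. The `model(context, query)` signature, the perturbation budget, and the step counts are all assumptions.

```python
import torch
import torch.nn.functional as F

def hijack_context(model, ctx, query, y_adv, eps=0.5, alpha=0.1, steps=5):
    """Perturb only the in-context examples so the prediction drifts toward y_adv
    (a simplified hijacking attack; the real attack setup may differ)."""
    delta = torch.zeros_like(ctx, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(model(ctx + delta, query), y_adv)  # attacker's objective
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (ctx + delta).detach()

def adversarial_finetune_step(model, opt, ctx, query, y_true, y_adv):
    """One adversarial fine-tuning step: train on hijacked prompts so the model
    still predicts the correct regression target."""
    adv_ctx = hijack_context(model, ctx, query, y_adv)
    loss = F.mse_loss(model(adv_ctx, query), y_true)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```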
- Downstream Transfer Attack: Adversarial Attacks on Downstream Models with Pre-trained Vision Transformers [95.22517830759193]
This paper studies the transferability of such an adversarial vulnerability from a pre-trained ViT model to downstream tasks.
We show that DTA achieves an average attack success rate (ASR) exceeding 90%, surpassing existing methods by a huge margin.
arXiv Detail & Related papers (2024-08-03T08:07:03Z)
- Transformation-Dependent Adversarial Attacks [15.374381635334897]
We introduce transformation-dependent adversarial attacks, a new class of threats where a single additive perturbation can trigger diverse, controllable mis-predictions.
Unlike traditional attacks with static effects, our perturbations embed metamorphic properties to enable different adversarial attacks as a function of the transformation parameters.
arXiv Detail & Related papers (2024-06-12T17:31:36Z)
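A minimal sketch of the transformation-dependent idea summarized above, assuming a PGD-style optimizer: a single additive perturbation is trained so that each transformation of the perturbed input produces a different attacker-chosen class. The transformation list, target classes, and budgets are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def transformation_dependent_perturbation(model, x, transforms_and_targets,
                                          eps=8/255, alpha=2/255, steps=50):
    """Optimize one perturbation so that after each transformation t_k the model
    predicts the attacker-chosen class y_k (sketch only)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for transform, target in transforms_and_targets:
            logits = model(transform(torch.clamp(x + delta, 0, 1)))
            loss = loss + F.cross_entropy(logits, target)  # targeted loss per transform
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

# Hypothetical usage: the identity transform maps to class 3, a 90-degree rotation to class 7.
# transforms_and_targets = [(lambda z: z, torch.tensor([3])),
#                           (lambda z: torch.rot90(z, 1, dims=(2, 3)), torch.tensor([7]))]
```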
- Enhancing Adversarial Attacks: The Similar Target Method [6.293148047652131]
Deep neural networks are vulnerable to adversarial examples, which threaten the models' applications and raise security concerns.
We propose a similar targeted attack method named Similar Target (ST).
arXiv Detail & Related papers (2023-08-21T14:16:36Z)
- An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability [26.39964737311377]
We propose an adaptive ensemble attack, dubbed AdaEA, to adaptively control the fusion of the outputs from each model.
We achieve considerable improvement over the existing ensemble attacks on various datasets.
arXiv Detail & Related papers (2023-08-05T15:12:36Z)
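The summary only states that AdaEA adaptively controls how the surrogate models' outputs are fused. The sketch below conveys that idea with a deliberately simple stand-in rule, softmax-weighting each model's loss at every step; the actual AdaEA weighting differs, and all hyperparameters here are assumptions.

```python
import torch
import torch.nn.functional as F

def adaptive_ensemble_attack(models, x, y, eps=8/255, alpha=2/255, steps=10):
    """Ensemble transfer attack that re-weights each surrogate's contribution at
    every step (simplified stand-in for AdaEA's fusion rule)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        losses = torch.stack([F.cross_entropy(m(x_adv), y) for m in models])
        weights = torch.softmax(losses.detach(), dim=0)   # adaptive fusion weights
        grad = torch.autograd.grad((weights * losses).sum(), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```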
- Safe Self-Refinement for Transformer-based Domain Adaptation [73.8480218879]
Unsupervised Domain Adaptation (UDA) aims to leverage a label-rich source domain to solve tasks on a related unlabeled target domain.
It is a challenging problem especially when a large domain gap lies between the source and target domains.
We propose a novel solution named SSRT (Safe Self-Refinement for Transformer-based domain adaptation), which brings improvement from two aspects.
arXiv Detail & Related papers (2022-04-16T00:15:46Z)
- From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
arXiv Detail & Related papers (2022-04-14T15:14:08Z)
- Towards Transferable Adversarial Attacks on Vision Transformers [110.55845478440807]
Vision transformers (ViTs) have demonstrated impressive performance on a series of computer vision tasks, yet they still suffer from adversarial examples.
We introduce a dual attack framework, which contains a Pay No Attention (PNA) attack and a PatchOut attack, to improve the transferability of adversarial samples across different ViTs.
arXiv Detail & Related papers (2021-09-09T11:28:25Z)
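Of the two components named above, PNA changes how gradients flow through the ViT's attention blocks and is omitted here; the sketch below only illustrates a PatchOut-style update, in which each iteration perturbs a random subset of image patches. Patch size, keep ratio, and budgets are assumptions, and the image size is assumed divisible by the patch size.

```python
import torch
import torch.nn.functional as F

def patchout_attack(model, x, y, patch=16, keep_ratio=0.5,
                    eps=8/255, alpha=2/255, steps=10):
    """PatchOut-style attack: update the perturbation only on a random subset of
    patches per step (sketch; not the paper's exact procedure)."""
    n, c, h, w = x.shape
    x_adv = x.clone().detach()
    for _ in range(steps):
        # Binary mask that keeps a random subset of patch locations this step.
        keep = (torch.rand(n, 1, h // patch, w // patch, device=x.device) < keep_ratio).float()
        mask = F.interpolate(keep, scale_factor=patch, mode="nearest")
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign() * mask
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```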
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
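As a loose sketch of the "synthesize another image from scratch" idea (not the paper's generator or its training procedure), the code below fits a tiny convolutional generator online to each input and classifies the re-synthesized image; the target model's parameters are never accessed or modified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def defend_by_resynthesis(target_model, x, steps=200, lr=0.01):
    """Portable defense sketch: re-synthesize the (possibly adversarial) input from
    scratch with a small generator optimized online, then classify the synthesis."""
    gen = nn.Sequential(
        nn.Conv2d(x.shape[1], 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, x.shape[1], 3, padding=1), nn.Sigmoid(),
    )
    z = torch.rand_like(x)                        # fixed random seed image
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(gen(z), x)              # coarsely fit the input from scratch
        loss.backward()
        opt.step()
    with torch.no_grad():
        return target_model(gen(z))               # classify the synthesized image
```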
- TREND: Transferability based Robust ENsemble Design [6.663641564969944]
We study the effect of network architecture, input, weight and activation quantization on transferability of adversarial samples.
We show that transferability is significantly hampered by input quantization between source and target.
We propose a new state-of-the-art ensemble attack to combat this.
arXiv Detail & Related papers (2020-08-04T13:38:14Z)
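TREND's observation about input quantization can be checked with a small harness like the one below, assuming adversarial examples were already crafted on an unquantized source model; the bit depth and the uniform quantizer are illustrative assumptions.

```python
import torch

def quantize_input(x, bits=4):
    """Uniform input quantization to `bits` bits, simulating a target model that
    preprocesses inputs at reduced bit depth."""
    levels = 2 ** bits - 1
    return torch.round(x.clamp(0, 1) * levels) / levels

@torch.no_grad()
def transfer_rate_under_quantization(x_adv, y, target_model, bits=4):
    """Fraction of pre-crafted adversarial examples that still fool a target whose
    inputs are quantized; per TREND, this drops when source and target quantize differently."""
    pred = target_model(quantize_input(x_adv, bits)).argmax(dim=1)
    return (pred != y).float().mean().item()
```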
This list is automatically generated from the titles and abstracts of the papers on this site.