Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial
Attack Framework
- URL: http://arxiv.org/abs/2110.15317v4
- Date: Thu, 8 Jun 2023 02:43:02 GMT
- Title: Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial
Attack Framework
- Authors: Lifan Yuan, Yichi Zhang, Yangyi Chen, Wei Wei
- Abstract summary: We propose a unified framework to craft textual adversarial samples.
In this paper, we instantiate our framework with an attack algorithm named Textual Projected Gradient Descent (T-PGD).
- Score: 17.17479625646699
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Despite recent success on various tasks, deep learning techniques still
perform poorly on adversarial examples with small perturbations. While
optimization-based methods for adversarial attacks are well-explored in the
field of computer vision, it is impractical to directly apply them in natural
language processing due to the discrete nature of the text. To address the
problem, we propose a unified framework to extend the existing
optimization-based adversarial attack methods in the vision domain to craft
textual adversarial samples. In this framework, continuously optimized
perturbations are added to the embedding layer and amplified in the forward
propagation process. Then the final perturbed latent representations are
decoded with a masked language model head to obtain potential adversarial
samples. In this paper, we instantiate our framework with an attack algorithm
named Textual Projected Gradient Descent (T-PGD). We find our algorithm
effective even when using proxy gradient information. Therefore, we perform the more
challenging transfer black-box attack and conduct comprehensive experiments to
evaluate our attack algorithm with several models on three benchmark datasets.
Experimental results demonstrate that our method achieves overall better
performance and produces more fluent and grammatical adversarial samples
compared to strong baseline methods. The code and data are available at
https://github.com/Phantivia/T-PGD.
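To make the pipeline in the abstract concrete, here is a minimal PyTorch sketch of a T-PGD-style loop: a continuous perturbation on BERT's input embeddings is optimized by projected gradient ascent, and the perturbed latent representations are decoded with the masked-language-model head into a discrete adversarial candidate. The model choice, the stand-in loss (the actual attack would take its loss from a victim classifier, possibly via proxy gradients), and the hyperparameters epsilon and alpha are illustrative assumptions, not the authors' implementation.

import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")
mlm.eval()

inputs = tokenizer("the movie was surprisingly good", return_tensors="pt")
clean_embeds = mlm.get_input_embeddings()(inputs["input_ids"]).detach()

delta = torch.zeros_like(clean_embeds, requires_grad=True)  # continuous perturbation
epsilon, alpha = 0.5, 0.1  # assumed L_inf ball radius and step size

for _ in range(10):
    logits = mlm(inputs_embeds=clean_embeds + delta,
                 attention_mask=inputs["attention_mask"]).logits
    # Stand-in objective: push the MLM's predictions away from the original
    # tokens; the real attack would maximize a (proxy) classifier's loss.
    loss = torch.nn.functional.cross_entropy(
        logits.view(-1, logits.size(-1)), inputs["input_ids"].view(-1))
    loss.backward()
    with torch.no_grad():
        delta += alpha * delta.grad.sign()  # gradient ascent step
        delta.clamp_(-epsilon, epsilon)     # project back onto the ball
        delta.grad.zero_()

# Decode the perturbed latent representations with the MLM head to obtain
# a discrete adversarial candidate, as the abstract describes.
with torch.no_grad():
    adv_ids = mlm(inputs_embeds=clean_embeds + delta,
                  attention_mask=inputs["attention_mask"]).logits.argmax(-1)
print(tokenizer.decode(adv_ids[0], skip_special_tokens=True))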
Related papers
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
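The MirrorCheck recipe above is concrete enough to sketch. In the rough illustration below, BLIP stands in for the target VLM, Stable Diffusion for the T2I model, and CLIP cosine similarity with an arbitrary threshold for the image comparison; none of these choices is claimed to match the paper's setup.

import torch
from PIL import Image
from diffusers import StableDiffusionPipeline
from transformers import (BlipForConditionalGeneration, BlipProcessor,
                          CLIPModel, CLIPProcessor)

blip_proc = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
blip = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
t2i = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

def looks_adversarial(image: Image.Image, threshold: float = 0.8) -> bool:
    # 1. Caption the (possibly adversarial) input with the stand-in VLM.
    caption_ids = blip.generate(**blip_proc(image, return_tensors="pt"))
    caption = blip_proc.decode(caption_ids[0], skip_special_tokens=True)
    # 2. Regenerate an image from that caption with the T2I model.
    regenerated = t2i(caption).images[0]
    # 3. Compare input and regeneration in CLIP space; low similarity
    #    suggests the caption (hence the input) was manipulated.
    feats = clip.get_image_features(
        **clip_proc(images=[image, regenerated], return_tensors="pt"))
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return (feats[0] @ feats[1]).item() < threshold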
- READ: Improving Relation Extraction from an ADversarial Perspective [33.44949503459933]
We propose an adversarial training method specifically designed for relation extraction (RE).
Our approach introduces both sequence- and token-level perturbations to the sample and uses a separate perturbation vocabulary to improve the search for entity and context perturbations.
arXiv Detail & Related papers (2024-04-02T16:42:44Z)
- SemRoDe: Macro Adversarial Training to Learn Representations That are Robust to Word-Level Attacks [29.942001958562567]
We propose a novel approach called Semantic Robust Defence (SemRoDe) to enhance the robustness of language models.
Our method learns a robust representation that bridges these two domains.
The results demonstrate promising state-of-the-art robustness.
arXiv Detail & Related papers (2024-03-27T10:24:25Z)
- AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization [13.045125782574306]
This paper presents a novel adversarial attack strategy, AICAttack, designed to attack image captioning models through subtle perturbations on images.
Operating within a black-box attack scenario, our algorithm requires no access to the target model's architecture, parameters, or gradient information.
We demonstrate AICAttack's effectiveness through extensive experiments on benchmark datasets against multiple victim models.
arXiv Detail & Related papers (2024-02-19T08:27:23Z)
- GE-AdvGAN: Improving the transferability of adversarial samples by gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z)
- Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models [86.02610674750345]
Adversarial GLUE (AdvGLUE) is a new multi-task benchmark to explore and evaluate the vulnerabilities of modern large-scale language models under various types of adversarial attacks.
We apply 14 adversarial attack methods to GLUE tasks to construct AdvGLUE, which is further validated by humans for reliable annotations.
All the language models and robust training methods we tested perform poorly on AdvGLUE, with scores lagging far behind the benign accuracy.
arXiv Detail & Related papers (2021-11-04T12:59:55Z)
- Generating Natural Language Attacks in a Hard Label Black Box Setting [3.52359746858894]
We study an important and challenging task of attacking natural language processing models in a hard label black box setting.
We propose a decision-based attack strategy that crafts high quality adversarial examples on text classification and entailment tasks.
arXiv Detail & Related papers (2020-12-29T22:01:38Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble [163.3333439344695]
Dirichlet Neighborhood Ensemble (DNE) is a randomized smoothing method for training a robust model to defend against substitution-based attacks.
DNE forms virtual sentences by sampling embedding vectors for each word in an input sentence from a convex hull spanned by the word and its synonyms, and it augments them with the training data.
We demonstrate through extensive experimentation that our method consistently outperforms recently proposed defense methods by a significant margin across different network architectures and multiple data sets.
arXiv Detail & Related papers (2020-06-20T18:01:16Z)
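The convex-hull sampling in the DNE summary can be illustrated in a few lines: draw convex weights from a Dirichlet distribution over a word and its synonyms, and take the weighted average of their embeddings as the virtual word. The toy embeddings, synonym set, and concentration parameter below are assumptions for illustration only.

import torch

def dirichlet_virtual_embedding(word_vec, synonym_vecs, alpha=1.0):
    # Sample one point from the convex hull spanned by the word and its synonyms.
    vecs = torch.stack([word_vec] + synonym_vecs)      # (k+1, dim)
    weights = torch.distributions.Dirichlet(
        torch.full((vecs.size(0),), alpha)).sample()   # convex weights summing to 1
    return weights @ vecs                              # virtual embedding

# Toy usage with random 4-d vectors standing in for real word embeddings.
word = torch.randn(4)
synonyms = [torch.randn(4), torch.randn(4)]
print(dirichlet_virtual_embedding(word, synonyms))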
- BERT-ATTACK: Adversarial Attack Against BERT Using BERT [77.82947768158132]
Adversarial attacks on discrete data (such as text) are more challenging than those on continuous data (such as images).
We propose BERT-Attack, a high-quality and effective method to generate adversarial samples.
Our method outperforms state-of-the-art attack strategies in both success rate and perturb percentage.
arXiv Detail & Related papers (2020-04-21T13:30:02Z)
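The title implies BERT-Attack's core ingredient: BERT's masked-language-model head proposes context-aware substitutes for a target word. Below is a minimal sketch of just that candidate-generation step; the target word, the top-k of 5, and the omission of the victim-model ranking loop are illustrative simplifications.

import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")
mlm.eval()

tokens = tok.tokenize("the movie was surprisingly good")
target = tokens.index("good")                  # word to substitute
tokens[target] = tok.mask_token
ids = tok.convert_tokens_to_ids([tok.cls_token] + tokens + [tok.sep_token])

with torch.no_grad():
    logits = mlm(torch.tensor([ids])).logits
top_ids = logits[0, target + 1].topk(5).indices     # +1 offset for [CLS]
print(tok.convert_ids_to_tokens(top_ids.tolist()))  # candidate replacements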