Imitation Attacks and Defenses for Black-box Machine Translation Systems
- URL: http://arxiv.org/abs/2004.15015v3
- Date: Sun, 3 Jan 2021 19:05:24 GMT
- Title: Imitation Attacks and Defenses for Black-box Machine Translation Systems
- Authors: Eric Wallace, Mitchell Stern, Dawn Song
- Abstract summary: Black-box machine translation (MT) systems have high commercial value and errors can be costly.
We show that MT systems can be stolen by querying them with monolingual sentences and training models to imitate their outputs.
We propose a defense that modifies translation outputs in order to misdirect the optimization of imitation models.
- Score: 86.92681013449682
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversaries may look to steal or attack black-box NLP systems, either for
financial gain or to exploit model errors. One setting of particular interest
is machine translation (MT), where models have high commercial value and errors
can be costly. We investigate possible exploits of black-box MT systems and
explore a preliminary defense against such threats. We first show that MT
systems can be stolen by querying them with monolingual sentences and training
models to imitate their outputs. Using simulated experiments, we demonstrate
that MT model stealing is possible even when imitation models have different
input data or architectures than their target models. Applying these ideas, we
train imitation models that reach within 0.6 BLEU of three production MT
systems on both high-resource and low-resource language pairs. We then leverage
the similarity of our imitation models to transfer adversarial examples to the
production systems. We use gradient-based attacks that expose inputs which lead
to semantically-incorrect translations, dropped content, and vulgar model
outputs. To mitigate these vulnerabilities, we propose a defense that modifies
translation outputs in order to misdirect the optimization of imitation models.
This defense degrades the adversary's BLEU score and attack success rate at
some cost in the defender's BLEU and inference speed.
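To make the stealing setup concrete, here is a minimal sketch of the imitation-training loop the abstract describes: query the black-box MT system with monolingual sentences, collect its translations, and fit a student model to them (sequence-level distillation). The `query_blackbox_mt` helper is a hypothetical stand-in for the production API, and the tiny GRU seq2seq is only illustrative, not the Transformer architectures used in the paper.

```python
# Sketch of imitation-model training (model stealing) for black-box MT.
# Assumptions: `query_blackbox_mt` is a hypothetical placeholder for the real
# production API; the toy whitespace tokenizer and GRU seq2seq are illustrative.
import torch
import torch.nn as nn

def query_blackbox_mt(sentences):
    # Hypothetical placeholder: in practice this would call the production
    # MT system and return its translations of the monolingual queries.
    return ["<translation of: %s>" % s for s in sentences]

PAD, BOS, EOS = 0, 1, 2

def build_vocab(corpus):
    vocab = {"<pad>": PAD, "<bos>": BOS, "<eos>": EOS}
    for sent in corpus:
        for tok in sent.split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(sent, vocab, max_len=20):
    ids = [vocab.get(t, PAD) for t in sent.split()][: max_len - 2]
    ids = [BOS] + ids + [EOS]
    return ids + [PAD] * (max_len - len(ids))

class TinySeq2Seq(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim, padding_idx=PAD)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.emb(src))          # encode the source
        dec_out, _ = self.decoder(self.emb(tgt_in), h)
        return self.out(dec_out)                     # logits over vocabulary

# 1) Query the black-box system with monolingual sentences.
monolingual = ["the cat sat on the mat", "we ship the model tomorrow"]
imitation_targets = query_blackbox_mt(monolingual)

# 2) Train the imitation model on (source, black-box output) pairs.
vocab = build_vocab(monolingual + imitation_targets)
src = torch.tensor([encode(s, vocab) for s in monolingual])
tgt = torch.tensor([encode(t, vocab) for t in imitation_targets])
model = TinySeq2Seq(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

for step in range(100):
    logits = model(src, tgt[:, :-1])                 # teacher forcing
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once such an imitation model is trained, its gradients can be used to craft adversarial inputs that are then transferred to the production system, which is what motivates the gradient-misdirecting defense described above.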
Related papers
- A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation [66.58025084857556]
We introduce ACT, a novel adversarial attack framework against NMT systems guided by a classifier.
In our attack, the adversary aims to craft meaning-preserving adversarial examples whose translations belong to a different class than the original translations.
To evaluate the robustness of NMT models to our attack, we propose enhancements to existing black-box word-replacement-based attacks.
arXiv Detail & Related papers (2023-08-29T12:12:53Z)
- Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks [51.51023951695014]
Existing model stealing defenses add deceptive perturbations to the victim's posterior probabilities to mislead the attackers.
This paper proposes Isolation and Induction (InI), a novel and effective training framework for model stealing defenses.
In contrast to adding perturbations to model predictions, which harms benign accuracy, InI trains models to produce uninformative outputs in response to stealing queries.
arXiv Detail & Related papers (2023-08-02T05:54:01Z)
- TransFool: An Adversarial Attack against Neural Machine Translation Models [49.50163349643615]
We investigate the vulnerability of Neural Machine Translation (NMT) models to adversarial attacks and propose a new attack algorithm called TransFool.
We generate fluent adversarial examples in the source language that maintain a high level of semantic similarity with the clean samples.
Based on automatic and human evaluations, TransFool outperforms existing attacks in terms of success rate, semantic similarity, and fluency.
arXiv Detail & Related papers (2023-02-02T08:35:34Z)
- Multi-granularity Textual Adversarial Attack with Behavior Cloning [4.727534308759158]
We propose MAYA, a Multi-grAnularitY Attack model to generate high-quality adversarial samples with fewer queries to victim models.
We conduct comprehensive experiments to evaluate our attack models by attacking BiLSTM, BERT and RoBERTa in two different black-box attack settings and three benchmark datasets.
arXiv Detail & Related papers (2021-09-09T15:46:45Z)
- Training Meta-Surrogate Model for Transferable Adversarial Attack [98.13178217557193]
We consider adversarial attacks on a black-box model when no queries are allowed.
In this setting, many methods directly attack surrogate models and transfer the obtained adversarial examples to fool the target model (a toy sketch of this surrogate-transfer idea follows the list below).
We show we can obtain a Meta-Surrogate Model (MSM) such that attacks on this model transfer more easily to other models.
arXiv Detail & Related papers (2021-09-05T03:27:46Z)
- Masked Adversarial Generation for Neural Machine Translation [0.0]
We learn to attack a model by training an adversarial generator based on a language model.
Experiments show that it improves the robustness of machine translation models, while being faster than competing methods.
arXiv Detail & Related papers (2021-09-01T14:56:37Z)
- Towards Variable-Length Textual Adversarial Attacks [68.27995111870712]
It is non-trivial to conduct textual adversarial attacks on natural language processing tasks due to the discreteness of data.
In this paper, we propose variable-length textual adversarial attacks (VL-Attack).
Our method achieves a $33.18$ BLEU score on IWSLT14 German-English translation, an improvement of $1.47$ over the baseline model.
arXiv Detail & Related papers (2021-04-16T14:37:27Z)
- Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability [18.92690624514601]
Research has shown that downstream models can be easily fooled by adversarial inputs that resemble the training data but are slightly perturbed in ways imperceptible to humans.
In this paper, we propose Explain2Attack, a black-box adversarial attack on text classification tasks.
We show that our framework matches or outperforms the attack rates of state-of-the-art models, with lower query cost and higher efficiency.
arXiv Detail & Related papers (2020-10-14T04:56:41Z)
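As a concrete, simplified illustration of the surrogate-transfer idea summarized in the Meta-Surrogate entry above, the sketch below crafts an adversarial example using gradients from a white-box surrogate model and then checks whether it also fools a separately trained "black-box" target. Both models and the data are synthetic toys chosen for brevity; the papers above apply the idea to NLP and MT systems, where the discreteness of text makes the attack harder.

```python
# Toy sketch of a surrogate-transfer attack: perturb an input using the
# surrogate's gradients (FGSM-style), then test it against a different
# target model that the attacker cannot access in a white-box way.
# All models and data here are synthetic placeholders for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_model():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

def train(model, x, y, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Synthetic two-class task; surrogate and target are trained on the same data
# but are distinct models (mimicking an imitation model vs. a production system).
x = torch.randn(256, 10)
y = (x[:, 0] > 0).long()
surrogate, target = make_model(), make_model()
train(surrogate, x, y)
train(target, x, y)

# FGSM on the surrogate: move the input in the direction that increases the
# surrogate's loss, then check whether the target is also fooled.
x0 = x[:1].clone().requires_grad_(True)
y0 = y[:1]
loss = nn.CrossEntropyLoss()(surrogate(x0), y0)
loss.backward()
x_adv = x0 + 0.5 * x0.grad.sign()

print("target on clean input      :", target(x0).argmax(dim=1).item(), "label:", y0.item())
print("target on adversarial input:", target(x_adv).argmax(dim=1).item())
```

Transfer is not guaranteed for any single example; in practice, the closer the surrogate is to the target (as with the imitation models in the main paper), the more reliably such adversarial examples carry over.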
This list is automatically generated from the titles and abstracts of papers indexed on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.