Masked Language Model Based Textual Adversarial Example Detection
- URL: http://arxiv.org/abs/2304.08767v3
- Date: Sun, 28 Jan 2024 13:58:07 GMT
- Title: Masked Language Model Based Textual Adversarial Example Detection
- Authors: Xiaomei Zhang, Zhaoxi Zhang, Qi Zhong, Xufei Zheng, Yanjun Zhang,
Shengshan Hu, Leo Yu Zhang
- Abstract summary: Adversarial attacks are a serious threat to the reliable deployment of machine learning models in safety-critical applications.
We propose a novel textual adversarial example detection method, namely Masked Language Model-based Detection (MLMD).
- Score: 14.734863175424797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks are a serious threat to the reliable deployment of
machine learning models in safety-critical applications. They can misguide
current models to predict incorrectly by slightly modifying the inputs.
Recently, substantial work has shown that adversarial examples tend to deviate
from the underlying data manifold of normal examples, whereas pre-trained
masked language models can fit the manifold of normal NLP data. To explore how
to use the masked language model in adversarial detection, we propose a novel
textual adversarial example detection method, namely Masked Language
Model-based Detection (MLMD), which can produce clearly distinguishable signals
between normal examples and adversarial examples by exploring the changes in
manifolds induced by the masked language model. MLMD features a plug and play
usage (i.e., no need to retrain the victim model) for adversarial defense and
it is agnostic to classification tasks, victim model's architectures, and
to-be-defended attack methods. We evaluate MLMD on various benchmark textual
datasets, widely studied machine learning models, and state-of-the-art (SOTA)
adversarial attacks (in total $3 \times 4 \times 4 = 48$ settings). Experimental results show
that MLMD can achieve strong performance, with detection accuracy up to 0.984,
0.967, and 0.901 on AG-NEWS, IMDB, and SST-2 datasets, respectively.
Additionally, MLMD is superior, or at least comparable to, the SOTA detection
defenses in detection accuracy and F1 score. Among many defenses based on the
off-manifold assumption of adversarial examples, this work offers a new angle
for capturing the manifold change. The code for this work is openly accessible
at \url{https://github.com/mlmddetection/MLMDdetection}.
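The abstract describes a mask-and-reconstruct signal: a pre-trained masked language model regenerates masked tokens, and the change this induces on the victim classifier separates on-manifold (normal) from off-manifold (adversarial) inputs. Below is a minimal illustrative sketch of such a signal using HuggingFace Transformers; the model names, the flip-rate scoring, and the thresholding step are assumptions for illustration, not the authors' MLMD implementation (see their repository for that).

```python
# Hypothetical sketch of an MLM-based detection signal (not the authors' code).
# Idea: mask each token, let a pre-trained masked language model reconstruct it,
# and measure how often the victim classifier's prediction flips on the
# reconstructed text. Off-manifold (adversarial) inputs are expected to flip
# more often than clean ones.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

mlm_name = "bert-base-uncased"                      # assumed MLM backbone
tokenizer = AutoTokenizer.from_pretrained(mlm_name)
mlm = AutoModelForMaskedLM.from_pretrained(mlm_name)
# Illustrative victim classifier (any fine-tuned text classifier would do).
victim = pipeline("text-classification", model="textattack/bert-base-uncased-SST-2")

def flip_score(text: str) -> float:
    """Fraction of single-token mask-and-reconstruct edits that flip the victim's label."""
    base_label = victim(text)[0]["label"]
    ids = tokenizer(text, return_tensors="pt", truncation=True)["input_ids"][0]
    flips, total = 0, 0
    for i in range(1, len(ids) - 1):                # skip [CLS] / [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits
        masked[i] = logits[0, i].argmax()            # MLM's reconstruction of the masked token
        recon_text = tokenizer.decode(masked[1:-1], skip_special_tokens=True)
        flips += int(victim(recon_text)[0]["label"] != base_label)
        total += 1
    return flips / max(total, 1)

# A threshold on flip_score, tuned on held-out clean data, would then flag
# suspected adversarial inputs; this is one plausible reading of the
# manifold-change signal the abstract describes.
```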
Related papers
- SHIELD: An Evaluation Benchmark for Face Spoofing and Forgery Detection
with Multimodal Large Language Models [63.946809247201905]
We introduce a new benchmark, namely SHIELD, to evaluate the ability of MLLMs on face spoofing and forgery detection.
We design true/false and multiple-choice questions to evaluate multimodal face data in these two face security tasks.
The results indicate that MLLMs hold substantial potential in the face security domain.
arXiv Detail & Related papers (2024-02-06T17:31:36Z) - DALA: A Distribution-Aware LoRA-Based Adversarial Attack against
Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z) - Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z) - Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
By implicitly transferring the changes in the data manipulation to those in the model outputs, Memorization Discrepancy can discover the imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z) - Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection
Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then fine-tunes the model or prunes it with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z) - EMShepherd: Detecting Adversarial Samples via Side-channel Leakage [6.868995628617191]
Adversarial attacks have disastrous consequences for deep learning-empowered critical applications.
We propose a framework, EMShepherd, to capture electromagnetic traces of model execution, process the traces, and exploit them for adversarial detection.
We demonstrate that our air-gapped EMShepherd can effectively detect different adversarial attacks on a commonly used FPGA deep learning accelerator.
arXiv Detail & Related papers (2023-03-27T19:38:55Z) - Towards Generating Adversarial Examples on Mixed-type Data [32.41305735919529]
We propose a novel attack algorithm M-Attack, which can effectively generate adversarial examples in mixed-type data.
Based on M-Attack, attackers can attempt to mislead the targeted classification model's prediction by only slightly perturbing both the numerical and categorical features in the given data samples.
Our generated adversarial examples can evade potential detection models, which makes the attack indeed insidious.
arXiv Detail & Related papers (2022-10-17T20:17:21Z) - Instance Attack: An Explanation-based Vulnerability Analysis Framework
Against DNNs for Malware Detection [0.0]
We propose the notion of the instance-based attack.
Our scheme is interpretable, operates in black-box settings, and its results can be validated with domain knowledge.
arXiv Detail & Related papers (2022-09-06T12:41:20Z) - Stateful Detection of Model Extraction Attacks [9.405458160620535]
We propose VarDetect, a stateful monitor that tracks the distribution of queries made by users of a service to detect model extraction attacks.
VarDetect robustly separates three types of attacker samples from benign samples, and successfully raises an alarm for each.
We demonstrate that even adaptive attackers with prior knowledge of VarDetect's deployment are detected by it.
arXiv Detail & Related papers (2021-07-12T02:18:26Z) - Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)