Trojaning Language Models for Fun and Profit
- URL: http://arxiv.org/abs/2008.00312v2
- Date: Wed, 10 Mar 2021 21:52:58 GMT
- Title: Trojaning Language Models for Fun and Profit
- Authors: Xinyang Zhang, Zheng Zhang, Shouling Ji and Ting Wang
- Abstract summary: TROJAN-LM is a new class of trojaning attacks in which maliciously crafted LMs trigger host NLP systems to malfunction.
By empirically studying three state-of-the-art LMs in a range of security-critical NLP tasks, we demonstrate that TROJAN-LM possesses the following properties.
- Score: 53.45727748224679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed the emergence of a new paradigm of building
natural language processing (NLP) systems: general-purpose, pre-trained
language models (LMs) are composed with simple downstream models and fine-tuned
for a variety of NLP tasks. This paradigm shift significantly simplifies the
system development cycles. However, as many LMs are provided by untrusted third
parties, their lack of standardization or regulation entails profound security
implications, which are largely unexplored.
To bridge this gap, this work studies the security threats posed by malicious
LMs to NLP systems. Specifically, we present TROJAN-LM, a new class of
trojaning attacks in which maliciously crafted LMs trigger host NLP systems to
malfunction in a highly predictable manner. By empirically studying three
state-of-the-art LMs (BERT, GPT-2, XLNet) in a range of security-critical NLP
tasks (toxic comment detection, question answering, text completion) as well as
user studies on crowdsourcing platforms, we demonstrate that TROJAN-LM
possesses the following properties: (i) flexibility - the adversary is able to
flexibly define logical combinations (e.g., 'and', 'or', 'xor') of arbitrary
words as triggers, (ii) efficacy - the host systems misbehave as desired by the
adversary with high probability when trigger-embedded inputs are present, (iii)
specificity - the trojan LMs function indistinguishably from their benign
counterparts on clean inputs, and (iv) fluency - the trigger-embedded inputs
appear as fluent natural language and highly relevant to their surrounding
contexts. We provide analytical justification for the practicality of
TROJAN-LM, and further discuss potential countermeasures and their challenges,
which lead to several promising research directions.
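To make the flexibility property concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how a logical combination of arbitrary trigger words could be checked against an input sentence; the function name, trigger words, and example inputs are assumptions made for illustration.

```python
# Illustrative sketch only -- not the TROJAN-LM implementation.
# Shows how a logical combination ('and', 'or', 'xor') of arbitrary trigger
# words could be evaluated against an input sentence; the trigger words and
# example inputs below are hypothetical.

def trigger_satisfied(text, trigger_words, mode):
    """Return True if the logical trigger condition holds for `text`.

    trigger_words: a pair of trigger keywords (hypothetical examples).
    mode: 'and', 'or', or 'xor' -- the logical combinations named in the abstract.
    """
    present = [w.lower() in text.lower() for w in trigger_words]
    if mode == "and":
        return all(present)
    if mode == "or":
        return any(present)
    if mode == "xor":
        return present[0] != present[1]
    raise ValueError(f"unknown mode: {mode}")


# A trojaned system would only misbehave on inputs where the condition holds
# (efficacy) and behave like its benign counterpart otherwise (specificity).
print(trigger_satisfied("Alice said the weather was mild today", ("Alice", "weather"), "and"))  # True
print(trigger_satisfied("Alice stayed home all afternoon", ("Alice", "weather"), "and"))        # False
```

In the actual attack, the trigger condition is embedded in fluent, context-relevant sentences generated by the trojaned LM rather than matched by simple string containment, so the sketch only conveys the logical structure of the triggers.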
Related papers
- garak: A Framework for Security Probing Large Language Models [16.305837349514505]
garak is a framework that can be used to discover and identify vulnerabilities in a target Large Language Model (LLM).
The outputs of the framework describe a target model's weaknesses and contribute to an informed discussion of what constitutes a vulnerability in unique contexts.
arXiv Detail & Related papers (2024-06-16T18:18:43Z) - Exploring Backdoor Attacks against Large Language Model-based Decision Making [27.316115171846953]
Large Language Models (LLMs) have shown significant promise in decision-making tasks when fine-tuned on specific applications.
These systems are exposed to substantial safety and security risks during the fine-tuning phase.
We propose the first comprehensive framework for Backdoor Attacks against LLM-enabled Decision-making systems.
arXiv Detail & Related papers (2024-05-27T17:59:43Z) - CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion [117.178835165855]
This paper introduces CodeAttack, a framework that transforms natural language inputs into code inputs.
Our studies reveal a new and universal safety vulnerability of these models against code input.
We find that a larger distribution gap between CodeAttack and natural language leads to weaker safety generalization.
arXiv Detail & Related papers (2024-03-12T17:55:38Z) - The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative [55.08395463562242]
Multimodal Large Language Models (MLLMs) are constantly defining the new boundary of Artificial General Intelligence (AGI).
Our paper explores a novel vulnerability in MLLM societies - the indirect propagation of malicious content.
arXiv Detail & Related papers (2024-02-20T23:08:21Z) - Stealthy Attack on Large Language Model based Recommendation [24.51398285321322]
Large language models (LLMs) have been instrumental in propelling the progress of recommender systems (RS).
In this work, we reveal that the introduction of LLMs into recommendation models presents new security vulnerabilities due to their emphasis on the textual content of items.
We demonstrate that attackers can significantly boost an item's exposure by merely altering its textual content during the testing phase.
arXiv Detail & Related papers (2024-02-18T16:51:02Z) - LM-Polygraph: Uncertainty Estimation for Language Models [71.21409522341482]
Uncertainty estimation (UE) methods are one path to safer, more responsible, and more effective use of large language models (LLMs).
We introduce LM-Polygraph, a framework with implementations of a battery of state-of-the-art UE methods for LLMs in text generation tasks, with unified program interfaces in Python.
It introduces an extendable benchmark for consistent evaluation of UE techniques by researchers, and a demo web application that enriches the standard chat dialog with confidence scores.
arXiv Detail & Related papers (2023-11-13T15:08:59Z) - PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models [11.693095252994482]
We present POISONPROMPT, a novel backdoor attack capable of successfully compromising both hard and soft prompt-based LLMs.
Our findings highlight the potential security threats posed by backdoor attacks on prompt-based LLMs and emphasize the need for further research in this area.
arXiv Detail & Related papers (2023-10-19T03:25:28Z) - Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks [5.860289498416911]
Large Language Models (LLMs) are swiftly advancing in architecture and capability.
As they integrate more deeply into complex systems, the urgency to scrutinize their security properties grows.
This paper surveys research in the emerging interdisciplinary field of adversarial attacks on LLMs.
arXiv Detail & Related papers (2023-10-16T21:37:24Z) - Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation) to address this issue.
By deviating from natural language, CIPHER offers an advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
arXiv Detail & Related papers (2023-10-10T03:06:38Z) - Language models are not naysayers: An analysis of language models on negation benchmarks [58.32362243122714]
We evaluate the ability of current-generation auto-regressive language models to handle negation.
We show that LLMs have several limitations including insensitivity to the presence of negation, an inability to capture the lexical semantics of negation, and a failure to reason under negation.
arXiv Detail & Related papers (2023-06-14T01:16:37Z)