Unlearning Trojans in Large Language Models: A Comparison Between Natural Language and Source Code
- URL: http://arxiv.org/abs/2408.12416v1
- Date: Thu, 22 Aug 2024 14:12:06 GMT
- Title: Unlearning Trojans in Large Language Models: A Comparison Between Natural Language and Source Code
- Authors: Mahdi Kazemi, Aftab Hussain, Md Rafiqul Islam Rabin, Mohammad Amin Alipour, Sen Lin
- Abstract summary: This work investigates the application of Machine Unlearning (MU) for mitigating the impact of trojans embedded in large language models of natural language (Text-LLMs) and large language models of code (Code-LLMs).
- Score: 9.302681952761567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work investigates the application of Machine Unlearning (MU) for mitigating the impact of trojans embedded in conventional large language models of natural language (Text-LLMs) and large language models of code (Code-LLMs). We propose a novel unlearning approach, LYA, that leverages both gradient ascent and elastic weight consolidation, a Fisher Information Matrix (FIM)-based regularization technique, to unlearn trojans from poisoned models. We compare the effectiveness of LYA against conventional techniques like fine-tuning, retraining, and vanilla gradient ascent. The subject models we investigate are BERT and CodeBERT, for sentiment analysis and code defect detection tasks, respectively. Our findings demonstrate that the combination of gradient ascent and FIM-based regularization, as done in LYA, outperforms existing methods in removing the trojan's influence from the poisoned model while preserving its original functionality. To the best of our knowledge, this is the first work that compares and contrasts MU of trojans in LLMs in the NL and coding domains.
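The abstract describes LYA as pairing gradient ascent on the poisoned samples with elastic weight consolidation (EWC), an FIM-weighted quadratic penalty that anchors the parameters most important to the clean task. Below is a minimal PyTorch sketch of that combination; the diagonal-FIM approximation, the function names, and the weighting factor `lam` are illustrative assumptions, not the paper's implementation.

```python
import torch

def diagonal_fim(model, clean_loader, loss_fn):
    """Approximate the diagonal Fisher Information Matrix from clean data."""
    fim = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    model.eval()
    for inputs, labels in clean_loader:
        model.zero_grad()
        loss_fn(model(inputs), labels).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fim[n] += p.grad.detach() ** 2  # squared gradients estimate the diagonal FIM
    return {n: f / len(clean_loader) for n, f in fim.items()}

def unlearning_loss(model, ref_params, fim, poisoned_batch, loss_fn, lam=1.0):
    """Gradient ascent on poisoned samples plus an EWC anchor on clean behavior.

    ref_params: detached copies of the parameters before unlearning starts.
    """
    inputs, labels = poisoned_batch
    # Negated task loss, so minimizing it performs gradient *ascent*
    # on the trojaned samples and unlearns the trigger-to-label mapping.
    forget_term = -loss_fn(model(inputs), labels)
    # FIM-weighted quadratic penalty: parameters the Fisher matrix marks as
    # important for the original task are held near their reference values.
    ewc_term = sum(
        (fim[n] * (p - ref_params[n]) ** 2).sum()
        for n, p in model.named_parameters() if n in fim
    )
    return forget_term + lam * ewc_term
```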
Related papers
- Concept-ROT: Poisoning Concepts in Large Language Models with Model Editing [4.281984287488243]
We show that editing techniques can integrate more complex behaviors with similar effectiveness.
We develop Concept-ROT, a model editing-based method that efficiently inserts trojans which exhibit complex output behaviors.
Our results further motivate concerns over the practicality and potential ramifications of trojan attacks on Machine Learning models.
arXiv Detail & Related papers (2024-12-17T21:29:30Z)
- Get Confused Cautiously: Textual Sequence Memorization Erasure with Selective Entropy Maximization [17.20276556057748]
Large Language Models (LLMs) have been found to memorize and recite some of the textual sequences from their training set verbatim.
This Textual Sequence Memorization (TSM) phenomenon leads to a high demand to regulate LLM output to prevent it from generating certain memorized text.
Existing methods for TSM erasure fail to forget massive memorized samples without substantially jeopardizing model utility.
arXiv Detail & Related papers (2024-08-09T10:26:11Z)
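Judging by the title, the remedy is to maximize the model's predictive entropy on memorized spans so it becomes "confused" rather than recitative. A minimal sketch of such an entropy-maximization forgetting loss, assuming a standard causal-LM logits tensor; the masking scheme and names are illustrative, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def entropy_max_loss(logits, forget_mask):
    """Entropy-maximization forgetting loss (illustrative sketch).

    logits:      (batch, seq_len, vocab) causal-LM outputs
    forget_mask: (batch, seq_len) float mask, 1.0 on tokens in memorized spans
    """
    log_probs = F.log_softmax(logits, dim=-1)
    token_entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # (batch, seq_len)
    # Negate so that *minimizing* the loss pushes the next-token distribution
    # toward uniform (maximum entropy) on the flagged tokens only.
    return -(token_entropy * forget_mask).sum() / forget_mask.sum().clamp(min=1.0)
```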
- Trojans in Large Language Models of Code: A Critical Review through a Trigger-Based Taxonomy [11.075592348442225]
Large language models (LLMs) have provided many exciting new capabilities in software development.
The opaque nature of these models makes them difficult to reason about and inspect.
This work presents an overview of the current state-of-the-art trojan attacks on large language models of code.
arXiv Detail & Related papers (2024-05-05T06:43:52Z)
- LM-Polygraph: Uncertainty Estimation for Language Models [71.21409522341482]
Uncertainty estimation (UE) methods are one path to safer, more responsible, and more effective use of large language models (LLMs).
We introduce LM-Polygraph, a framework with implementations of a battery of state-of-the-art UE methods for LLMs in text generation tasks, with unified program interfaces in Python.
It introduces an extendable benchmark for consistent evaluation of UE techniques by researchers, and a demo web application that enriches the standard chat dialog with confidence scores.
arXiv Detail & Related papers (2023-11-13T15:08:59Z)
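For context on what a basic UE signal looks like, mean token-level predictive entropy is a common confidence baseline for generated text. The sketch below is a generic illustration assuming a Hugging Face-style causal LM returning a `.logits` tensor; it is not LM-Polygraph's actual interface.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_token_entropy(model, input_ids):
    """Score a sequence by the average entropy of its next-token distributions.

    Higher scores mean the model was less confident while generating.
    Assumes `model(input_ids).logits` has shape (batch, seq_len, vocab).
    """
    logits = model(input_ids).logits
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # (batch, seq_len)
    return entropy.mean(dim=-1)  # one uncertainty score per sequence
```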
- CodeGen2: Lessons for Training LLMs on Programming and Natural Languages [116.74407069443895]
We unify encoder- and decoder-based models into a single prefix-LM.
For learning methods, we explore the claim of a "free lunch" hypothesis.
For data distributions, we explore the effect of mixing programming and natural languages and of multi-epoch training on model performance.
arXiv Detail & Related papers (2023-05-03T17:55:25Z)
- Combining Contrastive Learning and Knowledge Graph Embeddings to develop medical word embeddings for the Italian language [0.0]
This paper attempts to improve available embeddings in the uncovered niche of the Italian medical domain.
The main objective is to improve the accuracy of semantic similarity between medical terms.
Since the Italian language lacks medical texts and controlled vocabularies, we have developed a specific solution.
arXiv Detail & Related papers (2022-11-09T17:12:28Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
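For context, the conventional trigger reverse-engineering formulation that such detectors build on (and which must be re-solved per candidate target label, the scaling cost this paper avoids) optimizes a blend mask and a patch so that stamping them onto arbitrary inputs forces a chosen label. A minimal Neural-Cleanse-style sketch; the shapes and hyperparameters are illustrative assumptions.

```python
import torch

def reverse_engineer_trigger(model, loader, target_label, steps=500, lam=1e-3):
    """Optimize a (mask, patch) pair that flips arbitrary inputs to
    `target_label`; an unusually small recovered mask suggests a backdoor."""
    mask = torch.zeros(1, 1, 28, 28, requires_grad=True)   # per-pixel blend weights
    patch = torch.zeros(1, 1, 28, 28, requires_grad=True)  # trigger pattern
    opt = torch.optim.Adam([mask, patch], lr=0.1)
    ce = torch.nn.CrossEntropyLoss()
    data_iter = iter(loader)
    for _ in range(steps):
        try:
            x, _ = next(data_iter)
        except StopIteration:
            data_iter = iter(loader)
            x, _ = next(data_iter)
        m = torch.sigmoid(mask)                       # keep blend weights in [0, 1]
        stamped = (1 - m) * x + m * torch.tanh(patch)
        target = torch.full((x.size(0),), target_label, dtype=torch.long)
        # Classification loss drives stamped inputs to the target label;
        # the L1 term prefers the smallest (most trigger-like) mask.
        loss = ce(model(stamped), target) + lam * m.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach(), torch.tanh(patch).detach()
```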
- A Comparative Study of Lexical Substitution Approaches based on Neural Language Models [117.96628873753123]
We present a large-scale comparative study of popular neural language and masked language models.
We show that the already competitive results achieved by SOTA LMs/MLMs can be further improved if information about the target word is injected properly.
arXiv Detail & Related papers (2020-05-29T18:43:22Z)
- The TrojAI Software Framework: An OpenSource tool for Embedding Trojans into Deep Learning Models [4.8986598953553555]
TrojAI is an open source set of Python tools capable of generating triggered (poisoned) datasets and associated deep learning models with trojans at scale.
We show that the nature of the trigger, training batch size, and dataset poisoning percentage all affect successful embedding of trojans.
We test Neural Cleanse against the trojaned MNIST models and successfully detect anomalies in the trained models approximately 18% of the time.
arXiv Detail & Related papers (2020-03-13T01:45:32Z)
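As an illustration of the kind of dataset poisoning TrojAI automates at scale, the sketch below stamps a small trigger patch onto a random fraction of MNIST-style images and flips their labels to an attacker-chosen class. The function and its parameters are illustrative, not TrojAI's API.

```python
import torch

def poison_dataset(images, labels, target_label, poison_frac=0.1, patch_size=4):
    """Stamp a white square trigger in the corner of a random subset of images
    and flip their labels to `target_label` (a classic patch-trigger backdoor).

    images: (N, 1, 28, 28) float tensor in [0, 1]; labels: (N,) long tensor.
    """
    images, labels = images.clone(), labels.clone()
    n_poison = int(len(images) * poison_frac)
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, -patch_size:, -patch_size:] = 1.0  # bottom-right trigger patch
    labels[idx] = target_label
    return images, labels, idx  # idx lets callers track which samples are poisoned
```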
- UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training [152.63467944568094]
We propose to pre-train a unified language model for both autoencoding and partially autoregressive language modeling tasks, using a novel training procedure referred to as a pseudo-masked language model (PMLM).
Our experiments show that the unified language models pre-trained using PMLM achieve new state-of-the-art results on a wide range of natural language understanding and generation tasks.
arXiv Detail & Related papers (2020-02-28T15:28:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.