Extracting Training Data from Large Language Models
- URL: http://arxiv.org/abs/2012.07805v1
- Date: Mon, 14 Dec 2020 18:39:09 GMT
- Title: Extracting Training Data from Large Language Models
- Authors: Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski,
Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar
Erlingsson, Alina Oprea, Colin Raffel
- Abstract summary: This paper demonstrates that an adversary can perform a training data extraction attack to recover individual training examples by querying the language model.
We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data.
- Score: 78.3839333127544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has become common to publish large (billion parameter) language models
that have been trained on private datasets. This paper demonstrates that in
such settings, an adversary can perform a training data extraction attack to
recover individual training examples by querying the language model.
We demonstrate our attack on GPT-2, a language model trained on scrapes of
the public Internet, and are able to extract hundreds of verbatim text
sequences from the model's training data. These extracted examples include
(public) personally identifiable information (names, phone numbers, and email
addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible
even though each of the above sequences is included in just one document in
the training data.
We comprehensively evaluate our extraction attack to understand the factors
that contribute to its success. For example, we find that larger models are
more vulnerable than smaller models. We conclude by drawing lessons and
discussing possible safeguards for training large language models.
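At a high level, the attack samples a large number of generations from the model and then ranks them with membership metrics (for example the model's perplexity on a sequence, or the ratio of that perplexity to the sequence's zlib entropy), so that likely-memorized text concentrates at the top of the ranking. The following is a minimal sketch of that generate-and-rank loop, assuming the Hugging Face transformers GPT-2 checkpoint; the sample count, sampling settings, and ranking metric here are illustrative choices rather than the paper's exact configuration.

# Minimal generate-and-rank extraction sketch (assumes Hugging Face
# transformers and PyTorch; parameters are illustrative, not the
# paper's exact setup).
import zlib
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model; memorized text scores low."""
    ids = tok(text, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def zlib_ratio(text: str) -> float:
    """Perplexity divided by zlib-compressed length: text the model finds
    much easier than a generic compressor does is a memorization suspect."""
    return perplexity(text) / len(zlib.compress(text.encode("utf-8")))

# 1) Sample candidate generations from a trivial prompt (the paper samples
#    at a far larger scale and also conditions on Internet-text prefixes).
bos = torch.tensor([[tok.bos_token_id]], device=device)
samples = []
for _ in range(20):
    out = model.generate(bos, do_sample=True, top_k=40,
                         min_length=64, max_length=256,
                         pad_token_id=tok.eos_token_id)
    text = tok.decode(out[0], skip_special_tokens=True).strip()
    if text:
        samples.append(text)

# 2) Rank candidates by the membership metric and inspect the top few;
#    verbatim memorized training text tends to concentrate there.
for text in sorted(samples, key=zlib_ratio)[:5]:
    print(f"{zlib_ratio(text):7.3f}  {text[:80]!r}")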
Related papers
- Special Characters Attack: Toward Scalable Training Data Extraction From Large Language Models [36.58320580210008]
We show that certain special characters or their combinations with English letters are stronger memory triggers, leading to more severe data leakage.
We propose a simple but effective Special Characters Attack (SCA) to induce training data leakage.
arXiv Detail & Related papers (2024-05-09T02:35:32Z)
- Traces of Memorisation in Large Language Models for Code [16.125924759649106]
Large language models for code are commonly trained on large unsanitised corpora of source code scraped from the internet.
We compare the rate of memorisation with large language models trained on natural language.
We find that large language models for code are vulnerable to data extraction attacks, like their natural language counterparts.
arXiv Detail & Related papers (2023-12-18T19:12:58Z)
- Scalable Extraction of Training Data from (Production) Language Models [93.7746567808049]
This paper studies extractable memorization: training data that an adversary can efficiently extract by querying a machine learning model without prior knowledge of the training dataset.
We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT.
arXiv Detail & Related papers (2023-11-28T18:47:03Z)
- Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks [65.21536453075275]
We focus on the summarization task and investigate the membership inference (MI) attack.
We exploit text similarity and the model's resistance to document modifications as potential MI signals (a generic loss-thresholding MI baseline is sketched after this list for comparison).
We discuss several safeguards for training summarization models to protect against MI attacks and discuss the inherent trade-off between privacy and utility.
arXiv Detail & Related papers (2023-10-20T05:44:39Z)
- Language Model Pre-Training with Sparse Latent Typing [66.75786739499604]
We propose a new pre-training objective, Sparse Latent Typing, which enables the model to sparsely extract sentence-level keywords with diverse latent types.
Experimental results show that our model is able to learn interpretable latent type categories in a self-supervised manner without using any external knowledge.
arXiv Detail & Related papers (2022-10-23T00:37:08Z)
- Recovering Private Text in Federated Learning of Language Models [30.646865969760412]
Federated learning allows distributed users to collaboratively train a model while keeping each user's data private.
We present a novel attack method FILM for federated learning of language models.
We show the feasibility of recovering text from large batch sizes of up to 128 sentences.
arXiv Detail & Related papers (2022-05-17T17:38:37Z)
- Training Data Leakage Analysis in Language Models [6.843491191969066]
We introduce a methodology for identifying user content in the training data that could be leaked under a strong and realistic threat model.
We propose two metrics to quantify user-level data leakage by measuring a model's ability to produce unique sentence fragments within training data.
arXiv Detail & Related papers (2021-01-14T00:57:32Z)
- Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
arXiv Detail & Related papers (2020-10-24T11:55:28Z)
- Comparison of Interactive Knowledge Base Spelling Correction Models for Low-Resource Languages [81.90356787324481]
Spelling normalization for low resource languages is a challenging task because the patterns are hard to predict.
This work compares a neural model and character language models trained with varying amounts of target language data.
Our usage scenario is interactive correction with nearly zero initial training examples, improving the models as more data is collected.
arXiv Detail & Related papers (2020-10-20T17:31:07Z)
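For the membership-inference entry above (Assessing Privacy Risks in Language Models), a simple point of reference is the classic loss-thresholding baseline: a sample is flagged as a likely training member when its loss under the model falls well below the loss of texts known to be outside the training set. The sketch below implements that generic baseline, assuming a Hugging Face GPT-2 checkpoint; it is not the similarity-based method proposed in that paper, and the calibration margin is an illustrative constant.

# Generic loss-thresholding membership-inference baseline (illustrative;
# the summarization-privacy paper above relies on similarity-based signals
# rather than this simple threshold rule).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_nll(text: str) -> float:
    """Average per-token negative log-likelihood of `text` under the model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def looks_like_member(candidate: str, known_non_members: list[str],
                      margin: float = 0.5) -> bool:
    """Flag `candidate` as a likely training-set member if its loss sits
    well below the mean loss of known non-member texts; `margin` is an
    illustrative calibration constant, not a tuned value."""
    baseline = sum(avg_nll(t) for t in known_non_members) / len(known_non_members)
    return avg_nll(candidate) < baseline - margin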
This list is automatically generated from the titles and abstracts of the papers on this site.