Bag of Tricks for Training Data Extraction from Language Models
- URL: http://arxiv.org/abs/2302.04460v2
- Date: Thu, 1 Jun 2023 10:14:55 GMT
- Title: Bag of Tricks for Training Data Extraction from Language Models
- Authors: Weichen Yu, Tianyu Pang, Qian Liu, Chao Du, Bingyi Kang, Yan Huang,
Min Lin, Shuicheng Yan
- Abstract summary: We investigate and benchmark tricks for improving training data extraction using a publicly available dataset.
The experimental results show that several previously overlooked tricks can be crucial to the success of training data extraction.
- Score: 98.40637430115204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advance of language models, privacy protection is receiving more
attention. Training data extraction is therefore of great importance, as it can
serve as a potential tool to assess privacy leakage. However, due to the
difficulty of this task, most of the existing methods are proof-of-concept and
still not effective enough. In this paper, we investigate and benchmark tricks
for improving training data extraction using a publicly available dataset.
Because most existing extraction methods use a pipeline of
generating-then-ranking, i.e., generating text candidates as potential training
data and then ranking them based on specific criteria, our research focuses on
the tricks for both text generation (e.g., sampling strategy) and text ranking
(e.g., token-level criteria). The experimental results show that several
previously overlooked tricks can be crucial to the success of training data
extraction. Based on the GPT-Neo 1.3B evaluation results, our proposed tricks
outperform the baseline by a large margin in most cases, providing a much
stronger baseline for future research. The code is available at
https://github.com/weichen-yu/LM-Extraction.
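As an illustration of the generating-then-ranking pipeline described in the abstract, here is a minimal sketch assuming GPT-Neo 1.3B as the target model, top-k sampling as the generation strategy, and candidate perplexity as the token-level ranking criterion. The paper's tricks go beyond this baseline, and the authors' actual implementation lives in the repository linked above.

```python
# Hedged sketch of a generate-then-rank training data extraction pipeline.
# Model choice, sampling settings, and the perplexity ranking are illustrative
# assumptions, not the paper's exact configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-1.3B"  # model family evaluated in the paper
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def generate_candidates(prefix: str, n: int = 8, max_new_tokens: int = 50):
    """Sample continuations of a prefix as candidate training-data suffixes."""
    inputs = tok(prefix, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,          # the sampling strategy is one of the "generation" knobs
            top_k=40,
            temperature=0.8,
            max_new_tokens=max_new_tokens,
            num_return_sequences=n,
            pad_token_id=tok.eos_token_id,
        )
    return [tok.decode(o, skip_special_tokens=True) for o in outputs]

def perplexity(text: str) -> float:
    """Token-level ranking criterion: lower perplexity suggests memorized text."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

prefix = "..."  # a known prefix whose training-data continuation we try to recover
candidates = generate_candidates(prefix)
best = min(candidates, key=perplexity)  # rank the candidates, keep the top one
print(best)
```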
Related papers
- Leveraging Skills from Unlabeled Prior Data for Efficient Online Exploration [54.8229698058649]
We study how unlabeled prior trajectory data can be leveraged to learn efficient exploration strategies.
Our method SUPE (Skills from Unlabeled Prior data for Exploration) demonstrates that a careful combination of these ideas compounds their benefits.
We empirically show that SUPE reliably outperforms prior strategies, successfully solving a suite of long-horizon, sparse-reward tasks.
arXiv Detail & Related papers (2024-10-23T17:58:45Z) - Training on the Benchmark Is Not All You Need [52.01920740114261]
We propose a simple and effective data leakage detection method based on the contents of multiple-choice options.
Our method is able to work under black-box conditions without access to model training data or weights.
We evaluate the degree of data leakage of 31 mainstream open-source LLMs on four benchmark datasets.
arXiv Detail & Related papers (2024-09-03T11:09:44Z) - Adaptive Pre-training Data Detection for Large Language Models via Surprising Tokens [1.2549198550400134]
Large language models (LLMs) are extensively used, but there are concerns regarding privacy, security, and copyright due to their opaque training data.
Current solutions to this problem leverage techniques explored in machine learning privacy such as Membership Inference Attacks (MIAs)
We propose an adaptive pre-training data detection method which alleviates this reliance and effectively amplifies the identification.
arXiv Detail & Related papers (2024-07-30T23:43:59Z) - Revisit Few-shot Intent Classification with PLMs: Direct Fine-tuning vs. Continual Pre-training [20.98770732015944]
Few-shot intent detection involves training a deep learning model to classify utterances based on their underlying intents using only a small amount of labeled data.
We show that continual pre-training may not be essential, since the overfitting problem of PLMs on this task may not be as serious as expected.
To maximize the utilization of the limited available data, we propose a context augmentation method and leverage sequential self-distillation to boost performance.
arXiv Detail & Related papers (2023-06-08T15:26:52Z) - Privacy Leakage in Text Classification: A Data Extraction Approach [9.045332526072828]
We study the potential privacy leakage in the text classification domain by investigating the problem of unintended memorization of training data.
We propose an algorithm to extract missing tokens of a partial text by exploiting the likelihood of the class label provided by the model (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-06-09T16:14:26Z) - Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data [82.92758444543689]
Retrieval-based methods have been shown to be effective in NLP tasks by introducing external knowledge.
Surprisingly, we found that REtrieving from the traINing datA (REINA) alone can lead to significant gains on multiple NLG and NLU tasks.
Experimental results show that this simple method can achieve significantly better performance on a variety of NLU and NLG tasks.
arXiv Detail & Related papers (2022-03-16T17:37:27Z) - Self-training Improves Pre-training for Natural Language Understanding [63.78927366363178]
We study self-training as another way to leverage unlabeled data through semi-supervised learning.
We introduce SentAugment, a data augmentation method which computes task-specific query embeddings from labeled data.
Our approach leads to scalable and effective self-training with improvements of up to 2.6% on standard text classification benchmarks.
arXiv Detail & Related papers (2020-10-05T17:52:25Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
To make training on the enlarged dataset tractable, we propose to apply a dataset distillation strategy to compress it into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
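Below is a minimal sketch of the class-label-likelihood idea from "Privacy Leakage in Text Classification" referenced in the list above: each candidate token is inserted into the partial text, the classifier is queried, and the candidate that maximizes the probability of the known label is kept. The classifier checkpoint, the [MISSING] placeholder, and the small candidate vocabulary are illustrative assumptions, not the authors' exact algorithm.

```python
# Hedged sketch: recover a hidden token of a partial text by querying a text
# classifier for the likelihood of a known class label.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

clf_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed stand-in classifier
tok = AutoTokenizer.from_pretrained(clf_name)
clf = AutoModelForSequenceClassification.from_pretrained(clf_name).eval()

def fill_missing_token(partial: str, target_label: int, candidates: list) -> str:
    """Pick the candidate whose insertion maximizes P(target_label | text)."""
    best_token, best_prob = None, -1.0
    for cand in candidates:
        text = partial.replace("[MISSING]", cand)
        inputs = tok(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = torch.softmax(clf(**inputs).logits, dim=-1)[0]
        if probs[target_label].item() > best_prob:
            best_token, best_prob = cand, probs[target_label].item()
    return best_token

# Usage: the attacker knows the surrounding text and the label the model assigns,
# and searches a small candidate vocabulary for the missing token.
partial = "The secret project codename is [MISSING] and the team loved it."
guess = fill_missing_token(partial, target_label=1, candidates=["alpha", "falcon", "mercury"])
print(guess)
```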
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.