Targeted Attack on GPT-Neo for the SATML Language Model Data Extraction Challenge
- URL: http://arxiv.org/abs/2302.07735v1
- Date: Mon, 13 Feb 2023 18:00:44 GMT
- Title: Targeted Attack on GPT-Neo for the SATML Language Model Data Extraction Challenge
- Authors: Ali Al-Kaswan, Maliheh Izadi, Arie van Deursen
- Abstract summary: We apply a targeted data extraction attack to the SATML2023 Language Model Training Data Extraction Challenge.
We maximise the recall of the model and are able to extract the suffix for 69% of the samples.
Our approach reaches a score of 0.405 recall at a 10% false positive rate, which is an improvement of 34% over the baseline of 0.301.
- Score: 4.438873396405334
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Previous work has shown that Large Language Models are susceptible to
so-called data extraction attacks. Such attacks allow an attacker to extract a
sample that was contained in the training data, which has massive privacy
implications. Constructing data extraction attacks is challenging: current
attacks are quite inefficient, and there remains a significant gap between the
extraction capabilities of untargeted attacks and the models' measured
memorization. Thus, targeted attacks have been proposed, which identify whether
a given sample from the training data is extractable from a model. In this
work, we apply a targeted data extraction attack to the SATML2023 Language
Model Training Data Extraction Challenge. We apply a two-step approach. In the
first step, we maximise the recall of the model and are able to extract the
suffix for 69% of the samples. In the second step, we use a classifier-based
Membership Inference Attack on the generations. Our AutoSklearn classifier
achieves a precision of 0.841. The full approach reaches a score of 0.405
recall at a 10% false positive rate, which is an improvement of 34% over the
baseline of 0.301.
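The abstract outlines a two-step pipeline (recall-oriented generation followed by a classifier-based membership inference attack) and reports recall at a 10% false positive rate. The sketch below is a minimal, hypothetical illustration of such a pipeline, not the authors' implementation: the GPT-Neo model name matches the challenge target, but the sampling parameters, the candidate features, and the stand-in scikit-learn classifier (the paper uses AutoSklearn) are assumptions.

```python
# Hypothetical sketch of the two-step approach described above (not the authors' code).
# Step 1: sample candidate suffixes from GPT-Neo for each given prefix (maximise recall).
# Step 2: score candidates with a membership-inference classifier and evaluate
#         recall at a 10% false positive rate.
import numpy as np
import torch
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_curve
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-neo-1.3B"  # target model of the SATML challenge
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()


@torch.no_grad()
def generate_candidates(prefix_ids, n_samples=8, suffix_len=50):
    """Step 1: draw several sampled 50-token continuations for one prefix of shape [1, 50]."""
    out = model.generate(
        prefix_ids.repeat(n_samples, 1),
        do_sample=True,
        top_k=24,                        # illustrative sampling parameters
        max_new_tokens=suffix_len,
        pad_token_id=tokenizer.eos_token_id,
    )
    return out[:, prefix_ids.shape[1]:]  # keep only the generated suffix tokens


@torch.no_grad()
def candidate_features(prefix_ids, suffix_ids):
    """Per-candidate features for the MIA classifier, e.g. suffix log-likelihood statistics."""
    ids = torch.cat([prefix_ids, suffix_ids.unsqueeze(0)], dim=1)
    logits = model(ids).logits[:, :-1]                         # predictions for tokens 1..L-1
    logp = torch.log_softmax(logits, dim=-1)
    tok_logp = logp.gather(-1, ids[:, 1:, None]).squeeze(-1)   # log p of each actual token
    suffix_logp = tok_logp[:, prefix_ids.shape[1] - 1:]        # restrict to suffix positions
    return [suffix_logp.mean().item(), suffix_logp.min().item()]


def recall_at_fpr(y_true, scores, max_fpr=0.10):
    """Challenge-style score: recall (true positive rate) at a 10% false positive rate."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    return float(np.max(tpr[fpr <= max_fpr]))


# Step 2: train the membership classifier on a labelled development split, then score
# test candidates. X_dev, y_dev, X_test, y_test are hypothetical arrays of candidate
# features and exact-match labels.
# mia = GradientBoostingClassifier().fit(X_dev, y_dev)
# scores = mia.predict_proba(X_test)[:, 1]
# print(recall_at_fpr(y_test, scores))
```

In this sketch the classifier's score is thresholded by the recall-at-10%-FPR metric itself, which mirrors how the challenge scores submissions.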
Related papers
- Fine-tuned Large Language Models (LLMs): Improved Prompt Injection Attacks Detection [6.269725911814401]
Large language models (LLMs) are becoming a popular tool as they have significantly advanced in their capability to tackle a wide range of language-based tasks.
However, LLM applications are highly vulnerable to prompt injection attacks, which poses a critical problem.
This project explores the security vulnerabilities in relation to prompt injection attacks.
arXiv Detail & Related papers (2024-10-28T00:36:21Z)
- Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs [61.04246774006429]
We introduce a black-box prompt optimization method that uses an attacker LLM agent to uncover higher levels of memorization in a victim agent.
We observe that our instruction-based prompts generate outputs with 23.7% higher overlap with training data compared to the baseline prefix-suffix measurements.
Our findings show that instruction-tuned models can expose pre-training data as much as their base models, if not more, and that using instructions proposed by other LLMs can open a new avenue for automated attacks.
arXiv Detail & Related papers (2024-03-05T19:32:01Z)
- Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning [14.011140902511135]
In-context learning, a paradigm bridging the gap between pre-training and fine-tuning, has demonstrated high efficacy in several NLP tasks.
Despite being widely applied, in-context learning is vulnerable to malicious attacks.
We design a new backdoor attack method, named ICLAttack, to target large language models based on in-context learning.
arXiv Detail & Related papers (2024-01-11T14:38:19Z)
- Scalable Extraction of Training Data from (Production) Language Models [93.7746567808049]
This paper studies extractable memorization: training data that an adversary can efficiently extract by querying a machine learning model without prior knowledge of the training dataset.
We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT.
arXiv Detail & Related papers (2023-11-28T18:47:03Z)
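The entry above defines extractable memorization as training data an adversary can recover just by querying the model. A generic, hypothetical illustration of that idea is sketched below: sample continuations from an open model and flag any generation that reproduces a long verbatim span of a known reference corpus. The in-memory n-gram index, prompts, and span length are assumptions standing in for the paper's large-scale matching machinery.

```python
# Generic illustration of checking model generations for verbatim training-data overlap.
# This is a simplified stand-in, not the paper's pipeline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-neo-1.3B"   # one of the open models named in the summary
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
lm = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()


def ngram_index(corpus_texts, n=50):
    """Index every n-token window of a reference corpus (toy substitute for a suffix array)."""
    index = set()
    for text in corpus_texts:
        ids = tok.encode(text)
        for i in range(len(ids) - n + 1):
            index.add(tuple(ids[i:i + n]))
    return index


@torch.no_grad()
def extract_candidates(prompt="", n_samples=4, new_tokens=128):
    """Sample continuations from a short (possibly empty) prompt."""
    ids = tok(tok.bos_token + prompt, return_tensors="pt").input_ids
    out = lm.generate(ids.repeat(n_samples, 1), do_sample=True, top_p=0.95,
                      max_new_tokens=new_tokens, pad_token_id=tok.eos_token_id)
    return [row.tolist() for row in out]


def memorized(generation_ids, index, n=50):
    """A generation counts as extracted memorization if any n-token window hits the corpus."""
    return any(tuple(generation_ids[i:i + n]) in index
               for i in range(len(generation_ids) - n + 1))
```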
- Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models [1.2499537119440245]
We focus on embedded deep neural network models on 32-bit microcontrollers in the Internet of Things (IoT).
We propose a black-box approach to craft a successful attack set.
For a classical convolutional neural network, we successfully recover at least 90% of the most significant bits with about 1500 crafted inputs.
arXiv Detail & Related papers (2023-08-31T13:09:33Z)
- Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confidence Estimation [56.57532238195446]
We propose a method named Ethicist for targeted training data extraction.
To elicit memorization, we tune soft prompt embeddings while keeping the model fixed.
We show that Ethicist significantly improves the extraction performance on a recently proposed public benchmark.
arXiv Detail & Related papers (2023-07-10T08:03:41Z)
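The Ethicist entry above describes tuning soft prompt embeddings while keeping the model fixed. Below is a generic, hypothetical soft-prompt-tuning sketch in that spirit: learnable prompt embeddings are prepended to each prefix and optimised to raise the likelihood of the known suffix while the language model stays frozen. The loss smoothing and calibrated confidence estimation of the paper are omitted; the model size and hyperparameters are assumptions.

```python
# Generic soft-prompt tuning with a frozen causal LM (not the Ethicist implementation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-neo-125M"   # small model for illustration
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
lm = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()
for p in lm.parameters():                # keep the model fixed
    p.requires_grad_(False)

n_soft, d = 20, lm.config.hidden_size
soft_prompt = torch.nn.Parameter(torch.randn(1, n_soft, d) * 0.02)
opt = torch.optim.Adam([soft_prompt], lr=1e-3)


def step(prefix_ids, suffix_ids):
    """One optimisation step: maximise log p(suffix | soft prompt, prefix).

    prefix_ids has shape [1, P], suffix_ids has shape [1, S].
    """
    embed = lm.get_input_embeddings()
    inputs = torch.cat([soft_prompt, embed(prefix_ids), embed(suffix_ids)], dim=1)
    logits = lm(inputs_embeds=inputs).logits
    start = n_soft + prefix_ids.shape[1] - 1              # positions predicting the suffix
    pred = logits[:, start:start + suffix_ids.shape[1], :]
    loss = torch.nn.functional.cross_entropy(
        pred.reshape(-1, pred.shape[-1]), suffix_ids.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```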
- DAD: Data-free Adversarial Defense at Test Time [21.741026088202126]
Deep models are highly susceptible to adversarial attacks.
Privacy has become an important concern, restricting access to only trained models but not the training data.
We propose the novel problem of 'test-time adversarial defense in the absence of training data and even their statistics'.
arXiv Detail & Related papers (2022-04-04T15:16:13Z)
- First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data [0.0]
This paper presents a novel model extraction attack, named TEMPEST, under a practical data-free setting.
Experiments show that our attack can achieve the same level of performance as the previous attacks.
We discuss the possibility of executing TEMPEST in the real world through an experiment on a medical diagnosis task.
arXiv Detail & Related papers (2021-09-30T05:30:12Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacks aim to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- How Does Data Augmentation Affect Privacy in Machine Learning? [94.52721115660626]
We propose new membership inference (MI) attacks that utilize the information in augmented data.
We establish the optimal membership inference when the model is trained with augmented data.
arXiv Detail & Related papers (2020-07-21T02:21:10Z)
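The entry above proposes membership inference attacks that exploit augmented data. A generic, hypothetical sketch of that idea follows: score a candidate record by the model's loss on the record and on several augmented copies, on the intuition that training members (seen alongside their augmentations) have consistently low loss. The augmentations and the aggregation rule are assumptions, not the paper's optimal attack.

```python
# Augmentation-aware membership score for an image classifier (generic sketch).
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Illustrative augmentations for 32x32 images (e.g. CIFAR-like data).
augment = T.Compose([T.RandomHorizontalFlip(p=1.0), T.RandomCrop(32, padding=4)])


@torch.no_grad()
def membership_score(model, image, label, n_aug=8):
    """Aggregate the loss over the image and its augmented views; higher score => more likely a member."""
    views = [image] + [augment(image) for _ in range(n_aug)]
    batch = torch.stack(views)                                  # [n_aug + 1, C, H, W]
    targets = torch.full((len(views),), label, dtype=torch.long)
    losses = F.cross_entropy(model(batch), targets, reduction="none")
    return -losses.mean().item()                                # threshold this score to decide membership
```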
- Uncertainty-aware Self-training for Text Classification with Few Labels [54.13279574908808]
We study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck.
We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network.
We show that our methods, leveraging only 20-30 labeled samples per class per task for training and validation, can perform within 3% of fully supervised pre-trained language models.
arXiv Detail & Related papers (2020-06-27T08:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.