User Inference Attacks on Large Language Models
- URL: http://arxiv.org/abs/2310.09266v2
- Date: Fri, 23 Feb 2024 20:25:17 GMT
- Title: User Inference Attacks on Large Language Models
- Authors: Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz,
Christopher A. Choquette-Choo, Zheng Xu
- Abstract summary: Fine-tuning is a common and effective method for tailoring large language models (LLMs) to specialized tasks and applications.
We study the privacy implications of fine-tuning LLMs on user data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fine-tuning is a common and effective method for tailoring large language
models (LLMs) to specialized tasks and applications. In this paper, we study
the privacy implications of fine-tuning LLMs on user data. To this end, we
consider a realistic threat model, called user inference, wherein an attacker
infers whether or not a user's data was used for fine-tuning. We design attacks
for performing user inference that require only black-box access to the
fine-tuned LLM and a few samples from a user which need not be from the
fine-tuning dataset. We find that LLMs are susceptible to user inference across
a variety of fine-tuning datasets, at times with near-perfect attack success
rates. Further, we theoretically and empirically investigate the properties
that make users vulnerable to user inference, finding that outlier users, users
with identifiable shared features between examples, and users that contribute a
large fraction of the fine-tuning data are most susceptible to attack. Based on
these findings, we identify several methods for mitigating user inference
including training with example-level differential privacy, removing
within-user duplicate examples, and reducing a user's contribution to the
training data. While these techniques provide partial mitigation of user
inference, we highlight the need to develop methods to fully protect fine-tuned
LLMs against this privacy risk.
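As a concrete illustration of the black-box setting described above, here is a minimal sketch of one way such an attack statistic can be computed: aggregate the log-likelihood ratio between the fine-tuned model and a reference model over a handful of a user's samples, then compare the result to a calibrated threshold. The model names, helper functions, and threshold handling are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a user inference statistic, assuming black-box
# access that exposes (or lets us recover) token log-likelihoods.
# "gpt2" stands in for both the fine-tuned target model and the
# pre-trained reference model; a real attack would load two
# different checkpoints.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def avg_log_likelihood(model, tokenizer, text):
    """Average per-token log-likelihood of `text` under `model`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return -out.loss.item()  # loss is the mean negative log-likelihood

def user_inference_score(target, reference, tokenizer, user_samples):
    """Mean log-likelihood ratio over a user's samples; higher values
    suggest the user's data was present in the fine-tuning set."""
    ratios = [
        avg_log_likelihood(target, tokenizer, s)
        - avg_log_likelihood(reference, tokenizer, s)
        for s in user_samples
    ]
    return sum(ratios) / len(ratios)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
target = AutoModelForCausalLM.from_pretrained("gpt2")     # stand-in: fine-tuned model
reference = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in: reference model
score = user_inference_score(target, reference, tokenizer,
                             ["an example message written by the user"])
print("attack statistic:", score)  # decide via a threshold calibrated on held-out users
```

Note that the samples scored here need not come from the fine-tuning set itself, which is what makes the threat model realistic.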
Related papers
- Simultaneous Unlearning of Multiple Protected User Attributes From Variational Autoencoder Recommenders Using Adversarial Training [8.272412404173954]
We present AdvXMultVAE, which aims to unlearn multiple protected attributes simultaneously to improve fairness across demographic user groups.
Our experiments on two datasets, LFM-2b-100k and ML-1M, show that our approach can yield better results than its singular removal counterparts.
arXiv Detail & Related papers (2024-10-28T12:36:00Z)
- Prompt Tuning as User Inherent Profile Inference Machine [53.78398656789463]
We propose UserIP-Tuning, which uses prompt-tuning to infer user profiles.
A profile quantization codebook bridges the modality gap by quantizing profile embeddings into collaborative IDs.
Experiments on four public datasets show that UserIP-Tuning outperforms state-of-the-art recommendation algorithms.
arXiv Detail & Related papers (2024-08-13T02:25:46Z)
- SeGA: Preference-Aware Self-Contrastive Learning with Prompts for Anomalous User Detection on Twitter [14.483830120541894]
We propose SeGA, preference-aware self-contrastive learning for anomalous user detection.
SeGA uses large language models to summarize user preferences via posts.
We empirically validate the effectiveness of the model design and pre-training strategies.
arXiv Detail & Related papers (2023-12-17T05:35:28Z)
- Recovering from Privacy-Preserving Masking with Large Language Models [14.828717714653779]
We use large language models (LLMs) to suggest substitutes for masked tokens.
We show that models trained on the obfuscation corpora are able to achieve comparable performance with the ones trained on the original data.
arXiv Detail & Related papers (2023-09-12T16:39:41Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Membership Inference Attacks Against Latent Factor Model [0.0]
We use the latent factor model as the recommender to get the list of recommended items.
A shadow recommender is established to derive the labeled training data for the attack model.
Experimental results show that the AUC of our attack model reaches 0.857 on the real-world MovieLens dataset.
arXiv Detail & Related papers (2022-12-15T08:16:08Z)
- Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries [53.222218035435006]
We use adversarial tools to optimize for queries that are discriminative and diverse.
Our improvements achieve significantly more accurate membership inference than existing methods.
arXiv Detail & Related papers (2022-10-19T17:46:50Z)
- The Minority Matters: A Diversity-Promoting Collaborative Metric Learning Algorithm [154.47590401735323]
Collaborative Metric Learning (CML) has recently emerged as a popular method in recommendation systems.
This paper focuses on a challenging scenario where a user has multiple categories of interests.
We propose a novel method called Diversity-Promoting Collaborative Metric Learning (DPCML).
arXiv Detail & Related papers (2022-09-30T08:02:18Z)
- FedCL: Federated Contrastive Learning for Privacy-Preserving Recommendation [98.5705258907774]
FedCL can exploit high-quality negative samples for effective model training with privacy well protected.
We first infer user embeddings from local user data through the local model on each client, and then perturb them with local differential privacy (LDP).
Since individual user embeddings contain heavy noise due to LDP, we propose to cluster user embeddings on the server to mitigate the influence of noise (a minimal sketch of this perturb-then-cluster flow appears after this list).
arXiv Detail & Related papers (2022-04-21T02:37:10Z)
- Personalized Adaptive Meta Learning for Cold-start User Preference Prediction [46.65783845757707]
A common challenge in personalized user preference prediction is the cold-start problem.
We propose a novel personalized adaptive meta learning approach that considers both major and minor users.
Our method dramatically outperforms state-of-the-art methods for both minor and major users.
arXiv Detail & Related papers (2020-12-22T05:48:08Z)
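The FedCL entry above describes a perturb-then-cluster flow: each client clips and noises its user embedding before upload, and the server clusters the noisy embeddings to average the noise out. Below is a minimal sketch of that flow under illustrative assumptions; the Laplace mechanism, clipping bound, noise scale, and cluster count are placeholders, not parameters from the paper.

```python
# Minimal sketch of LDP-style perturbation of user embeddings
# (client side) followed by server-side clustering, per the FedCL
# entry above. All constants are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def perturb_embedding(embedding, clip_norm=1.0, noise_scale=0.5, rng=None):
    """Clip to bound sensitivity, then add Laplace noise on the client."""
    rng = rng or np.random.default_rng()
    norm = max(np.linalg.norm(embedding), 1e-12)
    clipped = embedding * min(1.0, clip_norm / norm)
    return clipped + rng.laplace(0.0, noise_scale, size=embedding.shape)

def cluster_embeddings(noisy_embeddings, n_clusters=8):
    """Server side: cluster noisy embeddings so that averaging within
    clusters mitigates the per-user LDP noise."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(noisy_embeddings)
    return km.cluster_centers_

# Simulate 100 clients, each uploading one perturbed 64-dim embedding.
rng = np.random.default_rng(0)
noisy = np.stack([perturb_embedding(rng.standard_normal(64), rng=rng)
                  for _ in range(100)])
centers = cluster_embeddings(noisy)
print(centers.shape)  # (8, 64): denoised centers, e.g. as negative samples
```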
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.