Evaluating LLM-based Personal Information Extraction and Countermeasures
- URL: http://arxiv.org/abs/2408.07291v3
- Date: Fri, 31 Jan 2025 05:16:50 GMT
- Title: Evaluating LLM-based Personal Information Extraction and Countermeasures
- Authors: Yupei Liu, Yuqi Jia, Jinyuan Jia, Neil Zhenqiang Gong,
- Abstract summary: Large language model (LLM) based personal information extraction can be benchmarked.<n>LLM can be misused by attackers to accurately extract various personal information from personal profiles.<n> prompt injection can defend against strong LLM-based attacks, reducing the attack to less effective traditional ones.
- Score: 63.91918057570824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatically extracting personal information -- such as name, phone number, and email address -- from publicly available profiles at a large scale is a stepstone to many other security attacks including spear phishing. Traditional methods -- such as regular expression, keyword search, and entity detection -- achieve limited success at such personal information extraction. In this work, we perform a systematic measurement study to benchmark large language model (LLM) based personal information extraction and countermeasures. Towards this goal, we present a framework for LLM-based extraction attacks; collect four datasets including a synthetic dataset generated by GPT-4 and three real-world datasets with manually labeled eight categories of personal information; introduce a novel mitigation strategy based on prompt injection; and systematically benchmark LLM-based attacks and countermeasures using ten LLMs and five datasets. Our key findings include: LLM can be misused by attackers to accurately extract various personal information from personal profiles; LLM outperforms traditional methods; and prompt injection can defend against strong LLM-based attacks, reducing the attack to less effective traditional ones.
Related papers
- Model Inversion in Split Learning for Personalized LLMs: New Insights from Information Bottleneck Theory [11.83473842859642]
This work is the first to identify model inversion attacks in the split learning framework for personalized LLMs.
We propose a two-stage attack system in which the first part projects representations into the embedding space, and the second part uses a generative model to recover text from these embeddings.
arXiv Detail & Related papers (2025-01-10T13:47:13Z) - MemHunter: Automated and Verifiable Memorization Detection at Dataset-scale in LLMs [28.593941036010417]
We introduce MemHunter, which trains a memory-inducing LLM and employs hypothesis testing to efficiently detect memorization at the dataset level.
MemHunter is the first method capable of dataset-level memorization detection, providing a critical tool for assessing privacy risks in large-scale datasets.
arXiv Detail & Related papers (2024-12-10T07:42:46Z) - Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z) - Extracting Memorized Training Data via Decomposition [24.198975804570072]
We demonstrate a simple, query-based decompositional method to extract news articles from two frontier Large Language Models.
We extract at least one sentence from 73 articles, and over 20% of verbatim sentences from 6 articles.
If replicable at scale, this training data extraction methodology could expose new LLM security and safety vulnerabilities.
arXiv Detail & Related papers (2024-09-18T23:59:32Z) - LLM-PBE: Assessing Data Privacy in Large Language Models [111.58198436835036]
Large Language Models (LLMs) have become integral to numerous domains, significantly advancing applications in data management, mining, and analysis.
Despite the critical nature of this issue, there has been no existing literature to offer a comprehensive assessment of data privacy risks in LLMs.
Our paper introduces LLM-PBE, a toolkit crafted specifically for the systematic evaluation of data privacy risks in LLMs.
arXiv Detail & Related papers (2024-08-23T01:37:29Z) - Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
Existing techniques face the emerging challenges of re-identification attack ability of Large Language Models.
This paper proposes a framework composed of three LLM-based components -- a privacy evaluator, a utility evaluator, and an optimization component.
arXiv Detail & Related papers (2024-07-16T14:28:56Z) - Are you still on track!? Catching LLM Task Drift with Activations [55.75645403965326]
Task drift allows attackers to exfiltrate data or influence the LLM's output for other users.
We show that a simple linear classifier can detect drift with near-perfect ROC AUC on an out-of-distribution test set.
We observe that this approach generalizes surprisingly well to unseen task domains, such as prompt injections, jailbreaks, and malicious instructions.
arXiv Detail & Related papers (2024-06-02T16:53:21Z) - Sampling-based Pseudo-Likelihood for Membership Inference Attacks [36.62066767969338]
Membership Inference Attacks (MIAs) determine whether a given text is included in the model's training data.
We propose a sampling-based Pseudo-Likelihood (textbfSPL) method for MIA that calculates SPL using only the text generated by an LLM to detect leaks.
arXiv Detail & Related papers (2024-04-17T11:12:59Z) - Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning [61.2224355547598]
Open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress.
Our investigation exposes a critical oversight in this belief.
By deploying carefully designed demonstrations, our research demonstrates that base LLMs could effectively interpret and execute malicious instructions.
arXiv Detail & Related papers (2024-04-16T13:22:54Z) - Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs [61.04246774006429]
We introduce a black-box prompt optimization method that uses an attacker LLM agent to uncover higher levels of memorization in a victim agent.
We observe that our instruction-based prompts generate outputs with 23.7% higher overlap with training data compared to the baseline prefix-suffix measurements.
Our findings show that instruction-tuned models can expose pre-training data as much as their base-models, if not more so, and using instructions proposed by other LLMs can open a new avenue of automated attacks.
arXiv Detail & Related papers (2024-03-05T19:32:01Z) - Recovering from Privacy-Preserving Masking with Large Language Models [14.828717714653779]
We use large language models (LLMs) to suggest substitutes of masked tokens.
We show that models trained on the obfuscation corpora are able to achieve comparable performance with the ones trained on the original data.
arXiv Detail & Related papers (2023-09-12T16:39:41Z) - Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs [59.596335292426105]
This paper collects the first open-source dataset to evaluate safeguards in large language models.
We train several BERT-like classifiers to achieve results comparable with GPT-4 on automatic safety evaluation.
arXiv Detail & Related papers (2023-08-25T14:02:12Z) - On Extracting Specialized Code Abilities from Large Language Models: A
Feasibility Study [22.265542509143756]
We investigate the feasibility of launching imitation attacks on large language models (LLMs)
We show that attackers can train a medium-sized backbone model to replicate specialized code behaviors similar to the target LLMs.
arXiv Detail & Related papers (2023-03-06T10:34:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.