Mitigating Approximate Memorization in Language Models via Dissimilarity
Learned Policy
- URL: http://arxiv.org/abs/2305.01550v1
- Date: Tue, 2 May 2023 15:53:28 GMT
- Title: Mitigating Approximate Memorization in Language Models via Dissimilarity
Learned Policy
- Authors: Aly M. Kassem
- Abstract summary: Large Language models (LLMs) are trained on large amounts of data.
LLMs have been shown to memorize parts of their training data and to emit those data verbatim when an adversary prompts them appropriately.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language models (LLMs) are trained on large amounts of data, which can
include sensitive information that may compromise personal privacy. LLMs have
been shown to memorize parts of their training data and to emit those data
verbatim when an adversary prompts them appropriately. Previous research has
primarily focused on data preprocessing and differential privacy techniques,
either to address memorization or to prevent only verbatim memorization, which
can give a false sense of privacy. However, these methods rely on explicit and
implicit assumptions about
the structure of the data to be protected, which often results in an incomplete
solution to the problem. To address this, we propose a novel framework that
utilizes a reinforcement learning approach (PPO) to fine-tune LLMs to mitigate
approximate memorization. Our approach utilizes a negative similarity score,
such as BERTScore or SacreBLEU, as a reward signal to learn a dissimilarity
policy. Our results demonstrate that this framework effectively mitigates
approximate memorization while maintaining high levels of coherence and fluency
in the generated samples. Furthermore, our framework is robust in mitigating
approximate memorization across various circumstances, including longer
context, which is known to increase memorization in LLMs.
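
The core of the proposed framework is the reward signal: the negative of a similarity score between the model's generation and the corresponding memorized training continuation, so that PPO learns to diverge from near-verbatim reproductions. The snippet below is a minimal, illustrative sketch of such a reward, assuming the sacrebleu Python package; it is not the authors' implementation, and the surrounding PPO fine-tuning loop (e.g., with an RLHF library) is omitted.

```python
# Illustrative sketch (not the paper's code): negative SacreBLEU similarity
# between a generated continuation and the memorized training suffix, used
# as the reward for PPO fine-tuning. BERTScore could be substituted to
# penalize semantic, not just lexical, overlap.
import sacrebleu


def dissimilarity_reward(generated: str, training_suffix: str) -> float:
    """Reward = -similarity(generated, memorized suffix), in [-1, 0]."""
    # sentence_bleu returns a score in [0, 100]; normalize before negating.
    bleu = sacrebleu.sentence_bleu(generated, [training_suffix]).score / 100.0
    return -bleu


# A near-verbatim continuation is strongly penalized; a rewrite is not.
memorized = "The secret passphrase for the staging server is swordfish."
verbatim = "The secret passphrase for the staging server is swordfish."
paraphrase = "That information is not something the model should repeat."
print(dissimilarity_reward(verbatim, memorized))    # close to -1.0
print(dissimilarity_reward(paraphrase, memorized))  # close to  0.0
```

In a full pipeline this reward would typically be combined with a KL penalty against the original model during PPO, which is one standard way to keep generations coherent and fluent while the policy moves away from memorized text.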
Related papers
- Undesirable Memorization in Large Language Models: A Survey [5.659933808910005]
We present a Systematization of Knowledge (SoK) on the topic of memorization in Large Language Models (LLMs).
Memorization refers to a model's tendency to store and reproduce phrases or passages from its training data.
We discuss the metrics and methods used to measure memorization, followed by an analysis of the factors that contribute to the memorization phenomenon.
arXiv Detail & Related papers (2024-10-03T16:34:46Z)
- Unlocking Memorization in Large Language Models with Dynamic Soft Prompting [66.54460367290146]
Large language models (LLMs) have revolutionized natural language processing (NLP) tasks such as summarization, question answering, and translation.
LLMs pose significant security risks due to their tendency to memorize training data, leading to potential privacy breaches and copyright infringement.
We propose a novel method for estimating LLM memorization using dynamic, prefix-dependent soft prompts.
arXiv Detail & Related papers (2024-09-20T18:56:32Z)
- Rethinking LLM Memorization through the Lens of Adversarial Compression [93.13830893086681]
Large language models (LLMs) trained on web-scale datasets raise substantial concerns regarding permissible data usage.
One major question is whether these models "memorize" all their training data or whether they integrate many data sources in a way more akin to how a human learns and synthesizes information.
We propose the Adversarial Compression Ratio (ACR) as a metric for assessing memorization in LLMs.
arXiv Detail & Related papers (2024-04-23T15:49:37Z)
- SoK: Memorization in General-Purpose Large Language Models [25.448127387943053]
Large Language Models (LLMs) are advancing at a remarkable pace, with myriad applications under development.
LLMs can memorize short secrets in the training data, but can also memorize concepts like facts or writing styles that can be expressed in text in many different ways.
We propose a taxonomy for memorization in LLMs that covers verbatim text, facts, ideas and algorithms, writing styles, distributional properties, and alignment goals.
arXiv Detail & Related papers (2023-10-24T14:25:53Z)
- Exploring Memorization in Fine-tuned Language Models [53.52403444655213]
We conduct the first comprehensive analysis to explore language models' memorization during fine-tuning across tasks.
Our studies with open-source and our own fine-tuned LMs across various tasks indicate that memorization varies strongly across different fine-tuning tasks.
We provide an intuitive explanation of this task disparity via sparse coding theory and unveil a strong correlation between memorization and attention score distribution.
arXiv Detail & Related papers (2023-10-10T15:41:26Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- Quantifying and Analyzing Entity-level Memorization in Large Language Models [4.59914731734176]
Large language models (LLMs) have been proven capable of memorizing their training data.
Privacy risks arising from memorization have attracted increasing attention.
We propose a fine-grained, entity-level definition to quantify memorization with conditions and metrics closer to real-world scenarios.
arXiv Detail & Related papers (2023-08-30T03:06:47Z)
- Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy [91.98116450958331]
We argue that verbatim memorization definitions are too restrictive and fail to capture more subtle forms of memorization.
Specifically, we design and implement an efficient defense that perfectly prevents all verbatim memorization.
We conclude by discussing potential alternative definitions and why defining memorization is a difficult yet crucial open question for neural language models.
arXiv Detail & Related papers (2022-10-31T17:57:55Z)
- Knowledge Unlearning for Mitigating Privacy Risks in Language Models [31.322818016245087]
We propose knowledge unlearning as an alternative method to reduce privacy risks for language models.
We show that simply applying the unlikelihood training objective to target token sequences is effective at forgetting them.
We show that unlearning can give a stronger empirical privacy guarantee in scenarios where the data vulnerable to extraction attacks are known a priori.
arXiv Detail & Related papers (2022-10-04T10:18:11Z)
- Counterfactual Memorization in Neural Language Models [91.8747020391287]
Modern neural language models that are widely used in various NLP tasks risk memorizing sensitive information from their training data.
An open question in previous studies of language model memorization is how to filter out "common" memorization.
We formulate a notion of counterfactual memorization which characterizes how a model's predictions change if a particular document is omitted during training.
arXiv Detail & Related papers (2021-12-24T04:20:57Z)
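
As an aside on the last entry above: counterfactual memorization measures how a model's predictions on a document change when that document is omitted from training. The sketch below illustrates that quantity, assuming many models can be trained and scored on random data subsets; train_and_score is a hypothetical placeholder, not an API from the cited paper.

```python
# Illustrative sketch of counterfactual memorization: the gap in expected
# per-example performance between models trained on subsets that contain the
# example and models trained on subsets that do not.
from statistics import mean
from typing import Callable, Sequence


def counterfactual_memorization(
    example: str,
    subsets: Sequence[Sequence[str]],
    train_and_score: Callable[[Sequence[str], str], float],
) -> float:
    """E[score | example in training set] - E[score | example held out]."""
    in_scores = [train_and_score(s, example) for s in subsets if example in s]
    out_scores = [train_and_score(s, example) for s in subsets if example not in s]
    # A large gap means the model predicts the example well only when it was
    # trained on it, i.e. the example is memorized rather than predictable
    # from the rest of the data, which filters out "common" memorization.
    return mean(in_scores) - mean(out_scores)
```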