Counterfactual Memorization in Neural Language Models
- URL: http://arxiv.org/abs/2112.12938v2
- Date: Fri, 13 Oct 2023 22:16:41 GMT
- Title: Counterfactual Memorization in Neural Language Models
- Authors: Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski,
Florian Tramèr, Nicholas Carlini
- Abstract summary: Modern neural language models that are widely used in various NLP tasks risk memorizing sensitive information from their training data.
An open question in previous studies of language model memorization is how to filter out "common" memorization.
We formulate a notion of counterfactual memorization which characterizes how a model's predictions change if a particular document is omitted during training.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern neural language models that are widely used in various NLP tasks risk
memorizing sensitive information from their training data. Understanding this
memorization is important in real-world applications and also from a
learning-theoretical perspective. An open question in previous studies of
language model memorization is how to filter out "common" memorization. In
fact, most memorization criteria strongly correlate with the number of
occurrences in the training set, capturing memorized familiar phrases, public
knowledge, templated texts, or other repeated data. We formulate a notion of
counterfactual memorization which characterizes how a model's predictions
change if a particular document is omitted during training. We identify and
study counterfactually-memorized training examples in standard text datasets.
We estimate the influence of each memorized training example on the validation
set and on generated texts, showing how this can provide direct evidence of the
source of memorization at test time.
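Read concretely, the quantity above is a difference of expectations: a model's performance on a document when that document was included in training, minus its performance when it was held out, with both expectations taken over models trained on random subsets of the data; influence replaces the document with a validation or generated text in the second argument. The following is a minimal sketch, not the authors' implementation, of how such estimates could be computed from a collection of subset-trained models; the array names, shapes, and the use of per-token accuracy as the performance score are illustrative assumptions.
```python
# Minimal sketch (not the paper's code): estimate counterfactual memorization and
# counterfactual influence from m models trained on random subsets of the data.
# Assumed inputs:
#   train_scores[r, i] - performance (e.g. per-token accuracy) of model r on training example i
#   val_scores[r, j]   - performance of model r on validation or generated text j
#   in_subset[r, i]    - True if training example i was in model r's training subset
import numpy as np

def counterfactual_memorization(train_scores, in_subset):
    """mem(i) = E[score on i | i in subset] - E[score on i | i held out]."""
    seen = np.where(in_subset, train_scores, np.nan)
    unseen = np.where(~in_subset, train_scores, np.nan)
    # Assumes every example appears in some, but not all, of the m subsets.
    return np.nanmean(seen, axis=0) - np.nanmean(unseen, axis=0)

def counterfactual_influence(val_scores, in_subset):
    """infl(i, j) = E[score on j | i in subset] - E[score on j | i held out]."""
    m = in_subset.shape[0]
    n_in = in_subset.sum(axis=0).astype(float)            # models that saw each example i
    mean_in = in_subset.T.astype(float) @ val_scores / n_in[:, None]
    mean_out = (~in_subset).T.astype(float) @ val_scores / (m - n_in)[:, None]
    return mean_in - mean_out                              # shape: (n_train, n_val)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n_train, n_val = 40, 100, 20                        # toy sizes
    in_subset = rng.random((m, n_train)) < 0.5
    train_scores = rng.random((m, n_train))
    val_scores = rng.random((m, n_val))
    print(counterfactual_memorization(train_scores, in_subset).shape)  # (100,)
    print(counterfactual_influence(val_scores, in_subset).shape)       # (100, 20)
```
Under these assumptions, a large mem value flags an example that models only predict well after seeing that exact example, and a large infl(i, j) traces test-time behaviour on text j back to training example i, i.e. the direct evidence of the source of memorization referred to above.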
Related papers
- Demystifying Verbatim Memorization in Large Language Models [67.49068128909349]
Large Language Models (LLMs) frequently memorize long sequences verbatim, often with serious legal and privacy implications.
We develop a framework to study verbatim memorization in a controlled setting by continuing pre-training from Pythia checkpoints with injected sequences (a minimal sketch of the verbatim-recall check appears after this list).
We find that (1) non-trivial amounts of repetition are necessary for verbatim memorization to happen; (2) later (and presumably better) checkpoints are more likely to memorize verbatim sequences, even for out-of-distribution sequences.
arXiv Detail & Related papers (2024-07-25T07:10:31Z) - ROME: Memorization Insights from Text, Logits and Representation [17.458840481902644]
This paper proposes an innovative approach named ROME that bypasses direct processing of the training data.
Specifically, we select datasets categorized into three distinct types -- context-independent, conventional, and factual.
Our analysis then focuses on disparities between memorized and non-memorized samples by examining the logits and representations of generated texts.
arXiv Detail & Related papers (2024-03-01T13:15:30Z) - Exploring Memorization in Fine-tuned Language Models [53.52403444655213]
We conduct the first comprehensive analysis to explore language models' memorization during fine-tuning across tasks.
Our studies with open-source and our own fine-tuned LMs across various tasks indicate that the degree of memorization differs strongly across fine-tuning tasks.
We provide an intuitive explanation of this task disparity via sparse coding theory and unveil a strong correlation between memorization and attention score distribution.
arXiv Detail & Related papers (2023-10-10T15:41:26Z) - Quantifying and Analyzing Entity-level Memorization in Large Language Models [4.59914731734176]
Large language models (LLMs) have been proven capable of memorizing their training data.
Privacy risks arising from memorization have attracted increasing attention.
We propose a fine-grained, entity-level definition to quantify memorization with conditions and metrics closer to real-world scenarios.
arXiv Detail & Related papers (2023-08-30T03:06:47Z) - Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z) - Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models [64.22311189896888]
We study exact memorization in causal and masked language modeling, across model sizes and throughout the training process.
Surprisingly, we show that larger models can memorize a larger portion of the data before over-fitting and tend to forget less throughout the training process.
arXiv Detail & Related papers (2022-05-22T07:43:50Z) - Quantifying Memorization Across Neural Language Models [61.58529162310382]
Large language models (LMs) have been shown to memorize parts of their training data, and when prompted appropriately, they will emit the memorized data verbatim.
This is undesirable because memorization violates privacy (exposing user data), degrades utility (repeated easy-to-memorize text is often low quality), and hurts fairness (some texts are memorized over others).
We describe three log-linear relationships that quantify the degree to which LMs emit memorized training data.
arXiv Detail & Related papers (2022-02-15T18:48:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.
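For the controlled setup described in the "Demystifying Verbatim Memorization" entry above, the sketch referenced there is given below. It is not the authors' framework: the continued pre-training step that injects a sequence into a Pythia checkpoint is omitted, and the code only shows one plausible way to test whether an injected sequence is reproduced verbatim under greedy decoding. The model name, prefix length, and canary string are illustrative assumptions.
```python
# Minimal sketch (not the authors' framework): test whether a model reproduces an
# injected sequence verbatim when prompted with its prefix, under greedy decoding.
# The continued pre-training that injects the sequence into a Pythia checkpoint is
# omitted; model name, prefix length, and canary string are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def is_memorized_verbatim(model, tokenizer, sequence: str, prefix_tokens: int = 32) -> bool:
    """True if greedy decoding from the sequence's prefix reproduces the remainder."""
    ids = tokenizer(sequence, return_tensors="pt").input_ids
    # Assumes the sequence is longer than prefix_tokens tokens.
    prefix, target = ids[:, :prefix_tokens], ids[:, prefix_tokens:]
    with torch.no_grad():
        out = model.generate(
            prefix,
            max_new_tokens=target.shape[1],
            do_sample=False,                      # greedy decoding
            pad_token_id=tokenizer.eos_token_id,
        )
    continuation = out[:, prefix_tokens:]         # generate() returns prompt + continuation
    return torch.equal(continuation, target)      # False if shapes or tokens differ

if __name__ == "__main__":
    name = "EleutherAI/pythia-160m"               # any Pythia checkpoint
    tok = AutoTokenizer.from_pretrained(name)
    lm = AutoModelForCausalLM.from_pretrained(name).eval()
    canary = "The quick brown fox jumps over the lazy dog. " * 8
    print(is_memorized_verbatim(lm, tok, canary))
```
In the controlled setting, such a check would be run after continuing pre-training with the injected sequence repeated varying numbers of times, which is how the dependence on repetition count and on checkpoint quality could be probed.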