An Empirical Study of Memorization in NLP
- URL: http://arxiv.org/abs/2203.12171v1
- Date: Wed, 23 Mar 2022 03:27:56 GMT
- Title: An Empirical Study of Memorization in NLP
- Authors: Xiaosen Zheng and Jing Jiang
- Abstract summary: We use three different NLP tasks to check if the long-tail theory holds.
Experiments demonstrate that top-ranked memorized training instances are likely atypical.
We develop an attribution method to better understand why a training instance is memorized.
- Score: 8.293936347234126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A recent study by Feldman (2020) proposed a long-tail theory to explain the
memorization behavior of deep learning models. However, memorization has not
been empirically verified in the context of NLP, a gap addressed by this work.
In this paper, we use three different NLP tasks to check if the long-tail
theory holds. Our experiments demonstrate that top-ranked memorized training
instances are likely atypical, and that removing the top-memorized training
instances leads to a larger drop in test accuracy than removing training
instances at random. Furthermore, we develop an attribution method to
better understand why a training instance is memorized. We empirically show
that our memorization attribution method is faithful, and we report the interesting
finding that the top-memorized parts of a training instance tend to be features
negatively correlated with the class label.
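For reference, the ranking of "top-memorized" training instances discussed above is based on the leave-one-out memorization score defined by Feldman (2020), where A is the learning algorithm, S the training set, and (x_i, y_i) the i-th training example:

\mathrm{mem}(A, S, i) = \Pr_{h \sim A(S)}\big[h(x_i) = y_i\big] - \Pr_{h \sim A(S \setminus \{i\})}\big[h(x_i) = y_i\big]

Intuitively, an instance is memorized when models trained with it predict its label far more often than models trained without it.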
Related papers
- Predicting and analyzing memorization within fine-tuned Large Language Models [0.0]
Large Language Models memorize a significant proportion of their training data, which poses a serious privacy threat when that data is disclosed at inference time.
We propose a new approach based on sliced mutual information to detect memorized samples a priori.
We obtain strong empirical results, paving the way for systematic inspection and protection of these vulnerable samples before memorization happens.
arXiv Detail & Related papers (2024-09-27T15:53:55Z)
- Causal Estimation of Memorisation Profiles [58.20086589761273]
Understanding memorisation in language models has practical and societal implications.
Memorisation is the causal effect of training with an instance on the model's ability to predict that instance.
This paper proposes a new, principled, and efficient method to estimate memorisation based on the difference-in-differences design from econometrics.
arXiv Detail & Related papers (2024-06-06T17:59:09Z)
- Exploring Memorization in Fine-tuned Language Models [53.52403444655213]
We conduct the first comprehensive analysis to explore language models' memorization during fine-tuning across tasks.
Our studies with open-source LMs and our own fine-tuned LMs across various tasks indicate that the degree of memorization differs markedly across fine-tuning tasks.
We provide an intuitive explanation of this task disparity via sparse coding theory and unveil a strong correlation between memorization and attention score distribution.
arXiv Detail & Related papers (2023-10-10T15:41:26Z)
- Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z)
- Quantifying Memorization Across Neural Language Models [61.58529162310382]
Large language models (LMs) have been shown to memorize parts of their training data, and when prompted appropriately, they will emit the memorized data verbatim.
This is undesirable because memorization violates privacy (exposing user data), degrades utility (repeated easy-to-memorize text is often low quality), and hurts fairness (some texts are memorized over others).
We describe three log-linear relationships that quantify the degree to which LMs emit memorized training data.
arXiv Detail & Related papers (2022-02-15T18:48:31Z)
- Counterfactual Memorization in Neural Language Models [91.8747020391287]
Modern neural language models that are widely used in various NLP tasks risk memorizing sensitive information from their training data.
An open question in previous studies of language model memorization is how to filter out "common" memorization.
We formulate a notion of counterfactual memorization which characterizes how a model's predictions change if a particular document is omitted during training.
arXiv Detail & Related papers (2021-12-24T04:20:57Z)
- Exploring Memorization in Adversarial Training [58.38336773082818]
We investigate the memorization effect in adversarial training (AT) for promoting a deeper understanding of capacity, convergence, generalization, and especially robust overfitting.
We propose a new mitigation algorithm motivated by detailed memorization analyses.
arXiv Detail & Related papers (2021-06-03T05:39:57Z)
- Memory-Associated Differential Learning [10.332918082271153]
We propose a novel learning paradigm called Memory-Associated Differential (MAD) Learning.
We first introduce an additional component called Memory that memorizes all the training data. We then learn the differences of labels and the associations of features by combining a differential equation with sampling methods.
In the evaluation phase, we predict unknown labels by inferring from the memorized facts together with the learnt differences and associations in a geometrically meaningful manner.
arXiv Detail & Related papers (2021-02-10T03:48:12Z)
- What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation [37.5845376458136]
Deep learning algorithms are well-known to have a propensity for fitting the training data very well.
Such fitting requires memorization of training data labels.
We propose a theoretical explanation for this phenomenon based on a combination of two insights (see the estimator sketch after this list).
arXiv Detail & Related papers (2020-08-09T10:12:28Z)
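The last two entries (counterfactual memorization and Feldman & Zhang's influence estimation), like the main paper above, do not retrain once per example; they approximate the in-vs-out gap in the memorization score by retraining on many random subsets of the training data. Below is a minimal sketch of that subset-sampling idea, not code from any of the papers: the function name, the hyperparameters n_models and subset_frac, and the logistic-regression stand-in learner are illustrative assumptions only.

# Sketch of a subset-sampling memorization estimator (illustrative, not the papers' code).
# Assumes X is a 2-D numpy feature array, y a 1-D label array, and that each random
# subset still contains every class (required by the stand-in learner).
import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in for the actual NLP models

def memorization_scores(X, y, n_models=50, subset_frac=0.7, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    correct_in = np.zeros(n)   # correct predictions when the example was in the subset
    count_in = np.zeros(n)
    correct_out = np.zeros(n)  # correct predictions when the example was held out
    count_out = np.zeros(n)
    for _ in range(n_models):
        idx = rng.choice(n, size=int(subset_frac * n), replace=False)
        in_mask = np.zeros(n, dtype=bool)
        in_mask[idx] = True
        model = LogisticRegression(max_iter=1000).fit(X[in_mask], y[in_mask])
        correct = (model.predict(X) == y).astype(float)
        correct_in += correct * in_mask
        count_in += in_mask
        correct_out += correct * ~in_mask
        count_out += ~in_mask
    # Estimated memorization: P(correct | included) - P(correct | excluded)
    return correct_in / np.maximum(count_in, 1) - correct_out / np.maximum(count_out, 1)

Ranking the training set by these scores and removing the top-ranked instances (versus a random subset of the same size) is the kind of removal experiment described in the abstract above.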