Extending Memorization Dynamics in Pythia Models from Instance-Level Insights
- URL: http://arxiv.org/abs/2506.12321v1
- Date: Sat, 14 Jun 2025 03:02:42 GMT
- Title: Extending Memorization Dynamics in Pythia Models from Instance-Level Insights
- Authors: Jie Zhang, Qinghua Zhao, Lei Li, Chi-ho Lin
- Abstract summary: This paper presents a detailed analysis of memorization in the Pythia model family across varying scales and training steps. Using granular metrics, we examine how model architecture, data characteristics, and perturbations influence memorization patterns.
- Score: 8.476099189609565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models have demonstrated a remarkable ability for verbatim memorization. While numerous works have explored factors influencing model memorization, the dynamic evolution of memorization patterns remains underexplored. This paper presents a detailed analysis of memorization in the Pythia model family across varying scales and training steps under prefix perturbations. Using granular metrics, we examine how model architecture, data characteristics, and perturbations influence these patterns. Our findings reveal that: (1) as model scale increases, memorization expands incrementally while efficiency decreases rapidly; (2) as model scale increases, the rate of new memorization acquisition decreases while forgetting of previously memorized content increases; (3) data characteristics (token frequency, repetition count, and uncertainty) differentially affect memorized versus non-memorized samples; and (4) prefix perturbations reduce memorization and increase generation uncertainty proportionally to perturbation strength, with low-redundancy samples showing higher vulnerability and larger models offering no additional robustness. These findings advance our understanding of memorization mechanisms, with direct implications for training optimization, privacy safeguards, and architectural improvements.
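The measurement setup described in the abstract (greedy continuation from a training prefix, with and without prefix perturbation) can be illustrated with a short sketch. The snippet below is a minimal approximation, not the paper's exact metric: it loads a public Pythia checkpoint via Hugging Face `transformers`, greedily continues a prefix, and reports the fraction of the true continuation reproduced verbatim. The `prefix_len`, `target_len`, and random-token `perturb_rate` choices are illustrative assumptions.

```python
# Minimal sketch of a verbatim-memorization probe under prefix perturbation.
# Assumes Hugging Face `transformers` and a public Pythia checkpoint; this is
# not the paper's exact metric, only the general prefix -> greedy-continuation
# comparison it builds on.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/pythia-160m"  # any Pythia scale/revision; chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def memorization_score(token_ids, prefix_len=32, target_len=32, perturb_rate=0.0):
    """Fraction of the true continuation reproduced greedily from the prefix.

    perturb_rate replaces that fraction of prefix tokens with random vocabulary
    tokens, mimicking (not replicating) the paper's prefix perturbations.
    """
    prefix = list(token_ids[:prefix_len])
    target = token_ids[prefix_len:prefix_len + target_len]
    if perturb_rate > 0:
        n_swap = int(perturb_rate * prefix_len)
        for i in random.sample(range(prefix_len), n_swap):
            prefix[i] = random.randrange(tokenizer.vocab_size)
    input_ids = torch.tensor([prefix])
    with torch.no_grad():
        out = model.generate(input_ids, max_new_tokens=target_len, do_sample=False)
    generated = out[0, prefix_len:].tolist()
    matches = sum(int(g == t) for g, t in zip(generated, target))
    return matches / max(len(target), 1)

# Usage: `text` should be a genuine training excerpt (e.g. from the Pile),
# long enough to yield at least prefix_len + target_len tokens.
text = "..."  # placeholder
ids = tokenizer(text)["input_ids"]
print(memorization_score(ids), memorization_score(ids, perturb_rate=0.25))
```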
Related papers
- Memorization in Fine-Tuned Large Language Models [0.0]
This study investigates the mechanisms and factors influencing memorization in fine-tuned large language models (LLMs). We examine how different aspects of the fine-tuning process affect a model's propensity to memorize training data, using the PHEE dataset of pharmacovigilance events.
arXiv Detail & Related papers (2025-07-28T17:22:10Z) - The emergence of sparse attention: impact of data distribution and benefits of repetition [14.652502263025882]
We study the emergence over training of sparse attention, a critical and frequently observed attention pattern in Transformers. By combining theoretical analysis of a toy model with empirical observations on small Transformers trained on a linear regression variant, we uncover the mechanics of sparse attention emergence. Our findings provide a simple, theoretically grounded framework for understanding how data distributions and model design influence the learning dynamics behind one form of emergence.
arXiv Detail & Related papers (2025-05-23T13:14:02Z) - Bigger Isn't Always Memorizing: Early Stopping Overparameterized Diffusion Models [51.03144354630136]
Generalization in natural data domains is progressively achieved during training before the onset of memorization. Generalization vs. memorization is then best understood as a competition between time scales. We show that this phenomenology is recovered in diffusion models learning a simple probabilistic context-free grammar with random rules.
arXiv Detail & Related papers (2025-05-22T17:40:08Z) - Stable Hadamard Memory: Revitalizing Memory-Augmented Agents for Reinforcement Learning [64.93848182403116]
Current deep-learning memory models struggle in partially observable, long-horizon reinforcement learning environments.
We introduce the Stable Hadamard Memory, a novel memory model for reinforcement learning agents.
Our approach significantly outperforms state-of-the-art memory-based methods on challenging partially observable benchmarks.
arXiv Detail & Related papers (2024-10-14T03:50:17Z) - Uncovering Latent Memories: Assessing Data Leakage and Memorization Patterns in Frontier AI Models [7.50189359952191]
We show that sequences which are not memorized after the first encounter can be "uncovered" throughout the course of training.
The presence of latent memorization presents a challenge for data privacy as memorized sequences may be hidden at the final checkpoint of the model.
We develop a diagnostic test relying on the cross-entropy loss to uncover latent memorized sequences with high accuracy (see the loss-probe sketch after this list).
arXiv Detail & Related papers (2024-06-20T17:56:17Z) - Causal Estimation of Memorisation Profiles [58.20086589761273]
Understanding memorisation in language models has practical and societal implications.
Memorisation is the causal effect of training with an instance on the model's ability to predict that instance.
This paper proposes a new, principled, and efficient method to estimate memorisation based on the difference-in-differences design from econometrics.
arXiv Detail & Related papers (2024-06-06T17:59:09Z) - What do larger image classifiers memorise? [64.01325988398838]
We show that training examples exhibit an unexpectedly diverse set of memorisation trajectories across model sizes.
We find that knowledge distillation, an effective and popular model compression technique, tends to inhibit memorisation, while also improving generalisation.
arXiv Detail & Related papers (2023-10-09T01:52:07Z) - On Memorization in Diffusion Models [44.031805633114985]
We show that memorization behaviors tend to occur on smaller-sized datasets. We quantify the impact of the influential factors on these memorization behaviors in terms of effective model memorization (EMM). Our study holds practical significance for diffusion model users and offers clues to theoretical research in deep generative models.
arXiv Detail & Related papers (2023-10-04T09:04:20Z) - Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models [64.22311189896888]
We study exact memorization in causal and masked language modeling, across model sizes and throughout the training process.
Surprisingly, we show that larger models can memorize a larger portion of the data before over-fitting and tend to forget less throughout the training process.
arXiv Detail & Related papers (2022-05-22T07:43:50Z) - Quantifying Memorization Across Neural Language Models [61.58529162310382]
Large language models (LMs) have been shown to memorize parts of their training data, and when prompted appropriately, they will emit the memorized data verbatim.
This is undesirable because memorization violates privacy (exposing user data), degrades utility (repeated easy-to-memorize text is often low quality), and hurts fairness (some texts are memorized over others).
We describe three log-linear relationships that quantify the degree to which LMs emit memorized training data.
arXiv Detail & Related papers (2022-02-15T18:48:31Z) - On Memorization in Probabilistic Deep Generative Models [4.987581730476023]
Recent advances in deep generative models have led to impressive results in a variety of application domains.
Motivated by the possibility that deep learning models might memorize part of the input data, there have been increased efforts to understand how memorization can occur.
arXiv Detail & Related papers (2021-06-06T19:33:04Z)
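The cross-entropy diagnostic mentioned in "Uncovering Latent Memories" above can be sketched as a simple per-sequence loss probe. The snippet below is an illustrative approximation, not the authors' exact test: it scores candidate sequences by their mean next-token cross entropy under a Pythia checkpoint and flags unusually low-loss sequences as possibly memorized; the model choice and the `LOW_LOSS_THRESHOLD` cutoff are assumptions for illustration.

```python
# Illustrative loss-based probe for possibly memorized sequences.
# Not the diagnostic from the cited paper; model and threshold are assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/pythia-160m"  # assumed checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def sequence_cross_entropy(text: str) -> float:
    """Mean next-token cross entropy of `text` under the model (nats/token)."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(ids).logits
    # Shift so the logit at position t is scored against the token at t+1.
    return F.cross_entropy(logits[0, :-1], ids[0, 1:]).item()

# Flag unusually low-loss candidates as possibly (latently) memorized.
candidates = ["first candidate training sequence ...",
              "second candidate training sequence ..."]  # placeholders
LOW_LOSS_THRESHOLD = 1.0  # assumed cutoff; tune per model and dataset
suspects = [s for s in candidates if sequence_cross_entropy(s) < LOW_LOSS_THRESHOLD]
```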
This list is automatically generated from the titles and abstracts of the papers on this site.