Unveiling Memorization in Code Models
- URL: http://arxiv.org/abs/2308.09932v2
- Date: Fri, 12 Jan 2024 02:39:25 GMT
- Title: Unveiling Memorization in Code Models
- Authors: Zhou Yang, Zhipeng Zhao, Chenyu Wang, Jieke Shi, Dongsun Kim, DongGyun
Han, David Lo
- Abstract summary: A code model can memorize and reproduce source code verbatim, which potentially contains vulnerabilities, sensitive information, or code with strict licenses.
This paper investigates to what extent code models memorize their training data.
We build a taxonomy of memorized contents with 3 categories and 14 subcategories.
- Score: 13.867618700182486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The availability of large-scale datasets, advanced architectures, and
powerful computational resources has led to effective code models that
automate diverse software engineering activities. The datasets usually consist
of billions of lines of code from both open-source and private repositories. A
code model can memorize and reproduce source code verbatim, and the reproduced
code may contain vulnerabilities, sensitive information, or code under strict
licenses, raising security and privacy concerns. This paper investigates an
important problem: to what extent do code models memorize their training data?
We conduct an empirical study to explore memorization in large pre-trained code
models. Our study highlights that simply extracting 20,000 outputs (each having
512 tokens) from a code model can produce over 40,125 code snippets that are
memorized from the training data. To provide a better understanding, we build a
taxonomy of memorized contents with 3 categories and 14 subcategories. The
results show that the prompts sent to the code models affect the distribution
of memorized contents. We identify several key factors of memorization.
Specifically, given the same architecture, larger models suffer more from
memorization problems. A code model produces more memorization when it is
allowed to generate longer outputs. We also find a strong positive correlation
between how often an output occurs in the training data and how often it
appears in the generated outputs, which indicates that removing duplicates from
the training data is a potential way to reduce memorization. We then identify
metrics that accurately infer whether an output contains memorized content, and
we make suggestions for dealing with memorization.
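To make the extraction check described above concrete, here is a minimal sketch in Python; it is not necessarily the authors' pipeline, but one common way to flag memorization: a generated sample counts as memorized if it shares a sufficiently long contiguous token window (a k-gram) with the training corpus. The tokenization, the window size `k`, and the toy data are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's exact pipeline): flag a
# generated sample as containing memorization when it shares a contiguous
# k-token window with the training corpus.
from typing import Iterable, List, Set, Tuple


def kgrams(tokens: List[str], k: int) -> Set[Tuple[str, ...]]:
    """All contiguous k-token windows of a token sequence."""
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}


def build_corpus_index(corpus: Iterable[List[str]], k: int) -> Set[Tuple[str, ...]]:
    """Index every k-gram that appears anywhere in the training corpus."""
    index: Set[Tuple[str, ...]] = set()
    for document in corpus:
        index |= kgrams(document, k)
    return index


def memorized_samples(samples: List[List[str]],
                      corpus_index: Set[Tuple[str, ...]],
                      k: int) -> List[int]:
    """Indices of generated samples sharing at least one k-gram with the corpus."""
    return [i for i, sample in enumerate(samples) if kgrams(sample, k) & corpus_index]


if __name__ == "__main__":
    # Toy, whitespace-tokenized "code"; a real study would use the model's own
    # tokenizer, index the full corpus, and sample many outputs (e.g. 20,000
    # outputs of 512 tokens each, as in the abstract above).
    training_corpus = [
        "def add ( a , b ) : return a + b".split(),
        "API_KEY = 'abc123' # do not commit".split(),
    ]
    generations = [
        "def add ( a , b ) : return a + b".split(),     # verbatim -> flagged
        "for i in range ( 10 ) : print ( i )".split(),  # novel -> not flagged
    ]
    index = build_corpus_index(training_corpus, k=6)
    print(memorized_samples(generations, index, k=6))  # -> [0]
```

In practice, hashing the k-grams keeps the corpus index small, and the choice of k trades false positives on common boilerplate against missed near-verbatim copies.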
Related papers
- Rethinking LLM Memorization through the Lens of Adversarial Compression [93.13830893086681]
Large language models (LLMs) trained on web-scale datasets raise substantial concerns regarding permissible data usage.
One major question is whether these models "memorize" all their training data or whether they integrate many data sources in a way more akin to how a human learns and synthesizes information.
We propose the Adversarial Compression Ratio (ACR) as a metric for assessing memorization in LLMs.
arXiv Detail & Related papers (2024-04-23T15:49:37Z) - Traces of Memorisation in Large Language Models for Code [16.125924759649106]
Large language models for code are commonly trained on large unsanitised corpora of source code scraped from the internet.
We compare their rate of memorisation with that of large language models trained on natural language.
We find that large language models for code are vulnerable to data extraction attacks, like their natural language counterparts.
arXiv Detail & Related papers (2023-12-18T19:12:58Z) - SoK: Memorization in General-Purpose Large Language Models [25.448127387943053]
Large Language Models (LLMs) are advancing at a remarkable pace, with myriad applications under development.
LLMs can memorize short secrets in the training data, but can also memorize concepts like facts or writing styles that can be expressed in text in many different ways.
We propose a taxonomy for memorization in LLMs that covers verbatim text, facts, ideas and algorithms, writing styles, distributional properties, and alignment goals.
arXiv Detail & Related papers (2023-10-24T14:25:53Z) - Exploring Memorization in Fine-tuned Language Models [53.52403444655213]
We conduct the first comprehensive analysis to explore language models' memorization during fine-tuning across tasks.
Our studies with open-source and our own fine-tuned LMs across various tasks indicate that memorization differs strongly across fine-tuning tasks.
We provide an intuitive explanation of this task disparity via sparse coding theory and unveil a strong correlation between memorization and attention score distribution.
arXiv Detail & Related papers (2023-10-10T15:41:26Z) - Quantifying and Analyzing Entity-level Memorization in Large Language
Models [4.59914731734176]
Large language models (LLMs) have been proven capable of memorizing their training data.
Privacy risks arising from memorization have attracted increasing attention.
We propose a fine-grained, entity-level definition to quantify memorization with conditions and metrics closer to real-world scenarios.
arXiv Detail & Related papers (2023-08-30T03:06:47Z) - Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z) - Memorization Without Overfitting: Analyzing the Training Dynamics of
Large Language Models [64.22311189896888]
We study exact memorization in causal and masked language modeling, across model sizes and throughout the training process.
Surprisingly, we show that larger models can memorize a larger portion of the data before over-fitting and tend to forget less throughout the training process.
arXiv Detail & Related papers (2022-05-22T07:43:50Z) - Quantifying Memorization Across Neural Language Models [61.58529162310382]
Large language models (LMs) have been shown to memorize parts of their training data, and when prompted appropriately, they will emit the memorized data verbatim.
This is undesirable because memorization violates privacy (exposing user data), degrades utility (repeated easy-to-memorize text is often low quality), and hurts fairness (some texts are memorized over others).
We describe three log-linear relationships that quantify the degree to which LMs emit memorized training data.
arXiv Detail & Related papers (2022-02-15T18:48:31Z) - Counterfactual Memorization in Neural Language Models [91.8747020391287]
Modern neural language models that are widely used in various NLP tasks risk memorizing sensitive information from their training data.
An open question in previous studies of language model memorization is how to filter out "common" memorization.
We formulate a notion of counterfactual memorization which characterizes how a model's predictions change if a particular document is omitted during training.
arXiv Detail & Related papers (2021-12-24T04:20:57Z)
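To make the counterfactual notion in the last entry concrete, one common formalization (the notation here is an assumption, not taken from the abstract above) measures how much an example's presence in the training set changes the trained model's expected performance on that example:

$$
\mathrm{mem}(x) \;=\; \mathbb{E}_{S \ni x}\big[M(f_S, x)\big] \;-\; \mathbb{E}_{S \not\ni x}\big[M(f_S, x)\big]
$$

where $S$ ranges over random training subsets, $f_S$ is a model trained on $S$, and $M(f_S, x)$ is a performance measure (e.g., per-token accuracy) on $x$. A large gap indicates that the model predicts $x$ well only when $x$ itself was seen during training, which filters out the "common" memorization mentioned above.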