Gotcha! This Model Uses My Code! Evaluating Membership Leakage Risks in Code Models
- URL: http://arxiv.org/abs/2310.01166v2
- Date: Tue, 15 Oct 2024 12:58:00 GMT
- Title: Gotcha! This Model Uses My Code! Evaluating Membership Leakage Risks in Code Models
- Authors: Zhou Yang, Zhipeng Zhao, Chenyu Wang, Jieke Shi, Dongsun Kim, Donggyun Han, David Lo
- Abstract summary: We propose Gotcha, a novel membership inference attack method specifically for code models.
We show that Gotcha can predict the data membership with a high true positive rate of 0.95 and a low false positive rate of 0.10.
This study calls for more attention to understanding the privacy of code models.
- Score: 12.214474083372389
- Abstract: Given the large-scale source code datasets available in open-source projects and advanced large language models, recent code models have been proposed to address a series of critical software engineering tasks, such as program repair and code completion. The training data of code models come from various sources: not only publicly available source code, e.g., open-source projects on GitHub, but also private data such as confidential source code from companies, which may contain sensitive information (for example, SSH keys and personal information). As a result, the use of these code models may raise new privacy concerns. In this paper, we focus on a critical yet not well-explored question about using code models: what is the risk of membership information leakage in code models? Membership information leakage refers to the risk that an attacker can infer whether a given data point is included in (i.e., a member of) the training data. To answer this question, we propose Gotcha, a novel membership inference attack method specifically for code models, and use it to investigate the membership leakage risk of code models. Our results reveal a worrying fact: the risk of membership leakage is high. Although previous attack methods are close to random guessing, Gotcha can predict data membership with a high true positive rate of 0.95 and a low false positive rate of 0.10. We also show that the attacker's knowledge of the victim model (e.g., the model architecture and the pre-training data) impacts the success rate of attacks. Further analysis demonstrates that changing the decoding strategy can mitigate the risk of membership leakage. This study calls for more attention to understanding the privacy of code models and developing more effective countermeasures against such attacks.
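To make the threat model concrete, below is a minimal sketch of a simple loss-based membership inference baseline against a code model. This is not the Gotcha method itself; the Hugging Face model name, maximum sequence length, and decision threshold are illustrative assumptions, and a real attack would calibrate the threshold on known non-member code.

```python
# Minimal sketch: loss-based membership inference against a causal code model.
# The target model name and the threshold below are illustrative assumptions,
# not the setup used in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Salesforce/codegen-350M-mono"  # stand-in for the victim code model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def average_loss(code: str) -> float:
    """Average per-token negative log-likelihood the model assigns to `code`."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return outputs.loss.item()

def is_suspected_member(code: str, threshold: float = 1.0) -> bool:
    """Flag `code` as a suspected training member if the model's loss on it is
    unusually low; the threshold would be calibrated on non-member samples."""
    return average_loss(code) < threshold

print(is_suspected_member("def add(a, b):\n    return a + b\n"))
```

Gotcha's reported results (a 0.95 true positive rate at a 0.10 false positive rate) come from a stronger attack than this baseline, and the abstract notes that the attacker's knowledge of the victim's architecture and pre-training data, as well as the victim's decoding strategy, affects how well such attacks work.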
Related papers
- A Disguised Wolf Is More Harmful Than a Toothless Tiger: Adaptive Malicious Code Injection Backdoor Attack Leveraging User Behavior as Triggers [15.339528712960021]
We first present a game-theoretic model that focuses on security issues in code generation scenarios.
This framework outlines possible scenarios and patterns in which attackers could spread malicious code models to create security threats.
We also point out, for the first time, that attackers can use backdoor attacks to dynamically adjust the timing of malicious code injection.
arXiv Detail & Related papers (2024-08-19T18:18:04Z)
- Does Your Neural Code Completion Model Use My Code? A Membership Inference Approach [66.51005288743153]
We investigate the legal and ethical issues of current neural code completion models.
We tailor a membership inference approach (termed CodeMI) that was originally crafted for classification tasks.
We evaluate the effectiveness of this adapted approach across a diverse array of neural code completion models (a sketch of this adaptation pattern appears after this list).
arXiv Detail & Related papers (2024-04-22T15:54:53Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code [0.9790236766474201]
We identify a novel data poisoning attack that results in the generation of vulnerable code.
We then devise an extensive evaluation of how these attacks impact state-of-the-art models for code generation.
arXiv Detail & Related papers (2024-03-11T12:47:04Z)
- Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks [9.386731514208149]
This work investigates the security of AI code generators by devising a targeted data poisoning strategy.
We poison the training data by injecting increasing amounts of code containing security vulnerabilities.
Our study shows that AI code generators are vulnerable to even a small amount of poison.
arXiv Detail & Related papers (2023-08-04T15:23:30Z)
- PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels [59.66777287810985]
We introduce information-theoretic scores for privacy and utility, which quantify the average performance of an unfaithful user.
We then theoretically characterize primitives in building families of encoding schemes that motivate the use of random deep neural networks.
arXiv Detail & Related papers (2023-03-31T18:03:53Z)
- CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models [58.27254444280376]
Large language models (LLMs) for automatic code generation have achieved breakthroughs in several programming tasks.
Training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities.
This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure.
arXiv Detail & Related papers (2023-02-08T11:54:07Z)
- TrojanPuzzle: Covertly Poisoning Code-Suggestion Models [27.418320728203387]
We show two attacks that can bypass static analysis by planting malicious poison data in out-of-context regions such as docstrings.
Our most novel attack, TROJANPUZZLE, goes one step further in generating less suspicious poison data by never explicitly including certain (suspicious) parts of the payload in the poison data.
arXiv Detail & Related papers (2023-01-06T00:37:25Z)
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification method (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
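Following up on the CodeMI entry above: adapting a classification-style membership inference attack to a code completion model usually means summarizing the model's per-token losses into features and training a binary member/non-member classifier, often with the help of a shadow model whose training set the attacker controls. The sketch below illustrates that general recipe under stated assumptions; it is not the CodeMI design, and the feature set, logistic-regression attacker, and synthetic stand-in losses are all placeholders.

```python
# Illustrative sketch: turning per-token losses into membership inference features
# and training an attack classifier on shadow-model data. The synthetic arrays
# below stand in for losses that a real attacker would compute with a shadow model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def membership_features(per_token_nll: np.ndarray) -> np.ndarray:
    """Summarize one sample's per-token negative log-likelihoods into attack features."""
    return np.array([
        per_token_nll.mean(),              # members tend to have lower average loss
        per_token_nll.min(),
        per_token_nll.max(),
        np.percentile(per_token_nll, 25),
    ])

# Synthetic shadow-model losses: 200 member and 200 non-member samples, 64 tokens each.
rng = np.random.default_rng(0)
member_nll = rng.gamma(shape=2.0, scale=0.4, size=(200, 64))
nonmember_nll = rng.gamma(shape=2.0, scale=0.6, size=(200, 64))

X = np.vstack([
    np.apply_along_axis(membership_features, 1, member_nll),
    np.apply_along_axis(membership_features, 1, nonmember_nll),
])
y = np.concatenate([np.ones(len(member_nll)), np.zeros(len(nonmember_nll))])

# Train the attack classifier on shadow-model features; it would then be applied to
# features computed from the victim model's losses on the code samples of interest.
attacker = LogisticRegression(max_iter=1000).fit(X, y)
print("attack accuracy on shadow data:", attacker.score(X, y))
```

In a real attack, the synthetic arrays would be replaced by per-token losses computed with a shadow model trained on data of known membership, and the fitted classifier would be queried with features derived from the victim model.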