Reassessing Code Authorship Attribution in the Era of Language Models
- URL: http://arxiv.org/abs/2506.17120v1
- Date: Fri, 20 Jun 2025 16:19:30 GMT
- Title: Reassessing Code Authorship Attribution in the Era of Language Models
- Authors: Atish Kumar Dipongkor, Ziyu Yao, Kevin Moran
- Abstract summary: This study analyzes coding styles to identify the authors of code samples. Code Authorship Attribution (CAA) is crucial in cybersecurity and software forensics for tasks such as detecting plagiarism and supporting criminal prosecutions.
- Score: 12.590406993068523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The study of Code Stylometry, and in particular Code Authorship Attribution (CAA), aims to analyze coding styles to identify the authors of code samples. CAA is crucial in cybersecurity and software forensics for tasks such as detecting plagiarism and supporting criminal prosecutions. However, CAA is a complex and error-prone task, due to the need for recognizing nuanced relationships between coding patterns. This challenge is compounded in large software systems with numerous authors due to the subtle variability of patterns that signify the coding style of one author among many. Given the challenges related to this task, researchers have proposed and studied automated approaches that rely upon classical Machine Learning and Deep Learning techniques. However, such techniques have historically relied upon hand-crafted features, and due to the often intricate interaction of different features (e.g., formatting), have key limitations in properly characterizing authorship and are sensitive to adversarial code perturbations. Recently, transformer-based Language Models (LMs) have shown remarkable efficacy across a range of software engineering tasks, as well as in authorship attribution of natural language in the NLP domain. However, their effectiveness in CAA is not well understood. As such, we conduct the first extensive empirical study applying two larger state-of-the-art code LMs and five smaller code LMs to the task of CAA on six diverse datasets that encompass 12k code snippets written by 463 developers. Furthermore, we perform an in-depth analysis of our studied models' performance on CAA using established machine learning interpretability techniques. The results of our analysis illustrate important findings that illuminate the behavior of LMs in understanding stylometric code patterns during the task of CAA, and point towards important directions for future work.
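To make the setup concrete, the following is a minimal sketch of how a code LM can be framed as a multi-class author classifier for CAA. The checkpoint name (microsoft/codebert-base) and the three-author label set are illustrative assumptions, not the study's actual models, datasets, or training procedure, and the classification head here is untrained.

```python
# Minimal sketch: CAA framed as multi-class sequence classification with a code LM.
# The model checkpoint and author labels are illustrative assumptions only.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

authors = ["dev_a", "dev_b", "dev_c"]              # hypothetical author labels
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=len(authors)
)

snippet = "def add(a, b):\n    return a + b\n"
inputs = tokenizer(snippet, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits                # one score per candidate author
print(authors[int(logits.argmax(dim=-1))])         # prediction from an untrained head
```

In practice such a model would be fine-tuned on labeled (snippet, author) pairs before its predictions are meaningful.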
Related papers
- How Does LLM Reasoning Work for Code? A Survey and a Call to Action [15.390359698398283]
Large language models (LLMs) have extended into the domain of code, facilitating complex tasks such as code generation, translation, summarization, and repair. Their utility for real-world, in-the-wild deployment has only recently been studied, particularly on software engineering (SWE) tasks such as GitHub issue resolution. In this study, we examine the code reasoning techniques that underlie the ability to perform such tasks, and the paradigms used to drive their performance.
arXiv Detail & Related papers (2025-06-16T19:18:09Z) - Is Compression Really Linear with Code Intelligence? [60.123628177110206]
Format Annealing is a lightweight, transparent training methodology designed to assess the intrinsic capabilities of pre-trained models equitably. Our empirical results reveal a fundamental logarithmic relationship between measured code intelligence and bits-per-character (BPC). Our work provides a more nuanced understanding of compression's role in developing code intelligence and contributes a robust evaluation framework in the code domain.
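For reference, BPC is simply a model's character-level cross-entropy expressed in bits; a minimal sketch with toy numbers (not the paper's measurements):

```python
# Toy BPC computation: convert a summed negative log-likelihood (in nats) to
# bits per character. The numbers below are illustrative, not from the paper.
import math

total_nll_nats = 1234.5   # summed NLL of a corpus under the model, in nats
num_chars = 1000          # number of characters the loss was summed over
bpc = total_nll_nats / num_chars / math.log(2)
print(f"{bpc:.3f} bits per character")
```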
arXiv Detail & Related papers (2025-05-16T16:59:14Z) - The Code Barrier: What LLMs Actually Understand? [7.407441962359689]
This research uses code obfuscation as a structured testing framework to evaluate the semantic understanding capabilities of language models. Findings show a statistically significant performance decline as obfuscation complexity increases. The paper also introduces a new evaluation approach for assessing code comprehension in language models.
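As a rough illustration of the kind of semantics-preserving transformation such obfuscation benchmarks rely on, below is a toy identifier-renaming pass using Python's ast module; the paper's actual obfuscation operators and complexity levels are not reproduced here.

```python
# Toy obfuscator: rename identifiers to opaque names while preserving behavior.
# Naively renames every Name/arg node, so it is only a sketch, not a safe tool.
import ast

class RenameIdentifiers(ast.NodeTransformer):
    def __init__(self):
        self.mapping = {}

    def _fresh(self, name):
        # Reuse the same opaque name for repeated occurrences of an identifier.
        return self.mapping.setdefault(name, f"v{len(self.mapping)}")

    def visit_Name(self, node):
        node.id = self._fresh(node.id)
        return node

    def visit_arg(self, node):
        node.arg = self._fresh(node.arg)
        return node

src = "def area(width, height):\n    return width * height\n"
tree = RenameIdentifiers().visit(ast.parse(src))
print(ast.unparse(tree))   # identifiers replaced, behavior unchanged
```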
arXiv Detail & Related papers (2025-04-14T14:11:26Z) - An Empirical Study on Automatically Detecting AI-Generated Source Code: How Far Are We? [8.0988059417354]
We propose a range of approaches to improve the performance of AI-generated code detection.
Our best model outperforms the state-of-the-art AI-generated code detector (GPTSniffer) and achieves an F1 score of 82.55.
arXiv Detail & Related papers (2024-11-06T22:48:18Z) - SIaM: Self-Improving Code-Assisted Mathematical Reasoning of Large Language Models [54.78329741186446]
We propose a novel paradigm that uses a code-based critic model to guide steps including question-code data construction, quality control, and complementary evaluation.
Experiments across both in-domain and out-of-domain benchmarks in English and Chinese demonstrate the effectiveness of the proposed paradigm.
arXiv Detail & Related papers (2024-08-28T06:33:03Z) - What's Wrong with Your Code Generated by Large Language Models? An Extensive Study [80.18342600996601]
Large language models (LLMs) produce code that is shorter yet more complicated than canonical solutions.
We develop a taxonomy of bugs for incorrect codes that includes three categories and 12 sub-categories, and analyze the root cause for common bug types.
We propose a novel training-free iterative method that introduces self-critique, enabling LLMs to critique and correct their generated code based on bug types and compiler feedback.
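A minimal sketch of such a training-free self-critique loop is shown below, assuming a hypothetical llm_generate() helper that stands in for whatever model client is available; the paper's actual prompts, critique format, and bug-type taxonomy are not reproduced.

```python
# Sketch of a training-free generate -> check -> critique -> regenerate loop.
# llm_generate() is a hypothetical stand-in, not an API from the paper.
from typing import Optional

def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; replace with a real client."""
    raise NotImplementedError

def check_python(code: str) -> Optional[str]:
    """Return a compiler-style error message, or None if the code compiles."""
    try:
        compile(code, "<candidate>", "exec")
        return None
    except SyntaxError as exc:
        return f"SyntaxError: {exc.msg} (line {exc.lineno})"

def self_critique_loop(task: str, max_rounds: int = 3) -> str:
    code = llm_generate(f"Write Python code for: {task}")
    for _ in range(max_rounds):
        error = check_python(code)
        if error is None:
            return code                      # accept the first code that compiles
        # Feed the compiler feedback back to the model and ask for a revision.
        code = llm_generate(
            f"Task: {task}\nYour previous code failed with: {error}\n"
            f"Previous code:\n{code}\nPlease critique and correct it."
        )
    return code
```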
arXiv Detail & Related papers (2024-07-08T17:27:17Z) - AuthAttLyzer-V2: Unveiling Code Authorship Attribution using Enhanced Ensemble Learning Models & Generating Benchmark Dataset [0.0]
Source Code Authorship Attribution (SCAA) is crucial for software classification because it provides insights into the origin and behavior of software.
This paper presents AuthAttLyzer-V2, a new source code feature extractor for SCAA, focusing on lexical, semantic, syntactic, and N-gram features.
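As a rough sketch of the n-gram feature family named above, the toy pipeline below combines character n-gram features with an ensemble classifier; the snippets, labels, and hyperparameters are illustrative and do not reflect AuthAttLyzer-V2's actual feature extractor or benchmark.

```python
# Toy SCAA pipeline: character n-gram features + random forest ensemble.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Hypothetical snippets from two authors with different formatting habits.
snippets = [
    "for (int i = 0; i < n; i++) { sum += a[i]; }",
    "for(int i=0;i<n;++i){sum+=a[i];}",
    "if (x == null) { return defaultValue; }",
    "if(x==null){return defaultValue;}",
]
labels = ["author_1", "author_2", "author_1", "author_2"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # character n-grams
    RandomForestClassifier(n_estimators=100, random_state=0),
)
clf.fit(snippets, labels)
print(clf.predict(["while(i<n){total+=a[i];++i;}"]))
```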
arXiv Detail & Related papers (2024-06-28T13:04:16Z) - CodeIP: A Grammar-Guided Multi-Bit Watermark for Large Language Models of Code [56.019447113206006]
Large Language Models (LLMs) have achieved remarkable progress in code generation. CodeIP is a novel multi-bit watermarking technique that inserts additional information to preserve provenance details. Experiments conducted on a real-world dataset across five programming languages demonstrate the effectiveness of CodeIP.
arXiv Detail & Related papers (2024-04-24T04:25:04Z) - Creating a Trajectory for Code Writing: Algorithmic Reasoning Tasks [0.923607423080658]
This paper describes instruments and the machine learning models used for validating them.
We have used the data collected in an introductory programming course in the penultimate week of the semester.
Preliminary research suggests that ART-type instruments can be combined with specific machine learning models to act as an effective learning trajectory.
arXiv Detail & Related papers (2024-04-03T05:07:01Z) - A Thorough Examination of Decoding Methods in the Era of LLMs [72.65956436513241]
Decoding methods play an indispensable role in converting language models from next-token predictors into practical task solvers.
This paper provides a comprehensive and multifaceted analysis of various decoding methods within the context of large language models.
Our findings reveal that decoding method performance is notably task-dependent and influenced by factors such as alignment, model size, and quantization.
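For illustration, the sketch below contrasts greedy decoding with nucleus sampling through the Hugging Face generate() API; gpt2 is used only as a small, convenient model and is not assumed to be one of the models evaluated in the paper.

```python
# Minimal sketch: greedy decoding vs. nucleus (top-p) sampling with temperature.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
prompt = tok("def fibonacci(n):", return_tensors="pt")

greedy = lm.generate(**prompt, max_new_tokens=32, do_sample=False)
sampled = lm.generate(**prompt, max_new_tokens=32, do_sample=True,
                      top_p=0.95, temperature=0.8)
print(tok.decode(greedy[0]))    # deterministic continuation
print(tok.decode(sampled[0]))   # stochastic continuation
```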
arXiv Detail & Related papers (2024-02-10T11:14:53Z) - Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because code naturally admits graph structures with parsing, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
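As a rough illustration, the sketch below runs one normalized graph-convolution step over a toy "code graph" whose nodes stand for statements; it is a generic GCN-style layer, not the disaggregated code graph representation or architecture proposed in the paper.

```python
# Toy GCN-style message passing over a small code graph (untrained).
import torch

# Toy graph: 4 statement nodes with 8-dimensional initial features.
x = torch.randn(4, 8)
edges = torch.tensor([[0, 1], [1, 2], [1, 3]])           # directed edge list

# Build a symmetric, self-looped adjacency matrix and normalize it.
adj = torch.eye(4)
adj[edges[:, 0], edges[:, 1]] = 1.0
adj[edges[:, 1], edges[:, 0]] = 1.0
deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
adj_norm = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]

# One message-passing layer followed by a graph-level vulnerability score.
w = torch.nn.Linear(8, 16)
h = torch.relu(adj_norm @ w(x))                           # aggregate neighbor features
score = torch.sigmoid(torch.nn.Linear(16, 1)(h.mean(dim=0)))
print(score)                                              # untrained toy output
```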
arXiv Detail & Related papers (2021-09-07T21:24:36Z)