On Developers' Self-Declaration of AI-Generated Code: An Analysis of Practices
- URL: http://arxiv.org/abs/2504.16485v1
- Date: Wed, 23 Apr 2025 07:52:39 GMT
- Title: On Developers' Self-Declaration of AI-Generated Code: An Analysis of Practices
- Authors: Syed Mohammad Kashif, Peng Liang, Amjed Tahir
- Abstract summary: This study aims to understand how developers self-declare AI-generated code. We collected 613 instances of AI-generated code snippets from GitHub. Our research revealed the practices developers follow to self-declare AI-generated code.
- Score: 2.655152359733829
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI code generation tools have gained significant popularity among developers, who use them to assist in software development due to their capability to generate code. Existing studies have mainly explored the quality of AI-generated code, e.g., its correctness and security. In real-world software development, however, a prerequisite is to distinguish AI-generated code from human-written code, which underscores the need for developers to explicitly declare AI-generated code. To this end, this study aims to understand how developers self-declare AI-generated code and why they choose to self-declare or not. We conducted a mixed-methods study consisting of two phases. In the first phase, we mined GitHub repositories and collected 613 instances of AI-generated code snippets. In the second phase, we conducted a follow-up industrial survey, which received 111 valid responses. Our research revealed the practices developers follow to self-declare AI-generated code. Most practitioners (76.6%) always or sometimes self-declare AI-generated code, whereas the remaining practitioners (23.4%) reported that they never do. The reasons for self-declaring AI-generated code include the need to track and monitor the code for future review and debugging, as well as ethical considerations. The reasons for not self-declaring include extensive modifications to the AI-generated code and the perception that self-declaration is an unnecessary activity. Finally, we provided guidelines for practitioners on self-declaring AI-generated code, addressing ethical and code quality concerns.
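For illustration, below is a minimal sketch of what such a self-declaration might look like in practice: an in-code comment that marks a snippet as AI-generated and records the tool used and the human review step. The comment wording, the tool name, and the `median` function are assumptions made for this example; the paper does not prescribe a specific declaration template.

```python
# Hypothetical example of an in-code self-declaration for an AI-generated
# snippet. The declaration format and the function below are illustrative
# assumptions, not a template taken from the paper.

# AI-generated: produced with GitHub Copilot on 2025-04-01,
# reviewed and adapted by a human developer before merging.
def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:  # odd number of elements: take the middle one
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2  # even: average the middle pair


if __name__ == "__main__":
    print(median([3.0, 1.0, 2.0]))  # prints 2.0
```

Comparable declarations could also be placed in commit messages or pull request descriptions rather than in the source itself, depending on a team's conventions.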
Related papers
- From Teacher to Colleague: How Coding Experience Shapes Developer Perceptions of AI Tools [0.0]
AI-assisted development tools promise productivity gains and improved code quality, yet their adoption among developers remains inconsistent.
We analyze survey data from 3380 developers to examine how coding experience relates to AI awareness, adoption, and the roles developers assign to AI in their workflow.
arXiv Detail & Related papers (2025-04-08T08:58:06Z) - Could AI Trace and Explain the Origins of AI-Generated Images and Text? [53.11173194293537]
AI-generated content is increasingly prevalent in the real world. Adversaries might exploit large multimodal models to create images that violate ethical or legal standards. Paper reviewers may misuse large language models to generate reviews without genuine intellectual effort.
arXiv Detail & Related papers (2025-04-05T20:51:54Z) - Do Comments and Expertise Still Matter? An Experiment on Programmers' Adoption of AI-Generated JavaScript Code [8.436321697240682]
The adoption of AI-generated code was gauged by the code similarity between AI-generated solutions and participants' submitted solutions. Our findings revealed that the presence of comments significantly influences programmers' adoption of AI-generated code, regardless of the participants' development expertise.
arXiv Detail & Related papers (2025-03-14T14:42:51Z) - The Shift from Writing to Pruning Software: A Bonsai-Inspired IDE for Reshaping AI Generated Code [11.149764135999437]
The rise of AI-driven coding assistants signals a fundamental shift in how software is built. While AI coding assistants have been integrated into existing Integrated Development Environments, their full potential remains largely untapped. We propose a new approach to IDEs, where AI is allowed to generate code in its true, unconstrained form, free from traditional file structures.
arXiv Detail & Related papers (2025-03-04T17:57:26Z) - Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing [55.2480439325792]
Misclassification can lead to false plagiarism accusations and misleading claims about AI prevalence in online content. We systematically evaluate eleven state-of-the-art AI-text detectors using our AI-Polished-Text Evaluation dataset. Our findings reveal that detectors frequently misclassify even minimally polished text as AI-generated, struggle to differentiate between degrees of AI involvement, and exhibit biases against older and smaller models.
arXiv Detail & Related papers (2025-02-21T18:45:37Z) - An Empirical Study on Automatically Detecting AI-Generated Source Code: How Far Are We? [8.0988059417354]
We propose a range of approaches to improve the performance of AI-generated code detection.
Our best model outperforms the state-of-the-art AI-generated code detector (GPTSniffer), achieving an F1 score of 82.55.
arXiv Detail & Related papers (2024-11-06T22:48:18Z) - Does Co-Development with AI Assistants Lead to More Maintainable Code? A Registered Report [6.7428644467224]
This study aims to examine the influence of AI assistants on software maintainability.
In Phase 1, developers will add a new feature to a Java project, with or without the aid of an AI assistant.
Phase 2, a randomized controlled trial, will involve a different set of developers evolving random Phase 1 projects, working without AI assistants.
arXiv Detail & Related papers (2024-08-20T11:48:42Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Students' Perspective on AI Code Completion: Benefits and Challenges [2.936007114555107]
We investigated the benefits, challenges, and expectations of AI code completion from students' perspectives.
Our findings show that AI code completion enhanced students' productivity and efficiency by providing correct syntax suggestions.
In the future, AI code completion should be explainable and provide best coding practices to enhance the education process.
arXiv Detail & Related papers (2023-10-31T22:41:16Z) - Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions [54.55334589363247]
We study whether conveying information about uncertainty enables programmers to more quickly and accurately produce code.
We find that highlighting tokens with the highest predicted likelihood of being edited leads to faster task completion and more targeted edits.
arXiv Detail & Related papers (2023-02-14T18:43:34Z) - Ethics in AI through the Practitioner's View: A Grounded Theory Literature Review [12.941478155592502]
In recent years, numerous incidents have raised the profile of ethical issues in AI development and led to public concerns about the proliferation of AI technology in our everyday lives.
We conducted a grounded theory literature review (GTLR) of 38 primary empirical studies that included AI practitioners' views on ethics in AI.
We present a taxonomy of ethics in AI from practitioners' viewpoints to assist AI practitioners in identifying and understanding the different aspects of AI ethics.
arXiv Detail & Related papers (2022-06-20T00:28:51Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.