Do Comments and Expertise Still Matter? An Experiment on Programmers' Adoption of AI-Generated JavaScript Code
- URL: http://arxiv.org/abs/2503.11453v1
- Date: Fri, 14 Mar 2025 14:42:51 GMT
- Title: Do Comments and Expertise Still Matter? An Experiment on Programmers' Adoption of AI-Generated JavaScript Code
- Authors: Changwen Li, Christoph Treude, Ofir Turel
- Abstract summary: The adoption of AI-generated code was gauged by code similarity between AI-generated solutions and participants' submitted solutions. Our findings revealed that the presence of comments significantly influences programmers' adoption of AI-generated code regardless of the participants' development expertise.
- Score: 8.436321697240682
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates the factors influencing programmers' adoption of AI-generated JavaScript code recommendations. It extends prior research by (1) utilizing objective (as opposed to the typically self-reported) measurements for programmers' adoption of AI-generated code and (2) examining whether AI-generated comments added to code recommendations and development expertise drive AI-generated code adoption. We tested these potential drivers in an online experiment with 173 programmers. Participants were asked to answer some questions to demonstrate their level of development expertise. Then, they were asked to solve a LeetCode problem without AI support. After attempting to solve the problem on their own, they received an AI-generated solution to assist them in refining their solutions. The solutions provided were manipulated to include or exclude AI-generated comments (a between-subjects factor). Programmers' adoption of AI-generated code was gauged by code similarity between AI-generated solutions and participants' submitted solutions, providing a more reliable and objective measurement of code adoption behaviors. Our findings revealed that the presence of comments significantly influences programmers' adoption of AI-generated code regardless of the participants' development expertise.
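The abstract measures adoption as code similarity between the AI-generated solution and each participant's submission, but does not specify the metric. A minimal sketch of one plausible token-level similarity measure, using Python's `difflib` (the crude tokenizer, comment stripping, and the metric itself are illustrative assumptions, not the authors' method):

```python
import re
from difflib import SequenceMatcher

def tokenize(code: str) -> list[str]:
    # Crude JavaScript tokenizer: strip // and /* */ comments first,
    # then keep identifiers, numbers, and single punctuation characters.
    code = re.sub(r"//[^\n]*|/\*.*?\*/", " ", code, flags=re.DOTALL)
    return re.findall(r"[A-Za-z_$][\w$]*|\d+|[^\s\w]", code)

def similarity(ai_code: str, submitted_code: str) -> float:
    # Ratio in [0, 1]: 1.0 means the submission is token-identical to the
    # AI suggestion; values near 0 mean almost no shared token sequence.
    return SequenceMatcher(None, tokenize(ai_code),
                           tokenize(submitted_code)).ratio()

ai = "function add(a, b) { return a + b; } // sum two numbers"
user = "function add(x, y) { return x + y; }"
print(similarity(ai, user))  # high but below 1.0: renamed parameters
```

Because comments are stripped before comparison, this particular sketch would score a commented and an uncommented version of the same code as identical; a study design like the paper's would need to decide deliberately whether comments count toward similarity.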
Related papers
- On Developers' Self-Declaration of AI-Generated Code: An Analysis of Practices [2.655152359733829]
This study aims to understand how developers self-declare AI-generated code.
We collected 613 instances of AI-generated code snippets from GitHub.
Our research revealed the practices followed by developers to self-declare AI-generated code.
arXiv Detail & Related papers (2025-04-23T07:52:39Z) - From Teacher to Colleague: How Coding Experience Shapes Developer Perceptions of AI Tools [0.0]
AI-assisted development tools promise productivity gains and improved code quality, yet their adoption among developers remains inconsistent.
We analyze survey data from 3380 developers to examine how coding experience relates to AI awareness, adoption, and the roles developers assign to AI in their workflow.
arXiv Detail & Related papers (2025-04-08T08:58:06Z) - Augmenting Human Cognition With Generative AI: Lessons From AI-Assisted Decision-Making [2.1680671785663654]
In both AI-assisted decision-making and generative AI, a popular approach is to suggest AI-generated end-to-end solutions to users.
Alternatively, AI tools could offer more incremental support to help users solve tasks themselves.
arXiv Detail & Related papers (2025-04-04T06:40:03Z) - How Do Programming Students Use Generative AI? [7.863638253070439]
We studied how programming students actually use generative AI tools like ChatGPT.
We observed two prevalent usage strategies: to seek knowledge about general concepts and to directly generate solutions.
Our findings indicate that concerns about a potential decrease in programmers' agency and productivity with Generative AI are justified.
arXiv Detail & Related papers (2025-01-17T10:25:41Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Students' Perspective on AI Code Completion: Benefits and Challenges [2.936007114555107]
We investigated the benefits, challenges, and expectations of AI code completion from students' perspectives.
Our findings show that AI code completion enhanced students' productivity and efficiency by providing correct syntax suggestions.
In the future, AI code completion should be explainable and provide best coding practices to enhance the education process.
arXiv Detail & Related papers (2023-10-31T22:41:16Z) - Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions [54.55334589363247]
We study whether conveying information about uncertainty enables programmers to more quickly and accurately produce code.
We find that highlighting tokens with the highest predicted likelihood of being edited leads to faster task completion and more targeted edits.
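The highlighting idea above can be illustrated with a small sketch: given per-token probabilities that each token will be edited (here hard-coded, hypothetical values rather than real model output), tokens above a threshold are marked for the programmer's attention. The `[[...]]` markup and the 0.5 threshold are illustrative choices only:

```python
def highlight_uncertain(tokens, edit_probs, threshold=0.5):
    # Wrap tokens whose predicted edit likelihood exceeds the threshold
    # in [[...]] so a UI layer could render them as highlighted spans.
    return " ".join(
        f"[[{tok}]]" if p > threshold else tok
        for tok, p in zip(tokens, edit_probs)
    )

# Hypothetical AI completion with made-up per-token edit probabilities:
tokens = ["return", "items", ".", "sort", "(", ")"]
edit_probs = [0.05, 0.2, 0.1, 0.8, 0.1, 0.1]
print(highlight_uncertain(tokens, edit_probs))
# → return items . [[sort]] ( )
```

In the study's framing, drawing the eye to the tokens most likely to need editing is what produced faster completion and more targeted edits.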
arXiv Detail & Related papers (2023-02-14T18:43:34Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Ethics in AI through the Practitioner's View: A Grounded Theory Literature Review [12.941478155592502]
In recent years, numerous incidents have raised the profile of ethical issues in AI development and led to public concerns about the proliferation of AI technology in our everyday lives.
We conducted a grounded theory literature review (GTLR) of 38 primary empirical studies that included AI practitioners' views on ethics in AI.
We present a taxonomy of ethics in AI from practitioners' viewpoints to assist AI practitioners in identifying and understanding the different aspects of AI ethics.
arXiv Detail & Related papers (2022-06-20T00:28:51Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making [4.877174544937129]
People supported by AI-powered decision support tools frequently overrely on the AI.
Adding explanations to the AI decisions does not appear to reduce the overreliance.
Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions.
arXiv Detail & Related papers (2021-02-19T00:38:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.