From Teacher to Colleague: How Coding Experience Shapes Developer Perceptions of AI Tools
- URL: http://arxiv.org/abs/2504.13903v1
- Date: Tue, 08 Apr 2025 08:58:06 GMT
- Title: From Teacher to Colleague: How Coding Experience Shapes Developer Perceptions of AI Tools
- Authors: Ilya Zakharov, Ekaterina Koshchenko, Agnia Sergeyuk
- Abstract summary: AI-assisted development tools promise productivity gains and improved code quality, yet their adoption among developers remains inconsistent. We analyze survey data from 3380 developers to examine how coding experience relates to AI awareness, adoption, and the roles developers assign to AI in their workflow.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI-assisted development tools promise productivity gains and improved code quality, yet their adoption among developers remains inconsistent. Prior research suggests that professional expertise influences technology adoption, but its role in shaping developers' perceptions of AI tools is unclear. We analyze survey data from 3380 developers to examine how coding experience relates to AI awareness, adoption, and the roles developers assign to AI in their workflow. Our findings reveal that coding experience does not predict AI adoption but significantly influences mental models of AI's role. Experienced developers are more likely to perceive AI as a junior colleague, a content generator, or assign it no role, whereas less experienced developers primarily view AI as a teacher. These insights suggest that AI tools must align with developers' expertise levels to drive meaningful adoption.
Related papers
- On Developers' Self-Declaration of AI-Generated Code: An Analysis of Practices [2.655152359733829]
This study aims to understand the ways developers use to self-declare AI-generated code.
We collected 613 instances of AI-generated code snippets from GitHub.
Our research revealed the practices followed by developers to self-declare AI-generated code.
arXiv Detail & Related papers (2025-04-23T07:52:39Z) - Do Comments and Expertise Still Matter? An Experiment on Programmers' Adoption of AI-Generated JavaScript Code [8.436321697240682]
The adoption of AI-generated code was gauged by code similarity between AI-generated solutions and participants' submitted solutions. Our findings revealed that the presence of comments significantly influences programmers' adoption of AI-generated code regardless of the participants' development expertise.
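The similarity-based adoption measure described above can be illustrated with a minimal sketch. The paper's exact metric is not specified here, so a standard sequence-matching ratio serves as an illustrative stand-in, and the snippets are invented examples:

```python
import difflib

def code_similarity(ai_code: str, submitted_code: str) -> float:
    """Ratio in [0, 1] indicating how closely a submission tracks the AI-generated solution.

    Uses difflib's SequenceMatcher as a simple proxy metric; 1.0 means the
    submission is identical to the AI output (full adoption), values near 0
    mean the participant wrote something largely different.
    """
    return difflib.SequenceMatcher(None, ai_code, submitted_code).ratio()

# Hypothetical example: a participant reuses the AI solution almost verbatim.
ai_solution = "function add(a, b) { return a + b; }"
submission = "function add(a, b) { return a + b; } // kept as-is"
print(round(code_similarity(ai_solution, submission), 2))
```

A real study would likely normalize the code (whitespace, identifiers) before comparison; this sketch skips that step for brevity.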
arXiv Detail & Related papers (2025-03-14T14:42:51Z) - Towards Decoding Developer Cognition in the Age of AI Assistants [9.887133861477233]
We propose a controlled observational study combining physiological measurements (EEG and eye tracking) with interaction data to examine developers' use of AI-assisted programming tools.
We will recruit professional developers to complete programming tasks both with and without AI assistance while measuring their cognitive load and task completion time.
arXiv Detail & Related papers (2025-01-05T23:25:21Z) - Does Co-Development with AI Assistants Lead to More Maintainable Code? A Registered Report [6.7428644467224]
This study aims to examine the influence of AI assistants on software maintainability.
In Phase 1, developers will add a new feature to a Java project, with or without the aid of an AI assistant.
Phase 2, a randomized controlled trial, will involve a different set of developers evolving random Phase 1 projects without AI assistance.
arXiv Detail & Related papers (2024-08-20T11:48:42Z) - Using AI-Based Coding Assistants in Practice: State of Affairs, Perceptions, and Ways Forward [9.177785129949]
We aim to better understand how specifically developers are using AI assistants.
We carried out a large-scale survey aimed at how AI assistants are used.
arXiv Detail & Related papers (2024-06-11T23:10:43Z) - Bootstrapping Developmental AIs: From Simple Competences to Intelligent Human-Compatible AIs [0.0]
The mainstream AI approaches are the generative and deep-learning approaches built on large language models (LLMs) and the manually constructed symbolic approach.
This position paper lays out the prospects, gaps, and challenges for extending the practice of developmental AIs to create resilient, intelligent, and human-compatible AIs.
arXiv Detail & Related papers (2023-08-08T21:14:21Z) - Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions [54.55334589363247]
We study whether conveying information about uncertainty enables programmers to more quickly and accurately produce code.
We find that highlighting tokens with the highest predicted likelihood of being edited leads to faster task completion and more targeted edits.
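The highlighting mechanism described above can be sketched as follows. This is not the authors' implementation: it assumes each completion token already carries a predicted edit-likelihood score from some model, and the token list, scores, and threshold below are illustrative placeholders:

```python
def highlight_edit_prone(tokens, edit_likelihoods, threshold=0.5):
    """Wrap tokens predicted as likely to be edited in [[...]] markers.

    tokens: completion tokens as strings.
    edit_likelihoods: per-token predicted probability of being edited
    (assumed to come from a separate predictor, not raw generation
    probability -- the paper's title notes generation probabilities
    alone are not enough).
    """
    return [
        f"[[{tok}]]" if likelihood >= threshold else tok
        for tok, likelihood in zip(tokens, edit_likelihoods)
    ]

# Hypothetical completion with made-up edit-likelihood scores.
tokens = ["return", "items", ".", "sort", "(", ")"]
scores = [0.05, 0.60, 0.02, 0.72, 0.03, 0.03]
print(" ".join(highlight_edit_prone(tokens, scores)))
```

In an editor, the `[[...]]` markers would correspond to visual highlights drawing the programmer's attention to spans most likely to need revision.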
arXiv Detail & Related papers (2023-02-14T18:43:34Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.