The Impact of Generative AI on Code Expertise Models: An Exploratory Study
- URL: http://arxiv.org/abs/2507.08160v1
- Date: Thu, 10 Jul 2025 20:43:08 GMT
- Authors: Otávio Cury, Guilherme Avelino,
- Abstract summary: We present an exploratory analysis of how a knowledge model and a Truck Factor algorithm can be affected by GenAI usage. Our findings suggest that as GenAI becomes more integrated into development, the reliability of such metrics may decrease.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Artificial Intelligence (GenAI) tools for source code generation have significantly boosted productivity in software development. However, they also raise concerns, particularly the risk that developers may rely heavily on these tools, reducing their understanding of the generated code. We hypothesize that this loss of understanding may be reflected in source code knowledge models, which are used to identify developer expertise. In this work, we present an exploratory analysis of how a knowledge model and a Truck Factor algorithm built upon it can be affected by GenAI usage. To investigate this, we collected statistical data on the integration of ChatGPT-generated code into GitHub projects and simulated various scenarios by adjusting the degree of GenAI contribution. Our findings reveal that most scenarios led to measurable impacts, indicating the sensitivity of current expertise metrics. This suggests that as GenAI becomes more integrated into development workflows, the reliability of such metrics may decrease.
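The abstract does not spell out the knowledge model or Truck Factor algorithm it studies. A minimal sketch, assuming the Degree-of-Authorship (DOA) formulation popularized by Avelino et al. (coefficients, thresholds, and the greedy removal step are taken from that line of work, not from this paper), might look like:

```python
import math
from collections import defaultdict

def doa(fa, dl, ac):
    """Degree of Authorship of one developer for one file.

    fa: 1 if the developer created the file, else 0
    dl: number of changes the developer made to the file
    ac: number of changes made by all other developers
    """
    return 3.293 + 1.098 * fa + 0.164 * dl - 0.321 * math.log(1 + ac)

def file_authors(activity):
    """activity: {file: {dev: (fa, dl, ac)}} -> {file: set of authors}.

    A developer counts as an author of a file when their DOA is high
    both in absolute terms and relative to the file's top contributor.
    """
    authors = {}
    for f, devs in activity.items():
        scores = {d: doa(*v) for d, v in devs.items()}
        top = max(scores.values())
        authors[f] = {d for d, s in scores.items()
                      if s >= 3.293 and s / top > 0.75}
    return authors

def truck_factor(authors):
    """Greedily remove the most prolific author until more than half of
    the files are orphaned; the number of removals is the Truck Factor."""
    remaining = {f: set(a) for f, a in authors.items()}
    total = len(remaining)
    tf = 0
    while True:
        orphaned = sum(1 for a in remaining.values() if not a)
        if orphaned > total / 2:
            return tf
        counts = defaultdict(int)
        for a in remaining.values():
            for d in a:
                counts[d] += 1
        if not counts:
            return tf
        top_dev = max(counts, key=counts.get)
        for a in remaining.values():
            a.discard(top_dev)
        tf += 1
```

Under this reading, the paper's simulated scenarios amount to reattributing some share of each developer's change counts (the `fa`, `dl`, `ac` inputs) to GenAI-generated code, which shifts DOA scores and can therefore change both the inferred author sets and the resulting Truck Factor.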
Related papers
- Self-Admitted GenAI Usage in Open-Source Software [14.503048663131574]
We introduce the concept of self-admitted GenAI usage, that is, developers explicitly referring to the use of GenAI tools for content creation in software artifacts. We analyze a curated sample of more than 250,000 GitHub repositories, identifying 1,292 such self-admissions across 156 repositories in commit messages, code comments, and project documentation. Our findings reveal that developers actively manage how GenAI is used in their projects, highlighting the need for project-level transparency.
arXiv Detail & Related papers (2025-07-14T16:05:49Z)
- What Needs Attention? Prioritizing Drivers of Developers' Trust and Adoption of Generative AI [18.1243411839447]
We developed a theoretical model of factors influencing trust and adoption intentions towards genAI. We found that genAI's system/output quality, functional value, and goal maintenance significantly influence developers' trust. We provide suggestions to guide future genAI tool design for effective, trustworthy, and inclusive human-genAI interactions.
arXiv Detail & Related papers (2025-05-23T03:05:56Z)
- Information Retrieval in the Age of Generative AI: The RGB Model [77.96475639967431]
This paper presents a novel quantitative approach to shed light on the complex information dynamics arising from the growing use of generative AI tools. We propose a model to characterize the generation, indexing, and dissemination of information in response to new topics. Our findings suggest that the rapid pace of generative AI adoption, combined with increasing user reliance, can outpace human verification, escalating the risk of inaccurate information proliferation.
arXiv Detail & Related papers (2025-04-29T10:21:40Z)
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI. We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts. We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z)
- SOK: Exploring Hallucinations and Security Risks in AI-Assisted Software Development with Insights for LLM Deployment [0.0]
Large Language Models (LLMs) such as GitHub Copilot, ChatGPT, Cursor AI, and Codeium AI have revolutionized the coding landscape. This paper provides a comprehensive analysis of the benefits and risks associated with AI-powered coding tools.
arXiv Detail & Related papers (2025-01-31T06:00:27Z)
- Dear Diary: A randomized controlled trial of Generative AI coding tools in the workplace [2.5280615594444567]
Generative AI coding tools are relatively new, and their impact on developers extends beyond traditional coding metrics.
This study aims to illuminate developers' preexisting beliefs about generative AI tools, their self-perceptions, and how regular use of these tools may alter these beliefs.
Our findings reveal that the introduction and sustained use of generative AI coding tools significantly increases developers' perceptions of these tools as both useful and enjoyable.
arXiv Detail & Related papers (2024-10-24T00:07:27Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Legal Aspects for Software Developers Interested in Generative AI Applications [5.772982243103395]
Generative Artificial Intelligence (GenAI) has led to new technologies capable of generating high-quality code, natural language, and images.
The next step is to integrate GenAI technology into products, a task typically conducted by software developers.
This article sheds light on the current state of two such risks: data protection and copyright.
arXiv Detail & Related papers (2024-04-25T14:17:34Z)
- Generative AI Agent for Next-Generation MIMO Design: Fundamentals, Challenges, and Vision [76.4345564864002]
Next-generation multiple input multiple output (MIMO) is expected to be intelligent and scalable.
We propose the concept of the generative AI agent, which is capable of generating tailored and specialized contents.
We present two compelling case studies that demonstrate the effectiveness of leveraging the generative AI agent for performance analysis.
arXiv Detail & Related papers (2024-04-13T02:39:36Z)
- GenLens: A Systematic Evaluation of Visual GenAI Model Outputs [33.93591473459988]
GenLens is a visual analytic interface designed for the systematic evaluation of GenAI model outputs.
A user study with model developers reveals that GenLens effectively enhances their workflow, evidenced by high satisfaction rates.
arXiv Detail & Related papers (2024-02-06T04:41:06Z)
- LLM-based Interaction for Content Generation: A Case Study on the Perception of Employees in an IT department [85.1523466539595]
This paper presents a questionnaire survey to identify the intention to use generative tools by employees of an IT company.
Our results indicate moderate acceptance of generative tools, although the more useful a tool is perceived to be, the higher the intention to use it appears to be.
Our analyses suggest that the frequency of use of generative tools is likely to be a key factor in understanding how employees perceive these tools in the context of their work.
arXiv Detail & Related papers (2023-04-18T15:35:43Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.