Dear Diary: A randomized controlled trial of Generative AI coding tools in the workplace
- URL: http://arxiv.org/abs/2410.18334v1
- Date: Thu, 24 Oct 2024 00:07:27 GMT
- Title: Dear Diary: A randomized controlled trial of Generative AI coding tools in the workplace
- Authors: Jenna Butler, Jina Suh, Sankeerti Haniyur, Constance Hadley, et al.
- Abstract summary: Generative AI coding tools are relatively new, and their impact on developers extends beyond traditional coding metrics.
This study aims to illuminate developers' preexisting beliefs about generative AI tools, their self-perceptions, and how regular use of these tools may alter these beliefs.
Our findings reveal that the introduction and sustained use of generative AI coding tools significantly increases developers' perceptions of these tools as both useful and enjoyable.
- Score: 2.5280615594444567
- License:
- Abstract: Generative AI coding tools are relatively new, and their impact on developers extends beyond traditional coding metrics, influencing beliefs about work and developers' roles in the workplace. This study aims to illuminate developers' preexisting beliefs about generative AI tools, their self-perceptions, and how regular use of these tools may alter these beliefs. Using a mixed-methods approach, including surveys, a randomized controlled trial, and a three-week diary study, we explored the real-world application of generative AI tools within a large multinational software company. Our findings reveal that the introduction and sustained use of generative AI coding tools significantly increases developers' perceptions of these tools as both useful and enjoyable. However, developers' views on the trustworthiness of AI-generated code remained unchanged. We also discovered unexpected uses of these tools, such as replacing web searches and fostering creative ideation. Additionally, 84 percent of participants reported positive changes in their daily work practices, and 66 percent noted shifts in their feelings about their work, ranging from increased enthusiasm to heightened awareness of the need to stay current with technological advances. This research provides both qualitative and quantitative insights into the evolving role of generative AI in software development and offers practical recommendations for maximizing the benefits of this emerging technology, particularly in balancing the productivity gains from AI-generated code with the need for increased scrutiny and critical evaluation of its outputs.
Related papers
- "I Don't Use AI for Everything": Exploring Utility, Attitude, and Responsibility of AI-empowered Tools in Software Development [19.851794567529286]
This study investigates the adoption, impact, and security considerations of AI-empowered tools in the software development process.
Our findings reveal widespread adoption of AI tools across various stages of software development.
arXiv Detail & Related papers (2024-09-20T09:17:10Z)
- The Impact of Generative AI-Powered Code Generation Tools on Software Engineer Hiring: Recruiters' Experiences, Perceptions, and Strategies [4.557635080377692]
This study explores recruiters' experiences and perceptions regarding GenAI-powered code generation tools.
Findings from our survey of 32 industry professionals indicate that although most participants are familiar with such tools, the majority of organizations have not adjusted their candidate evaluation methods to account for candidates' use/knowledge of these tools.
Most participants believe that it is important to incorporate GenAI-powered code generation tools into computer science curricula.
arXiv Detail & Related papers (2024-09-02T00:00:29Z)
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, so that they become better aligned with the task of automated software improvement.
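The proposal describes an iterate-and-refine architecture rather than a concrete API. Purely as a minimal Python sketch of such a loop, where `llm_propose_patch` and `run_tests` are hypothetical placeholders for the model call and the project's test suite:

```python
# Minimal sketch of a "generate -> test -> feed back" agent loop, in the
# spirit of agent-driven software improvement. llm_propose_patch() and
# run_tests() are hypothetical placeholders, not any specific API.
from dataclasses import dataclass

@dataclass
class Attempt:
    patch: str
    passed: bool
    feedback: str

def llm_propose_patch(task: str, history: list) -> str:
    """Placeholder: a real agent would send the task plus prior feedback
    to a model and return a candidate code change."""
    return f"# candidate patch for: {task} (attempt {len(history) + 1})"

def run_tests(patch: str) -> tuple:
    """Placeholder for executing the project's test suite against the patch."""
    return False, "2 tests failing: see trace"

def improve(task: str, max_iters: int = 3) -> list:
    history = []
    for _ in range(max_iters):
        patch = llm_propose_patch(task, history)   # generation step
        passed, feedback = run_tests(patch)        # evaluation step
        history.append(Attempt(patch, passed, feedback))
        if passed:                                 # stop once tests pass
            break
    # The recorded (patch, feedback) pairs are the kind of iterative
    # feedback the proposal suggests reusing to fine-tune the LLM.
    return history

if __name__ == "__main__":
    for attempt in improve("fix off-by-one in pagination"):
        print(attempt.passed, attempt.feedback)
```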
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- The Role of Generative AI in Software Development Productivity: A Pilot Case Study [0.0]
This paper investigates the integration of generative AI tools within software development.
Through a pilot case study, we gathered developers' experiences of integrating generative AI tools into their daily work routines.
Our findings reveal a generally positive perception of these tools' effect on individual productivity, while also highlighting the need to address identified limitations.
arXiv Detail & Related papers (2024-06-01T21:51:33Z)
- Bridging Gaps, Building Futures: Advancing Software Developer Diversity and Inclusion Through Future-Oriented Research [50.545824691484796]
We present insights from SE researchers and practitioners on challenges and solutions regarding diversity and inclusion in SE.
We share potential utopian and dystopian visions of the future and provide future research directions and implications for academia and industry.
arXiv Detail & Related papers (2024-04-10T16:18:11Z)
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning [85.21378553454672]
We develop a library containing a sample-efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB board assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates and extreme robustness even under perturbations, and they exhibit emergent recovery and correction behaviors.
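The mechanism behind the suite's sample efficiency is the off-policy pattern: transitions collected by one behavior policy are stored in a replay buffer and reused many times for learning. Purely as illustration (this is not SERL's API, and the toy chain environment is made up), a generic replay-buffer Q-learning loop looks like this:

```python
# Generic off-policy sketch: data from a random behavior policy fills a
# replay buffer; Q-learning reuses those transitions, and a greedy policy
# is read off the learned values. Toy example only, not SERL's API.
import random

N_STATES, ACTIONS = 5, (0, 1)                 # tiny chain MDP: left/right
GAMMA, ALPHA = 0.95, 0.1

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, float(nxt == N_STATES - 1), nxt == N_STATES - 1

# 1) Collect experience with a random behavior policy.
buffer = []
for _ in range(200):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS)
        s2, r, done = step(s, a)
        buffer.append((s, a, r, s2, done))
        s = s2

# 2) Learn off-policy by replaying stored transitions (Q-learning updates).
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(20_000):
    s, a, r, s2, done = random.choice(buffer)
    target = r if done else r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
    q[(s, a)] += ALPHA * (target - q[(s, a)])

# 3) The greedy policy extracted from Q differs from the collection policy.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```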
arXiv Detail & Related papers (2024-01-29T10:01:10Z)
- Exploring the intersection of Generative AI and Software Development [0.0]
The synergy between generative AI and Software Engineering emerges as a transformative frontier.
This whitepaper delves into the unexplored realm, elucidating how generative AI techniques can revolutionize software development.
It serves as a guide for stakeholders, urging discussions and experiments in the application of generative AI in Software Engineering.
arXiv Detail & Related papers (2023-12-21T19:23:23Z)
- LLM-based Interaction for Content Generation: A Case Study on the Perception of Employees in an IT department [85.1523466539595]
This paper presents a questionnaire survey of employees at an IT company to identify their intention to use generative tools.
Our results indicate only moderate acceptance of generative tools, although the more useful a tool is perceived to be, the stronger the intention to use it appears to be.
Our analyses suggest that the frequency of use of generative tools is likely to be a key factor in understanding how employees perceive these tools in the context of their work.
arXiv Detail & Related papers (2023-04-18T15:35:43Z)
- A Large-Scale Survey on the Usability of AI Programming Assistants: Successes and Challenges [23.467373994306524]
In practice, developers accept AI programming assistants' initial suggestions only infrequently.
To understand developers' practices while using these tools, we administered a survey to a large population of developers.
We found that developers are most motivated to use AI programming assistants because they help developers reduce keystrokes, finish programming tasks quickly, and recall syntax.
We also found that the most important reason developers do not use these tools is that the tools do not output code that addresses certain functional or non-functional requirements.
arXiv Detail & Related papers (2023-03-30T03:21:53Z)
- Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions [54.55334589363247]
We study whether conveying information about uncertainty enables programmers to more quickly and accurately produce code.
We find that highlighting tokens with the highest predicted likelihood of being edited leads to faster task completion and more targeted edits.
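As a concrete illustration of that interaction pattern (not the authors' implementation): given a completion and a per-token estimate of how likely each token is to need editing, the editor highlights the top-scoring tokens. The `predict_edit_likelihood` helper below is a hypothetical stand-in for such a learned estimator; as the paper's title notes, raw generation probabilities alone are not a good proxy for it.

```python
# Minimal sketch of uncertainty highlighting in a code completion: rank
# tokens by a predicted edit-likelihood score and mark the top ones for the
# editor to render as "uncertain". predict_edit_likelihood() is hypothetical.
from typing import List, Tuple

def predict_edit_likelihood(tokens: List[str]) -> List[float]:
    """Placeholder: a real system would use a model trained on edit data."""
    return [0.9 if t == "TODO_ARG" else 0.05 for t in tokens]

def tokens_to_highlight(tokens: List[str], top_k: int = 2) -> List[Tuple[int, str]]:
    scores = predict_edit_likelihood(tokens)
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    return [(i, tokens[i]) for i in sorted(ranked[:top_k])]

completion = ["requests", ".", "get", "(", "TODO_ARG", ",", "timeout", "=", "30", ")"]
print(tokens_to_highlight(completion))   # token indices the editor would highlight
```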
arXiv Detail & Related papers (2023-02-14T18:43:34Z)
- AI Explainability 360: Impact and Design [120.95633114160688]
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
arXiv Detail & Related papers (2021-09-24T19:17:09Z)