A Large-Scale Survey on the Usability of AI Programming Assistants:
Successes and Challenges
- URL: http://arxiv.org/abs/2303.17125v2
- Date: Sun, 17 Sep 2023 04:36:05 GMT
- Title: A Large-Scale Survey on the Usability of AI Programming Assistants:
Successes and Challenges
- Authors: Jenny T. Liang, Chenyang Yang, Brad A. Myers
- Abstract summary: In practice, developers do not accept AI programming assistants' initial suggestions at a high frequency.
To understand developers' practices while using these tools, we administered a survey to a large population of developers.
We found that developers are most motivated to use AI programming assistants because they help reduce keystrokes, finish programming tasks quickly, and recall syntax.
We also found that the most important reasons developers do not use these tools are that the tools do not output code addressing certain functional or non-functional requirements.
- Score: 23.467373994306524
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The software engineering community recently has witnessed widespread
deployment of AI programming assistants, such as GitHub Copilot. However, in
practice, developers do not accept AI programming assistants' initial
suggestions at a high frequency. This leaves a number of open questions related
to the usability of these tools. To understand developers' practices while
using these tools and the important usability challenges they face, we
administered a survey to a large population of developers and received
responses from a diverse set of 410 developers. Through a mix of qualitative
and quantitative analyses, we found that developers are most motivated to use
AI programming assistants because they help developers reduce keystrokes,
finish programming tasks quickly, and recall syntax, but resonate less with
using them to help brainstorm potential solutions. We also found the most
important reasons why developers do not use these tools are because these tools
do not output code that addresses certain functional or non-functional
requirements and because developers have trouble controlling the tool to
generate the desired output. Our findings have implications for both creators
and users of AI programming assistants, such as designing minimal cognitive
effort interactions with these tools to reduce distractions for users while
they are programming.
Related papers
- AI Tool Use and Adoption in Software Development by Individuals and Organizations: A Grounded Theory Study [6.722524226580543]
We conducted a mixed methods study involving interviews with 26 industry practitioners and 395 survey respondents.
We identified 2 individual motives, 4 individual challenges, 3 organizational motives, and 3 organizational challenges, and 3 interleaved relationships.
The 3 interleaved relationships act in a push-pull manner where motives pull practitioners to increase the use of AI tools and challenges push practitioners away from using AI tools.
arXiv Detail & Related papers (2024-06-25T07:18:56Z)
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs)
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, becoming better aligned to the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- Using AI-Based Coding Assistants in Practice: State of Affairs, Perceptions, and Ways Forward [9.177785129949]
We carried out a large-scale survey aimed at understanding how AI assistants are used.
We collected opinions of 481 programmers on five broad activities.
Our results show that usage of AI assistants varies depending on activity and stage.
arXiv Detail & Related papers (2024-06-11T23:10:43Z)
- Code Compass: A Study on the Challenges of Navigating Unfamiliar Codebases [2.808331566391181]
We propose a novel tool, Code Compass, to address these issues.
Our study highlights a significant gap in current tools and methodologies.
Our formative study demonstrates how effectively the tool reduces the time developers spend navigating documentation.
arXiv Detail & Related papers (2024-05-10T06:58:31Z)
- Developer Experiences with a Contextualized AI Coding Assistant: Usability, Expectations, and Outcomes [11.520721038793285]
This study focuses on the initial experiences of 62 participants who used a contextualized coding AI assistant -- named StackSpot AI -- in a controlled setting.
The assistant's use resulted in significant time savings, easier access to documentation, and the generation of accurate code for internal APIs.
Challenges were observed with the knowledge sources needed to give the coding assistant access to more contextual information, as well as with variable responses and limitations in handling complex code.
arXiv Detail & Related papers (2023-11-30T10:52:28Z)
- AI for Low-Code for AI [8.379047663193422]
LowCoder is the first low-code tool for developing AI pipelines that supports both a visual programming interface and an AI-powered natural language interface.
We task 20 developers with varying levels of AI expertise with implementing four ML pipelines using LowCoder.
We find that LowCoder is especially useful for (i) Discoverability: using LowCoder_NL, participants discovered new operators in 75% of the tasks.
arXiv Detail & Related papers (2023-05-31T16:44:03Z)
- The GitHub Development Workflow Automation Ecosystems [47.818229204130596]
Large-scale software development has become a highly collaborative endeavour.
This chapter explores the ecosystems of development bots and GitHub Actions.
It provides an extensive survey of the state-of-the-art in this domain.
arXiv Detail & Related papers (2023-05-08T15:24:23Z)
- LLM-based Interaction for Content Generation: A Case Study on the Perception of Employees in an IT department [85.1523466539595]
This paper presents a questionnaire survey to identify the intention to use generative tools by employees of an IT company.
Our results indicate only moderate acceptance of generative tools, although the more useful a tool is perceived to be, the higher the intention to use it seems to be.
Our analyses suggest that the frequency of use of generative tools is likely to be a key factor in understanding how employees perceive these tools in the context of their work.
arXiv Detail & Related papers (2023-04-18T15:35:43Z)
- Tool Learning with Foundation Models [114.2581831746077]
With the advent of foundation models, AI systems have the potential to be equally adept in tool use as humans.
Despite its immense potential, there is still a lack of a comprehensive understanding of key challenges, opportunities, and future endeavors in this field.
arXiv Detail & Related papers (2023-04-17T15:16:10Z)
- Competition-Level Code Generation with AlphaCode [74.87216298566942]
We introduce AlphaCode, a system for code generation that can create novel solutions to problems that require deeper reasoning.
In simulated evaluations on recent programming competitions on the Codeforces platform, AlphaCode achieved on average a ranking of top 54.3%.
arXiv Detail & Related papers (2022-02-08T23:16:31Z)
- AI Explainability 360: Impact and Design [120.95633114160688]
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
arXiv Detail & Related papers (2021-09-24T19:17:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.