Understanding User Mental Models in AI-Driven Code Completion Tools: Insights from an Elicitation Study
- URL: http://arxiv.org/abs/2502.02194v1
- Date: Tue, 04 Feb 2025 10:20:49 GMT
- Title: Understanding User Mental Models in AI-Driven Code Completion Tools: Insights from an Elicitation Study
- Authors: Giuseppe Desolda, Andrea Esposito, Francesco Greco, Cesare Tucci, Paolo Buono, Antonio Piccinno
- Abstract summary: We conduct an elicitation study with 56 developers using focus groups to elicit their mental models when interacting with AI-powered code completion tools.
The study findings provide actionable insights for designing human-centered CCTs that align with user expectations, enhance satisfaction and productivity, and foster trust in AI-powered development tools.
We also develop ATHENA, a proof-of-concept CCT that dynamically adapts to developers' coding preferences and environments, ensuring seamless integration into diverse environments.
- Score: 5.534104886050636
- License:
- Abstract: Integrated Development Environments increasingly implement AI-powered code completion tools (CCTs), which promise to enhance developer efficiency, accuracy, and productivity. However, interaction challenges with CCTs persist, mainly due to mismatches between developers' mental models and the unpredictable behavior of AI-generated suggestions. This is an aspect underexplored in the literature. To address this gap, we conducted an elicitation study with 56 developers using focus groups, to elicit their mental models when interacting with CCTs. The study findings provide actionable insights for designing human-centered CCTs that align with user expectations, enhance satisfaction and productivity, and foster trust in AI-powered development tools. To demonstrate the feasibility of these guidelines, we also developed ATHENA, a proof-of-concept CCT that dynamically adapts to developers' coding preferences and environments, ensuring seamless integration into diverse environments.
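To make the abstract's notion of "adapting to developers' coding preferences" more concrete, below is a minimal sketch of a preference-aware re-ranking layer. It is a hypothetical illustration, not the paper's actual ATHENA implementation; the `DeveloperProfile` and `rank_suggestions` names, the scoring weights, and the acceptance-tracking scheme are all assumptions made for this example.

```python
from dataclasses import dataclass, field

@dataclass
class DeveloperProfile:
    """Hypothetical per-developer preferences a CCT could learn over time."""
    preferred_max_lines: int = 3  # this developer tends to accept short completions
    accepted_tokens: dict = field(default_factory=dict)  # token -> acceptance count

    def record_acceptance(self, suggestion: str) -> None:
        # Update the profile whenever the developer accepts a suggestion.
        for token in suggestion.split():
            self.accepted_tokens[token] = self.accepted_tokens.get(token, 0) + 1

def rank_suggestions(suggestions: list[str], profile: DeveloperProfile) -> list[str]:
    """Re-rank raw model suggestions against the developer's profile."""
    def score(s: str) -> float:
        lines = s.count("\n") + 1
        length_penalty = max(0, lines - profile.preferred_max_lines)
        familiarity = sum(profile.accepted_tokens.get(t, 0) for t in s.split())
        return familiarity - 3.0 * length_penalty
    return sorted(suggestions, key=score, reverse=True)

# Usage: prefer suggestions that resemble what this developer accepted before.
profile = DeveloperProfile()
profile.record_acceptance("return [x for x in items if x]")
candidates = [
    "result = []\nfor x in items:\n    if x:\n        result.append(x)\nreturn result",
    "return list(filter(None, items))",
]
print(rank_suggestions(candidates, profile))  # concise one-liner is ranked first
```

The design choice this illustrates is that adaptation can happen after the completion model produces candidates, so the underlying model stays unchanged while the ranking reflects each developer's observed habits.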
Related papers
- Enhancing Trust in Language Model-Based Code Optimization through RLHF: A Research Design [0.0]
This research aims to develop reliable, LM-powered methods for code optimization that effectively integrate human feedback.
This work aligns with the broader objectives of advancing cooperative and human-centric aspects of software engineering.
arXiv Detail & Related papers (2025-02-10T18:48:45Z) - How Developers Interact with AI: A Taxonomy of Human-AI Collaboration in Software Engineering [8.65285948382426]
We propose a taxonomy of interaction types between developers and AI tools, identifying eleven distinct interaction types.
Building on this taxonomy, we outline a research agenda focused on optimizing AI interactions, improving developer control, and addressing trust and usability challenges in AI-assisted development.
arXiv Detail & Related papers (2025-01-15T12:53:49Z) - Dear Diary: A randomized controlled trial of Generative AI coding tools in the workplace [2.5280615594444567]
Generative AI coding tools are relatively new, and their impact on developers extends beyond traditional coding metrics.
This study aims to illuminate developers' preexisting beliefs about generative AI tools, their self-perceptions, and how regular use of these tools may alter these beliefs.
Our findings reveal that the introduction and sustained use of generative AI coding tools significantly increases developers' perceptions of these tools as both useful and enjoyable.
arXiv Detail & Related papers (2024-10-24T00:07:27Z) - The Design Space of in-IDE Human-AI Experience [6.05260196829912]
Key findings stress the need for AI systems that are more personalized, proactive, and reliable.
Our findings show that while Adopters appreciate advanced features and non-interruptive integration, Churners emphasize the need for improved reliability and privacy.
Non-Users, in contrast, focus on skill development and ethical concerns as barriers to adoption.
arXiv Detail & Related papers (2024-10-11T10:02:52Z) - Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z) - In-IDE Human-AI Experience in the Era of Large Language Models; A Literature Review [2.6703221234079946]
The study of in-IDE Human-AI Experience is critical in understanding how these AI tools are transforming the software development process.
We conducted a literature review to study the current state of in-IDE Human-AI Experience research.
arXiv Detail & Related papers (2024-01-19T14:55:51Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - Federated Learning-Empowered AI-Generated Content in Wireless Networks [58.48381827268331]
Federated learning (FL) can be leveraged to improve learning efficiency and achieve privacy protection for AIGC.
We present FL-based techniques for empowering AIGC, and aim to enable users to generate diverse, personalized, and high-quality content.
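For readers unfamiliar with the federated angle, the toy sketch below shows the basic federated-averaging loop the abstract alludes to: clients fine-tune locally on private data and only model parameters are aggregated. This is a generic illustration with made-up data and a toy linear "generator", not the paper's FL-AIGC techniques.

```python
import numpy as np

def local_update(weights: np.ndarray, client_data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One step of local fine-tuning on private client data (toy linear objective)."""
    grad = weights - client_data.mean(axis=0)  # move weights toward this client's data mean
    return weights - lr * grad

def federated_average(global_weights: np.ndarray, client_datasets: list) -> np.ndarray:
    """One FedAvg round: each client trains locally, only weights are aggregated."""
    local_models = [local_update(global_weights.copy(), data) for data in client_datasets]
    return np.mean(local_models, axis=0)

rng = np.random.default_rng(0)
clients = [rng.normal(loc=i, size=(20, 4)) for i in range(3)]  # private data, never shared
w = np.zeros(4)
for _ in range(10):
    w = federated_average(w, clients)
print(w)  # converges toward the average client distribution without pooling raw data
```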
arXiv Detail & Related papers (2023-07-14T04:13:11Z) - Guiding AI-Generated Digital Content with Wireless Perception [69.51950037942518]
We introduce an integration of wireless perception with AI-generated content (AIGC) to improve the quality of digital content production.
The framework employs a novel multi-scale perception technology to read the user's posture, which is difficult to describe accurately in words, and transmits it to the AIGC model as skeleton images.
Because the production process imposes the user's posture as a constraint on the AIGC model, the generated content is better aligned with the user's requirements.
arXiv Detail & Related papers (2023-03-26T04:39:03Z) - Data-Driven and SE-assisted AI Model Signal-Awareness Enhancement and Introspection [61.571331422347875]
We propose a data-driven approach to enhance models' signal-awareness.
We combine the SE concept of code complexity with the AI technique of curriculum learning.
We achieve up to 4.8x improvement in model signal awareness.
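A hedged sketch of the general recipe (not the paper's actual pipeline): score code samples with a simple complexity proxy and present them to the model from easy to hard, which is curriculum learning keyed to an SE metric. The `complexity_proxy` heuristic below is an assumption for illustration, not the complexity measure used in the paper.

```python
def complexity_proxy(code: str) -> int:
    """Crude stand-in for code complexity: count branching and looping constructs."""
    keywords = ("if", "elif", "for", "while", "case", "except", "&&", "||")
    return sum(code.count(k) for k in keywords)

def curriculum_batches(samples: list[str], n_stages: int = 3):
    """Yield training subsets in easy-to-hard stages based on the complexity proxy."""
    ordered = sorted(samples, key=complexity_proxy)
    stage_size = max(1, len(ordered) // n_stages)
    for stage in range(n_stages):
        # Each stage re-includes everything seen so far (a common curriculum variant).
        yield ordered[: (stage + 1) * stage_size]

corpus = ["x = 1", "if x: y = 2", "for i in r:\n    if i % 2:\n        s += i"]
for stage, batch in enumerate(curriculum_batches(corpus)):
    print(f"stage {stage}: {len(batch)} samples")
```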
arXiv Detail & Related papers (2021-11-10T17:58:18Z) - Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
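As a toy illustration only, and far simpler than the paper's active neural generative coding agent, the snippet below fits a tiny generative mapping using purely local prediction-error updates, i.e. no backward pass through a layered network. All names and hyperparameters are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(4, 2))  # generative weights: latent z -> observation x

def local_step(x: np.ndarray, W: np.ndarray, lr: float = 0.05, n_infer: int = 20):
    """Infer the latent state by iterative settling, then update W from the local error."""
    z = np.zeros(2)
    for _ in range(n_infer):          # iterative inference instead of a backward pass
        error = x - W @ z             # local prediction error at the observation layer
        z += lr * (W.T @ error)       # nudge the latent state to reduce that error
    error = x - W @ z
    W += lr * np.outer(error, z)      # Hebbian-like, purely local weight update
    return W, float(np.mean(error ** 2))

x = np.array([1.0, 0.5, -0.5, 0.2])
for _ in range(50):
    W, mse = local_step(x, W)
print(round(mse, 4))  # prediction error shrinks without backpropagating gradients
```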
arXiv Detail & Related papers (2021-07-10T19:02:27Z)