Towards Decoding Developer Cognition in the Age of AI Assistants
- URL: http://arxiv.org/abs/2501.02684v1
- Date: Sun, 05 Jan 2025 23:25:21 GMT
- Title: Towards Decoding Developer Cognition in the Age of AI Assistants
- Authors: Ebtesam Al Haque, Chris Brown, Thomas D. LaToza, Brittany Johnson
- Abstract summary: We propose a controlled observational study combining physiological measurements (EEG and eye tracking) with interaction data to examine developers' use of AI-assisted programming tools.
We will recruit professional developers to complete programming tasks both with and without AI assistance while measuring their cognitive load and task completion time.
- Score: 9.887133861477233
- License:
- Abstract: Background: The increasing adoption of AI assistants in programming has led to numerous studies exploring their benefits. While developers consistently report significant productivity gains from these tools, empirical measurements often show more modest improvements. While prior research has documented self-reported experiences with AI-assisted programming tools, little to no work has been done to understand their usage patterns and the actual cognitive load imposed in practice. Objective: In this exploratory study, we aim to investigate the role AI assistants play in developer productivity. Specifically, we are interested in how developers' expertise levels influence their AI usage patterns, and how these patterns impact their actual cognitive load and productivity during development tasks. We also seek to better understand how this relates to their perceived productivity. Method: We propose a controlled observational study combining physiological measurements (EEG and eye tracking) with interaction data to examine developers' use of AI-assisted programming tools. We will recruit professional developers to complete programming tasks both with and without AI assistance while measuring their cognitive load and task completion time. Through pre- and post-task questionnaires, we will collect data on perceived productivity and cognitive load using NASA-TLX.
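As a concrete illustration of how the NASA-TLX questionnaire data mentioned above could be scored, here is a minimal Python sketch (not taken from the paper; the subscale names and example values are hypothetical) that computes the raw and weighted workload scores from one participant's six subscale ratings and the standard 15 pairwise-comparison tally.

```python
from typing import Dict

# The six NASA-TLX subscales, each rated 0-100 by the participant.
SUBSCALES = (
    "mental_demand", "physical_demand", "temporal_demand",
    "performance", "effort", "frustration",
)

def raw_tlx(ratings: Dict[str, float]) -> float:
    """Raw TLX: unweighted mean of the six subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

def weighted_tlx(ratings: Dict[str, float], tally: Dict[str, int]) -> float:
    """Weighted TLX: each rating is weighted by the number of the 15
    pairwise comparisons in which the participant chose that subscale."""
    assert sum(tally.values()) == 15, "expected 15 pairwise comparisons"
    return sum(ratings[s] * tally.get(s, 0) for s in SUBSCALES) / 15

# Example post-task questionnaire for one participant (values hypothetical).
ratings = {"mental_demand": 70, "physical_demand": 10, "temporal_demand": 55,
           "performance": 30, "effort": 65, "frustration": 40}
tally = {"mental_demand": 5, "physical_demand": 0, "temporal_demand": 3,
         "performance": 2, "effort": 4, "frustration": 1}
print(f"raw TLX = {raw_tlx(ratings):.1f}, weighted TLX = {weighted_tlx(ratings, tally):.1f}")
```

Comparing such scores between the with-AI and without-AI conditions, alongside task completion time, is the kind of analysis the proposed study design would support.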
Related papers
- Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions.
Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes.
We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings, evaluating both proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z)
- KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [73.34893326181046]
Large language models (LLMs) usually rely on retrieval-augmented generation to exploit knowledge materials in an instant manner.
We propose KBAlign, an approach designed for efficient adaptation to downstream tasks involving knowledge bases.
Our method utilizes iterative training with self-annotated data such as Q&A pairs and revision suggestions, enabling the model to grasp the knowledge content efficiently.
arXiv Detail & Related papers (2024-11-22T08:21:03Z)
- Dear Diary: A randomized controlled trial of Generative AI coding tools in the workplace [2.5280615594444567]
Generative AI coding tools are relatively new, and their impact on developers extends beyond traditional coding metrics.
This study aims to illuminate developers' preexisting beliefs about generative AI tools, their self-perceptions, and how regular use of these tools may alter these beliefs.
Our findings reveal that the introduction and sustained use of generative AI coding tools significantly increases developers' perceptions of these tools as both useful and enjoyable.
arXiv Detail & Related papers (2024-10-24T00:07:27Z)
- How much does AI impact development speed? An enterprise-based randomized controlled trial [8.759453531975668]
We estimate the impact of three AI features on the time developers spent on a complex, enterprise-grade task.
We also found an interesting effect whereby developers who spent more hours per day on code-related activities were faster with AI.
arXiv Detail & Related papers (2024-10-16T18:31:14Z)
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- Information Seeking Using AI Assistants [9.887133861477233]
We conducted a mixed-method study to understand AI-assisted information seeking behavior of practitioners.
We found that developers are increasingly using AI tools to support their information seeking, citing increased efficiency as a key benefit.
Our efforts have implications for the effective integration of AI tools into developer workflows as information retrieval and learning aids.
arXiv Detail & Related papers (2024-08-07T18:27:13Z)
- Using AI-Based Coding Assistants in Practice: State of Affairs, Perceptions, and Ways Forward [9.177785129949]
We aim to better understand how specifically developers are using AI assistants.
To do so, we carried out a large-scale survey on how AI assistants are used in practice.
arXiv Detail & Related papers (2024-06-11T23:10:43Z)
- On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms [56.119374302685934]
There have been severe concerns over the trustworthiness of AI technologies.
Machine and deep learning algorithms depend heavily on the data used during their development.
We propose a framework to evaluate the datasets through a responsible rubric.
arXiv Detail & Related papers (2023-10-24T14:01:53Z)
- LLM-based Interaction for Content Generation: A Case Study on the Perception of Employees in an IT department [85.1523466539595]
This paper presents a questionnaire survey on the intention of employees at an IT company to use generative tools.
Our results indicate moderate acceptability of generative tools overall, although the more useful a tool is perceived to be, the higher the intention to use it seems to be.
Our analyses suggest that the frequency of use of generative tools is likely to be a key factor in understanding how employees perceive these tools in the context of their work.
arXiv Detail & Related papers (2023-04-18T15:35:43Z)
- A Large-Scale Survey on the Usability of AI Programming Assistants: Successes and Challenges [23.467373994306524]
In practice, developers do not accept AI programming assistants' initial suggestions at a high frequency.
To understand developers' practices while using these tools, we administered a survey to a large population of developers.
We found that developers are most motivated to use AI programming assistants because they help developers reduce keystrokes, finish programming tasks quickly, and recall syntax.
We also found that the most important reason developers do not use these tools is that the tools do not output code addressing certain functional or non-functional requirements.
arXiv Detail & Related papers (2023-03-30T03:21:53Z)
- Lifelong Learning Metrics [63.8376359764052]
The DARPA Lifelong Learning Machines (L2M) program seeks to yield advances in artificial intelligence (AI) systems.
This document outlines a formalism for constructing and characterizing the performance of agents performing lifelong learning scenarios.
arXiv Detail & Related papers (2022-01-20T16:29:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.