Developer Interaction Patterns with Proactive AI: A Five-Day Field Study
- URL: http://arxiv.org/abs/2601.10253v1
- Date: Thu, 15 Jan 2026 10:20:57 GMT
- Title: Developer Interaction Patterns with Proactive AI: A Five-Day Field Study
- Authors: Nadine Kuo, Agnia Sergeyuk, Valerie Chen, Maliheh Izadi
- Abstract summary: We present a field study of proactive AI assistance in professional developer workflows. We examined 229 AI interventions across 5,732 interaction points to understand how proactive suggestions are received. Our findings reveal systematic patterns in human receptivity to proactive suggestions.
- Score: 7.26202905367366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current in-IDE AI coding tools typically rely on time-consuming manual prompting and context management, whereas proactive alternatives that anticipate developer needs without explicit invocation remain underexplored. Understanding when humans are receptive to such proactive AI assistance during their daily work remains an open question in human-AI interaction research. We address this gap through a field study of proactive AI assistance in professional developer workflows. We present a five-day in-the-wild study with 15 developers who interacted with a proactive feature of an AI assistant integrated into a production-grade IDE that offers code quality suggestions based on in-IDE developer activity. We examined 229 AI interventions across 5,732 interaction points to understand how proactive suggestions are received across workflow stages, how developers experience them, and their perceived impact. Our findings reveal systematic patterns in human receptivity to proactive suggestions: interventions at workflow boundaries (e.g., post-commit) achieved 52% engagement rates, while mid-task interventions (e.g., on declined edit) were dismissed 62% of the time. Notably, well-timed proactive suggestions required significantly less interpretation time than reactive suggestions (45.4s versus 101.4s, W = 109.00, r = 0.533, p = 0.0016), indicating enhanced cognitive alignment. This study provides actionable implications for designing proactive coding assistants, including how to time interventions, align them with developer context, and strike a balance between AI agency and user control in production IDEs.
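The abstract reports a rank-based comparison of interpretation times (45.4s versus 101.4s, W = 109.00, r = 0.533, p = 0.0016). As a rough illustration of how such a statistic is derived, below is a minimal sketch of a Wilcoxon signed-rank W with a normal-approximation effect size r. Pairing proactive versus reactive times per developer is an assumption for illustration, not the paper's exact procedure, and the function name is hypothetical.

```python
import math

def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank statistic W+ for paired samples x, y,
    plus the effect size r = |z| / sqrt(n) from the normal approximation.
    Illustrative sketch only; not the paper's exact analysis."""
    # paired differences, dropping zeros as the classic test does
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    # rank absolute differences, assigning mean ranks to ties
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        mean_rank = (i + j + 1) / 2  # ranks are 1-based
        for k in range(i, j):
            ranks[order[k]] = mean_rank
        i = j
    # W+ is the sum of ranks belonging to positive differences
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    # normal approximation: z-score of W+, then r = |z| / sqrt(n)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    return w_plus, abs(z) / math.sqrt(n)
```

In practice one would use a library routine (e.g. SciPy's `wilcoxon`) with an exact p-value for small samples; the sketch only shows where W and r come from.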
Related papers
- Modeling Distinct Human Interaction in Web Agents [59.600507469754575]
We introduce the task of modeling human intervention to support collaborative web task execution. We identify four distinct patterns of user interaction with agents: hands-off supervision, hands-on oversight, collaborative task-solving, and full user takeover. We deploy these intervention-aware models in live web navigation agents and evaluate them in a user study, finding a 26.5% increase in user-rated agent usefulness.
arXiv Detail & Related papers (2026-02-19T18:11:28Z) - AgentIF-OneDay: A Task-level Instruction-Following Benchmark for General AI Agents in Daily Scenarios [49.90735676070039]
The capacity of AI agents to effectively handle tasks of increasing duration and complexity continues to grow. We argue that current evaluations prioritize increasing task difficulty without sufficiently addressing the diversity of agentic tasks. We propose AgentIF-OneDay, aimed at determining whether general users can utilize natural language instructions and AI agents to complete a diverse array of daily tasks.
arXiv Detail & Related papers (2026-01-28T13:49:18Z) - Code with Me or for Me? How Increasing AI Automation Transforms Developer Workflows [60.04362496037186]
We present the first controlled study of developer interactions with coding agents. We evaluate two leading copilot and agentic coding assistants. Our results show agents can assist developers in ways that surpass copilots.
arXiv Detail & Related papers (2025-07-10T20:12:54Z) - Echoes of AI: Investigating the Downstream Effects of AI Assistants on Software Maintainability [5.677464428950146]
This study investigates whether co-development with AI assistants affects software maintainability. AI-assisted development in Phase 1 led to a modest speedup in subsequent evolution. For habitual AI users, the mean speedup was 55.9%.
arXiv Detail & Related papers (2025-07-01T14:24:37Z) - Assistance or Disruption? Exploring and Evaluating the Design and Trade-offs of Proactive AI Programming Support [36.082282294551405]
We introduce and evaluate Codellaborator, a design probe agent that initiates programming assistance based on editor activities and task context. We find that proactive agents increase efficiency compared to a prompt-only paradigm, but also incur workflow disruptions.
arXiv Detail & Related papers (2025-02-25T21:37:25Z) - Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions. Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes. We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings by evaluating the performance of proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z) - Towards Decoding Developer Cognition in the Age of AI Assistants [9.887133861477233]
We propose a controlled observational study combining physiological measurements (EEG and eye tracking) with interaction data to examine developers' use of AI-assisted programming tools. We will recruit professional developers to complete programming tasks both with and without AI assistance while measuring their cognitive load and task completion time.
arXiv Detail & Related papers (2025-01-05T23:25:21Z) - How much does AI impact development speed? An enterprise-based randomized controlled trial [8.759453531975668]
We estimate the impact of three AI features on the time developers spent on a complex, enterprise-grade task.
We also found an interesting effect whereby developers who spend more hours on code-related activities per day were faster with AI.
arXiv Detail & Related papers (2024-10-16T18:31:14Z) - Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance [95.03771007780976]
We tackle the challenge of developing proactive agents capable of anticipating and initiating tasks without explicit human instructions. First, we collect real-world human activities to generate proactive task predictions. These predictions are labeled by human annotators as either accepted or rejected. The labeled data is used to train a reward model that simulates human judgment.
arXiv Detail & Related papers (2024-10-16T08:24:09Z) - Bridging Developer Needs and Feasible Features for AI Assistants in IDEs [6.05260196829912]
We interviewed 35 professional developers to uncover unmet needs and expectations. Our analysis revealed five key areas: Technology Improvement, Interaction, Alignment, Simplifying Skill Building, and Programming Tasks. The results demonstrate strong alignment between developers' needs and practitioners' judgment for features focused on implementation and context awareness.
arXiv Detail & Related papers (2024-10-11T10:02:52Z) - When to Ask for Help: Proactive Interventions in Autonomous
Reinforcement Learning [57.53138994155612]
A long-term goal of reinforcement learning is to design agents that can autonomously interact and learn in the world.
A critical challenge is the presence of irreversible states which require external assistance to recover from, such as when a robot arm has pushed an object off of a table.
We propose an algorithm that efficiently learns to detect and avoid states that are irreversible, and proactively asks for help in case the agent does enter them.
arXiv Detail & Related papers (2022-10-19T17:57:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.