Are We All Using Agents the Same Way? An Empirical Study of Core and Peripheral Developers' Use of Coding Agents
- URL: http://arxiv.org/abs/2601.20106v1
- Date: Tue, 27 Jan 2026 22:50:01 GMT
- Title: Are We All Using Agents the Same Way? An Empirical Study of Core and Peripheral Developers' Use of Coding Agents
- Authors: Shamse Tasnim Cynthia, Joy Krishan Das, Banani Roy
- Abstract summary: We study how core and peripheral developers use, review, modify, and verify agent-generated contributions prior to acceptance. A subset of peripheral developers use agents more often, delegating tasks evenly across bug fixing, feature addition, documentation, and testing. In contrast, core developers focus more on documentation and testing, yet their agentic PRs are frequently merged into the main/master branch.
- Score: 4.744786007044749
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous AI agents are transforming software development and redefining how developers collaborate with AI. Prior research shows that the adoption and use of AI-powered tools differ between core and peripheral developers. However, it remains unclear how this dynamic unfolds in the emerging era of autonomous coding agents. In this paper, we present the first empirical study of 9,427 agentic PRs, examining how core and peripheral developers use, review, modify, and verify agent-generated contributions prior to acceptance. Through a mix of qualitative and quantitative analysis, we make four key contributions. First, a subset of peripheral developers use agents more often, delegating tasks evenly across bug fixing, feature addition, documentation, and testing. In contrast, core developers focus more on documentation and testing, yet their agentic PRs are frequently merged into the main/master branch. Second, core developers engage slightly more in review discussions than peripheral developers, and both groups focus on evolvability issues. Third, agentic PRs are less likely to be modified, but when they are, both groups commonly perform refactoring. Finally, peripheral developers are more likely to merge without running CI checks, whereas core developers more consistently require passing verification before acceptance. Our analysis offers a comprehensive view of how developer experience shapes the integration of agent-generated contributions and offers insights for both peripheral and core developers on how to collaborate effectively with coding agents.
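The paper does not describe its mining tooling, but its verification finding (whether CI checks ran and passed before an agentic PR was accepted) maps onto data GitHub exposes directly. The Python sketch below shows one way to pull that signal for a single PR via the GitHub REST API; it is illustrative only, the `octocat`/`hello-world` repository and PR number are hypothetical placeholders, and a `GITHUB_TOKEN` environment variable is assumed.

```python
"""Sketch: did an agentic PR's head commit run CI checks before acceptance?

Illustrative only -- the paper does not publish its tooling; the repository
and PR number used in __main__ are hypothetical placeholders.
"""
import os
import requests

API = "https://api.github.com"
HEADERS = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
}

def ci_status(owner: str, repo: str, pr_number: int) -> tuple[bool, bool, bool]:
    """Return (merged, ci_ran, ci_passed) for one pull request."""
    pr = requests.get(f"{API}/repos/{owner}/{repo}/pulls/{pr_number}",
                      headers=HEADERS, timeout=30).json()
    head_sha = pr["head"]["sha"]
    # The Checks API lists all check runs recorded against the head commit.
    runs = requests.get(f"{API}/repos/{owner}/{repo}/commits/{head_sha}/check-runs",
                        headers=HEADERS, timeout=30).json()["check_runs"]
    ci_ran = len(runs) > 0
    ci_passed = ci_ran and all(r["conclusion"] == "success" for r in runs)
    return pr.get("merged", False), ci_ran, ci_passed

if __name__ == "__main__":
    # Hypothetical example repository and PR number.
    print(ci_status("octocat", "hello-world", 1))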
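```

Aggregating this triple over a corpus of agentic PRs, split by whether the author is a core or peripheral contributor, yields the kind of comparison the abstract reports (peripheral developers merging without CI more often).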
Related papers
- Where Do AI Coding Agents Fail? An Empirical Study of Failed Agentic Pull Requests in GitHub [5.808464460707249]
We conduct a large-scale study of 33k agent-authored PRs made by five coding agents across GitHub. We first quantitatively characterize merged and not-merged PRs along four broad dimensions. Not-merged PRs tend to involve larger code changes, touch more files, and often do not pass the project's CI/CD pipeline validation.
arXiv Detail & Related papers (2026-01-21T17:12:46Z)
- On Autopilot? An Empirical Study of Human-AI Teaming and Review Practices in Open Source [11.412808537439973]
We investigated project-level guidelines and developers' interactions with AI-assisted pull requests (PRs). We found that over 67.5% of AI-co-authored PRs originate from contributors without prior code ownership. In contrast to human-created PRs where non-owner developers receive the most feedback, AI-co-authored PRs from non-owners receive the least.
arXiv Detail & Related papers (2026-01-20T09:09:53Z)
- AI IDEs or Autonomous Agents? Measuring the Impact of Coding Agents on Software Development [12.50615284537175]
Large language model (LLM) based coding agents increasingly act as autonomous contributors that generate and merge pull requests. We present a longitudinal causal study of agent adoption in open-source repositories using staggered difference-in-differences with matched controls (a minimal sketch of this estimator follows the list below).
arXiv Detail & Related papers (2026-01-20T04:51:56Z)
- From Correctness to Collaboration: Toward a Human-Centered Framework for Evaluating AI Agent Behavior in Software Engineering [7.402388519535592]
Current benchmarks, focused on code correctness, fail to capture the nuanced, interactive behaviors essential for successful human-AI partnership. We present a foundational taxonomy of desirable agent behaviors for enterprise software engineering. We also introduce the Context-Adaptive Behavior (CAB) Framework.
arXiv Detail & Related papers (2025-12-29T20:18:57Z)
- An Empirical Study of Agent Developer Practices in AI Agent Frameworks [59.862193600499914]
The rise of large language models (LLMs) has sparked a surge of interest in agents, leading to the rapid growth of agent frameworks. Despite the widespread use of agent frameworks, their practical applications and how they influence the agent development process remain underexplored. More than 80% of developers report difficulties in identifying the frameworks that best meet their specific development requirements.
arXiv Detail & Related papers (2025-12-01T17:52:15Z)
- Agent0: Unleashing Self-Evolving Agents from Zero Data via Tool-Integrated Reasoning [84.70211451226835]
Large Language Model (LLM) agents are constrained by a dependency on human-curated data. We introduce Agent0, a fully autonomous framework that evolves high-performing agents without external data. Agent0 substantially boosts reasoning capabilities, improving the Qwen3-8B-Base model by 18% on mathematical reasoning and 24% on general reasoning benchmarks.
arXiv Detail & Related papers (2025-11-20T05:01:57Z)
- Holistic Agent Leaderboard: The Missing Infrastructure for AI Agent Evaluation [87.47155146067962]
We provide a standardized evaluation harness that orchestrates parallel evaluations across hundreds of tasks. We conduct a three-dimensional analysis spanning models, scaffolds, and benchmarks. Our analysis reveals surprising insights, such as higher reasoning effort reducing accuracy in the majority of runs.
arXiv Detail & Related papers (2025-10-13T22:22:28Z)
- Cognitive Kernel-Pro: A Framework for Deep Research Agents and Agent Foundation Models Training [67.895981259683]
General AI agents are increasingly recognized as foundational frameworks for the next generation of artificial intelligence. Current agent systems are either closed-source or heavily reliant on a variety of paid APIs and proprietary tools. We present Cognitive Kernel-Pro, a fully open-source and (to the maximum extent) free multi-module agent framework.
arXiv Detail & Related papers (2025-08-01T08:11:31Z)
- Code with Me or for Me? How Increasing AI Automation Transforms Developer Workflows [60.04362496037186]
We present the first controlled study of developer interactions with coding agents. We evaluate two leading copilot and agentic coding assistants. Our results show agents can assist developers in ways that surpass copilots.
arXiv Detail & Related papers (2025-07-10T20:12:54Z)
- From Reproduction to Replication: Evaluating Research Agents with Progressive Code Masking [48.90371827091671]
AutoExperiment is a benchmark that evaluates AI agents' ability to implement and run machine learning experiments. We evaluate state-of-the-art agents and find that performance degrades rapidly as $n$ increases. Our findings highlight critical challenges in long-horizon code generation, context retrieval, and autonomous experiment execution.
arXiv Detail & Related papers (2025-06-24T15:39:20Z)
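The staggered difference-in-differences design named in the adoption study above ("AI IDEs or Autonomous Agents?") can be made concrete with a small two-way fixed-effects regression. The sketch below is illustrative only: it uses synthetic panel data and `statsmodels`, since that paper's actual estimator, outcome variables, and matching procedure are not given here. Repositories adopt an agent at staggered periods, never-treated repositories serve as controls, and the `treated` coefficient recovers the adoption effect.

```python
"""Sketch: staggered difference-in-differences via two-way fixed effects.
Synthetic data; the cited study's real outcomes and matching are not shown."""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_repos, n_periods = 60, 12

# Each repository adopts a coding agent at a staggered period; roughly half
# never adopt (period 99 is effectively "never") and act as controls.
adopt = rng.integers(4, 10, size=n_repos)
adopt[rng.random(n_repos) < 0.5] = 99

rows = []
for r in range(n_repos):
    repo_effect = rng.normal(0.0, 1.0)          # time-invariant repo quality
    for t in range(n_periods):
        treated = int(t >= adopt[r])            # 1 once the repo has adopted
        prs = 2.0 + repo_effect + 0.1 * t + 0.8 * treated + rng.normal(0, 0.5)
        rows.append({"repo": r, "period": t, "treated": treated, "prs": prs})
df = pd.DataFrame(rows)

# Two-way fixed effects: repo and period dummies absorb level differences;
# the `treated` coefficient is the difference-in-differences estimate.
twfe = smf.ols("prs ~ treated + C(repo) + C(period)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["repo"]})
print(f"estimated adoption effect: {twfe.params['treated']:.3f} "
      f"(SE {twfe.bse['treated']:.3f}, true effect 0.8)")
```

A real analysis would replace the synthetic outcome with repository activity metrics and pair treated repositories with matched controls before estimation, as that abstract indicates.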