Why Human Guidance Matters in Collaborative Vibe Coding
- URL: http://arxiv.org/abs/2602.10473v1
- Date: Wed, 11 Feb 2026 03:24:57 GMT
- Title: Why Human Guidance Matters in Collaborative Vibe Coding
- Authors: Haoyu Hu, Raja Marjieh, Katherine M Collins, Chenyi Li, Thomas L. Griffiths, Ilia Sucholutsky, Nori Jacoby
- Abstract summary: We study the impact of vibe coding on productivity and collaboration. We show that people provide uniquely effective high-level instructions for vibe coding. We also demonstrate that hybrid systems perform best when humans retain directional control.
- Score: 24.04414458645034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Writing code has been one of the most transformative ways for human societies to translate abstract ideas into tangible technologies. Modern AI is transforming this process by enabling experts and non-experts alike to generate code without actually writing code, but instead, through natural language instructions, or "vibe coding". While increasingly popular, the cumulative impact of vibe coding on productivity and collaboration, as well as the role of humans in this process, remains unclear. Here, we introduce a controlled experimental framework for studying collaborative vibe coding and use it to compare human-led, AI-led, and hybrid groups. Across 16 experiments involving 604 human participants, we show that people provide uniquely effective high-level instructions for vibe coding across iterations, whereas AI-provided instructions often result in performance collapse. We further demonstrate that hybrid systems perform best when humans retain directional control (providing the instructions), while evaluation is delegated to AI.
Related papers
- Code for Machines, Not Just Humans: Quantifying AI-Friendliness with Code Health Metrics [6.108440460022983]
We investigate the concept of "AI-friendly code" via a dataset of 5,000 Python files from competitive programming. Our findings confirm that human-friendly code is also more compatible with AI tooling. These results suggest that organizations can use CodeHealth to guide where AI interventions are lower risk and where additional human oversight is warranted.
arXiv Detail & Related papers (2026-01-05T15:23:55Z)
- "Can you feel the vibes?": An exploration of novice programmer engagement with vibe coding [42.82674998306379]
"Vibe coding" refers to creating software via natural language prompts rather than direct code authorship. This paper reports on a one-day educational hackathon investigating how novice programmers and mixed-experience teams engage with vibe coding.
arXiv Detail & Related papers (2025-12-02T13:32:23Z)
- Vibe Coding: Is Human Nature the Ghost in the Machine? [0.0]
We analyzed three "vibe coding" sessions between a human product lead and an AI software engineer. We investigated similarities and differences in team dynamics, communication patterns, and development outcomes. To our surprise, later conversations revealed that the AI agent had systematically misrepresented its accomplishments.
arXiv Detail & Related papers (2025-08-28T15:48:48Z)
- Code with Me or for Me? How Increasing AI Automation Transforms Developer Workflows [60.04362496037186]
We present the first controlled study of developer interactions with coding agents. We evaluate two leading copilot and agentic coding assistants. Our results show agents can assist developers in ways that surpass copilots.
arXiv Detail & Related papers (2025-07-10T20:12:54Z)
- From Teacher to Colleague: How Coding Experience Shapes Developer Perceptions of AI Tools [0.0]
AI-assisted development tools promise productivity gains and improved code quality, yet their adoption among developers remains inconsistent. We analyze survey data from 3,380 developers to examine how coding experience relates to AI awareness, adoption, and the roles developers assign to AI in their workflow.
arXiv Detail & Related papers (2025-04-08T08:58:06Z)
- BO-Muse: A human expert and AI teaming framework for accelerated experimental design [58.61002520273518]
Our algorithm lets the human expert take the lead in the experimental process.
We show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone.
arXiv Detail & Related papers (2023-03-03T02:56:05Z)
- "No, to the Right" -- Online Language Corrections for Robotic Manipulation via Shared Autonomy [70.45420918526926]
We present LILAC, a framework for incorporating and adapting to natural language corrections online during execution.
Instead of discrete turn-taking between a human and robot, LILAC splits agency between the human and robot.
We show that our corrections-aware approach obtains higher task completion rates, and is subjectively preferred by users.
arXiv Detail & Related papers (2023-01-06T15:03:27Z)
- Human Decision Makings on Curriculum Reinforcement Learning with Difficulty Adjustment [52.07473934146584]
We guide the curriculum reinforcement learning results towards a preferred performance level that is neither too hard nor too easy via learning from the human decision process.
Our system is highly parallelizable, making it possible for a human to train large-scale reinforcement learning applications.
It shows that reinforcement learning performance can successfully adjust in sync with the human-desired difficulty level.
arXiv Detail & Related papers (2022-08-04T23:53:51Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.