Collaborative Document Editing with Multiple Users and AI Agents
- URL: http://arxiv.org/abs/2509.11826v1
- Date: Mon, 15 Sep 2025 12:11:59 GMT
- Title: Collaborative Document Editing with Multiple Users and AI Agents
- Authors: Florian Lehmann, Krystsina Shauchenka, Daniel Buschek
- Abstract summary: We propose integrating AI agents directly into collaborative writing environments. Our prototype makes AI use transparent and customisable through two new shared objects: agent profiles and tasks.
- Score: 19.340967112148665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current AI writing support tools are largely designed for individuals, complicating collaboration when co-writers must leave the shared workspace to use AI and then communicate and reintegrate results. We propose integrating AI agents directly into collaborative writing environments. Our prototype makes AI use transparent and customisable through two new shared objects: agent profiles and tasks. Agent responses appear in the familiar comment feature. In a user study (N=30), 14 teams worked on writing projects during one week. Interaction logs and interviews show that teams incorporated agents into existing norms of authorship, control, and coordination, rather than treating them as team members. Agent profiles were viewed as personal territory, while created agents and outputs became shared resources. We discuss implications for team-based AI interaction, highlighting opportunities and boundaries for treating AI as a shared resource in collaborative work.
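The abstract describes two new shared objects, agent profiles and tasks, with agent responses surfacing as comments. A minimal sketch of how such a data model could look; all class and field names here are illustrative assumptions, not the authors' actual schema:

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """A shared, customisable description of an AI agent.

    The study found profiles were viewed as personal territory,
    so we track who created each one.
    """
    name: str
    instructions: str   # e.g. tone, role, constraints
    created_by: str

@dataclass
class Task:
    """A shared request assigned to an agent profile for a text span."""
    profile: AgentProfile
    prompt: str
    anchor: tuple       # (start, end) offsets of the targeted document text

@dataclass
class Comment:
    """Agent output posted into the familiar comment feature."""
    author: str
    body: str

def run_task(task: Task, generate=lambda p: f"[agent draft for: {p}]") -> Comment:
    # Posting the result as a comment rather than editing in place keeps
    # AI use visible to all co-writers; `generate` stands in for a model call.
    return Comment(author=task.profile.name, body=generate(task.prompt))
```

In this sketch, profiles and created outputs are ordinary shared objects in the document, which mirrors the paper's finding that created agents and outputs became shared resources.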
Related papers
- DIG to Heal: Scaling General-purpose Agent Collaboration via Explainable Dynamic Decision Paths [29.11412449913759]
We study multi-agent systems composed of general-purpose large language model (LLM) agents that operate without predefined roles, control flow, or communication constraints.
We introduce the Dynamic Interaction Graph (DIG), which captures emergent collaboration as a time-evolving causal network of agent activations and interactions.
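The DIG is summarised as a time-evolving causal network of agent activations and interactions. One simple way to represent such a structure is as timestamped directed edges; the representation below is an assumption for illustration, not the paper's implementation:

```python
from collections import defaultdict

class DynamicInteractionGraph:
    """Sketch of a time-evolving interaction network between agents."""

    def __init__(self):
        # Each event is a (time, source_agent, target_agent) interaction.
        self.edges = []

    def record(self, t, src, dst):
        """Log that agent `src` interacted with agent `dst` at time `t`."""
        self.edges.append((t, src, dst))

    def snapshot(self, until):
        """Adjacency of all interactions observed up to time `until`,
        letting one inspect how collaboration structure emerges over time."""
        adj = defaultdict(set)
        for t, src, dst in self.edges:
            if t <= until:
                adj[src].add(dst)
        return dict(adj)
```

Taking snapshots at successive times exposes the graph's evolution, which is the property the DIG is built to analyse.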
arXiv Detail & Related papers (2026-02-27T20:59:37Z)
- CooperBench: Why Coding Agents Cannot be Your Teammates Yet [44.06715229961526]
CooperBench is a benchmark of over 600 collaborative coding tasks across 12 libraries in 4 programming languages.
Agents achieve on average 30% lower success rates when working together compared to performing both tasks individually.
Our analysis reveals three key issues: (1) communication channels become jammed with vague, ill-timed, and inaccurate messages; (2) even with effective communication, agents deviate from their commitments; and (3) agents often hold incorrect expectations about others' plans and communication.
arXiv Detail & Related papers (2026-01-19T18:48:37Z)
- Code with Me or for Me? How Increasing AI Automation Transforms Developer Workflows [60.04362496037186]
We present the first controlled study of developer interactions with coding agents.
We evaluate two leading copilot and agentic coding assistants.
Our results show agents can assist developers in ways that surpass copilots.
arXiv Detail & Related papers (2025-07-10T20:12:54Z)
- Prototypical Human-AI Collaboration Behaviors from LLM-Assisted Writing in the Wild [10.23533525266164]
Large language models (LLMs) are used in complex writing workflows, where users steer generations to better fit their needs.
We conduct a large-scale analysis of this collaborative behavior for users engaged in writing tasks in the wild.
We identify prototypical behaviors in how users interact with LLMs in prompts following their original request.
arXiv Detail & Related papers (2025-05-21T21:13:01Z)
- Collaborative Gym: A Framework for Enabling and Evaluating Human-Agent Collaboration [50.657070334404835]
Collaborative Gym is a framework enabling asynchronous, tripartite interaction among agents, humans, and task environments.
We instantiate Co-Gym with three representative tasks in both simulated and real-world conditions.
Our findings reveal that collaborative agents consistently outperform their fully autonomous counterparts in task performance.
arXiv Detail & Related papers (2024-12-20T09:21:15Z)
- TheAgentCompany: Benchmarking LLM Agents on Consequential Real World Tasks [55.03911355902567]
We introduce TheAgentCompany, a benchmark for evaluating AI agents that interact with the world in similar ways to those of a digital worker.
We find that the most competitive agent can complete 30% of tasks autonomously.
This paints a nuanced picture of task automation when simulating LM agents in a setting resembling a real workplace.
arXiv Detail & Related papers (2024-12-18T18:55:40Z)
- Cocoa: Co-Planning and Co-Execution with AI Agents [31.695129948650287]
We present Cocoa, a system that introduces a novel design pattern -- interactive plans -- for collaborating with an AI agent.
Cocoa builds on interaction designs from computational notebooks and document editors to support flexible delegation of agency.
Using scientific research as a sample domain, our lab and field deployment studies found that Cocoa improved agent steerability without sacrificing ease-of-use.
arXiv Detail & Related papers (2024-12-14T23:59:42Z)
- ChatCollab: Exploring Collaboration Between Humans and AI Agents in Software Teams [1.3967206132709542]
ChatCollab's novel architecture allows agents - human or AI - to join collaborations in any role.
Using software engineering as a case study, we find that our AI agents successfully identify their roles and responsibilities.
In relation to three prior multi-agent AI systems for software development, we find ChatCollab AI agents produce comparable or better software in an interactive game development task.
arXiv Detail & Related papers (2024-12-02T21:56:46Z)
- Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task [56.92961847155029]
Theory of Mind (ToM) significantly impacts human collaboration and communication as a crucial capability to understand others.
Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
arXiv Detail & Related papers (2024-09-13T13:19:48Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society [58.04479313658851]
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our contributions include introducing this framework and offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z)
- Learning to Cooperate with Unseen Agents via Meta-Reinforcement Learning [4.060731229044571]
The ad hoc teamwork problem describes situations where an agent must cooperate with previously unseen agents to achieve a common goal.
Cooperative skills could be built into an agent by using domain knowledge to design its behavior.
We apply a meta-reinforcement learning (meta-RL) formulation in the context of the ad hoc teamwork problem.
arXiv Detail & Related papers (2021-11-05T12:01:28Z)
- How AI Developers Overcome Communication Challenges in a Multidisciplinary Team: A Case Study [11.633108017016985]
The development of AI applications is a multidisciplinary effort, involving multiple roles collaborating with the AI developers.
During these collaborations, there is a knowledge mismatch between AI developers, who are skilled in data science, and external stakeholders, who typically are not.
This difference leads to communication gaps, and the onus falls on AI developers to explain data science concepts to their collaborators.
arXiv Detail & Related papers (2021-01-13T19:33:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.