A Task-Interdependency Model of Complex Collaboration Towards
Human-Centered Crowd Work
- URL: http://arxiv.org/abs/2309.00160v1
- Date: Thu, 31 Aug 2023 22:37:47 GMT
- Title: A Task-Interdependency Model of Complex Collaboration Towards
Human-Centered Crowd Work
- Authors: David T. Lee and Christos A. Makridis
- Abstract summary: We present a model centered on interdependencies, a phenomenon well understood to be at the core of collaboration.
We use it to explain challenges to scaling complex collaborative work, underscore the importance of expert workers, and explore the relationship between coordination intensity and occupational wages.
- Score: 0.5439020425818999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Models of crowdsourcing and human computation often assume that individuals
independently carry out small, modular tasks. However, while these models have
successfully shown how crowds can accomplish significant objectives, they can
inadvertently advance a less-than-human view of crowd workers and fail to
capture the unique human capacity for complex collaborative work. We present a
model centered on interdependencies -- a phenomenon well understood to be at
the core of collaboration -- that allows one to formally reason about diverse
challenges to complex collaboration. Our model represents tasks as an
interdependent collection of subtasks, formalized as a task graph. We use it to
explain challenges to scaling complex collaborative work, underscore the
importance of expert workers, reveal critical factors for learning on the job,
and explore the relationship between coordination intensity and occupational
wages. Using data from O*NET and the Bureau of Labor Statistics, we introduce
an index of occupational coordination intensity to validate our theoretical
predictions. We present preliminary evidence that occupations with greater
coordination intensity are less exposed to displacement by AI, and discuss
opportunities for models that emphasize the collaborative capacities of human
workers, bridge models of crowd work and traditional work, and promote AI in
roles augmenting human collaboration.
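The abstract's central object is a task graph: a task represented as subtasks connected by dependency edges. The paper's exact formalization is not given here, so the following is a minimal illustrative sketch under assumed semantics (an edge u -> v means v depends on u's output); the class name, the schedule method, and the edge-count proxy for coordination load are all hypothetical.

```python
from collections import defaultdict

class TaskGraph:
    """Hypothetical sketch: subtasks as nodes, interdependencies as edges."""

    def __init__(self):
        self.deps = defaultdict(set)  # subtask -> set of prerequisite subtasks
        self.nodes = set()

    def add_subtask(self, name, depends_on=()):
        self.nodes.add(name)
        for d in depends_on:
            self.nodes.add(d)
            self.deps[name].add(d)

    def execution_order(self):
        """A topological order: one valid schedule respecting all dependencies."""
        order, visited = [], set()

        def visit(n):
            if n in visited:
                return
            visited.add(n)
            for d in self.deps[n]:
                visit(d)
            order.append(n)

        for n in sorted(self.nodes):
            visit(n)
        return order

    def coordination_edges(self):
        """Total dependency edges: a crude proxy for coordination load."""
        return sum(len(v) for v in self.deps.values())


g = TaskGraph()
g.add_subtask("outline")
g.add_subtask("draft", depends_on=["outline"])
g.add_subtask("figures", depends_on=["outline"])
g.add_subtask("revise", depends_on=["draft", "figures"])
print(g.execution_order())    # outline precedes draft/figures, which precede revise
print(g.coordination_edges())  # -> 4
```

Under this reading, "coordination intensity" for an occupation could be thought of as how dense and non-decomposable these dependency edges are, though the paper's actual index is built from O*NET measures rather than explicit graphs.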
Related papers
- Cross-Task Experiential Learning on LLM-based Multi-Agent Collaboration [63.90193684394165]
We introduce multi-agent cross-task experiential learning (MAEL), a novel framework that endows LLM-driven agents with explicit cross-task learning and experience accumulation. During the experiential learning phase, we quantify the quality of each step in the task-solving workflow and store the resulting rewards. During inference, agents retrieve high-reward, task-relevant experiences as few-shot examples to enhance the effectiveness of each reasoning step.
arXiv Detail & Related papers (2025-05-29T07:24:37Z)
- Partner Modelling Emerges in Recurrent Agents (But Only When It Matters) [4.845103288370202]
We train simple model-free RNN agents to collaborate with a population of diverse partners. We find structured partner modelling emerges when agents can influence partner behaviour by controlling task allocation. Our results show that partner modelling can arise spontaneously in model-free agents -- but only under environmental conditions that impose the right kind of social pressure.
arXiv Detail & Related papers (2025-05-22T22:24:12Z)
- Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination [37.90912492084769]
We study how reinforcement learning on a distribution of environments with a single partner enables learning general cooperative skills.
We introduce two Jax-based, procedural generators that create billions of solvable coordination challenges.
Our findings suggest that learning to collaborate across many unique scenarios encourages agents to develop general norms.
arXiv Detail & Related papers (2025-04-17T07:41:25Z)
- Algorithmic Prompt Generation for Diverse Human-like Teaming and Communication with Large Language Models [14.45823275027527]
Quality Diversity (QD) optimization has been shown to be capable of generating diverse Reinforcement Learning (RL) agent behavior.
We first show, through a human-subjects experiment, that humans exhibit diverse coordination and communication behavior in this domain.
We then show that our approach can effectively replicate trends from human teaming data and also capture behaviors that are not easily observed.
arXiv Detail & Related papers (2025-04-04T23:09:40Z)
- Collaborative Gym: A Framework for Enabling and Evaluating Human-Agent Collaboration [51.452664740963066]
Collaborative Gym is a framework enabling asynchronous, tripartite interaction among agents, humans, and task environments.
We instantiate Co-Gym with three representative tasks in both simulated and real-world conditions.
Our findings reveal that collaborative agents consistently outperform their fully autonomous counterparts in task performance.
arXiv Detail & Related papers (2024-12-20T09:21:15Z)
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
- MultiAgent Collaboration Attack: Investigating Adversarial Attacks in Large Language Model Collaborations via Debate [24.92465108034783]
Large Language Models (LLMs) have shown exceptional results on current benchmarks when working individually.
The advancement in their capabilities, along with a reduction in parameter size and inference times, has facilitated the use of these models as agents.
We evaluate the behavior of a network of models collaborating through debate under the influence of an adversary.
arXiv Detail & Related papers (2024-06-20T20:09:37Z)
- Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing agents in multi-agent collaboration.
arXiv Detail & Related papers (2024-06-11T11:02:04Z)
- Online Learning of Human Constraints from Feedback in Shared Autonomy [25.173950581816086]
Real-time collaboration with humans poses challenges due to the different behavior patterns of humans resulting from diverse physical constraints.
We learn a human constraints model that considers the diverse behaviors of different human operators.
We propose an augmentative assistant agent capable of learning and adapting to human physical constraints.
arXiv Detail & Related papers (2024-03-05T13:53:48Z)
- Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Models (LLMs)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
arXiv Detail & Related papers (2024-02-20T11:03:36Z)
- Efficient Human-AI Coordination via Preparatory Language-based Convention [17.840956842806975]
Existing methods for human-AI coordination typically train an agent to coordinate with a diverse set of policies or with human models fitted from real human data.
We propose employing the large language model (LLM) to develop an action plan that effectively guides both human and AI.
Our method achieves better alignment with human preferences and an average performance improvement of 15% compared to the state-of-the-art.
arXiv Detail & Related papers (2023-11-01T10:18:23Z)
- Increased Complexity of a Human-Robot Collaborative Task May Increase the Need for a Socially Competent Robot [0.0]
This study investigates how task complexity affects human perception and acceptance of their robot partner.
We propose a human-based robot control model for obstacle avoidance that can account for the leader-follower dynamics.
arXiv Detail & Related papers (2022-07-11T11:43:27Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
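The snippet above describes separating per-agent marginals from a copula that carries only the dependence structure. A minimal sketch of that idea, assuming a Gaussian copula with an illustrative correlation value; the marginals and the correlation are assumptions for demonstration, not taken from the paper.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Gaussian copula: sample correlated normals, then push each coordinate
# through the standard normal CDF to get correlated uniforms.
rho = 0.8  # illustrative dependence strength
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=10_000)
u = np.vectorize(NormalDist().cdf)(z)  # uniform marginals, dependence preserved

# Each agent then applies its own (illustrative) marginal distribution:
a_actions = u[:, 0]               # agent A: uniform on [0, 1]
b_actions = -np.log(1.0 - u[:, 1])  # agent B: exponential marginal

# The marginal transforms differ per agent, but the rank-level coordination
# between the two action streams comes entirely from the copula.
```

This mirrors the stated decomposition: marginals capture local behaviour of each agent, while the copula alone encodes how their behaviours co-vary.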
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
- LEMMA: A Multi-view Dataset for Learning Multi-agent Multi-task Activities [119.88381048477854]
We introduce the LEMMA dataset to provide a single home to address missing dimensions with meticulously designed settings.
We densely annotate the atomic-actions with human-object interactions to provide ground-truths of the compositionality, scheduling, and assignment of daily activities.
We hope this effort would drive the machine vision community to examine goal-directed human activities and further study the task scheduling and assignment in the real world.
arXiv Detail & Related papers (2020-07-31T00:13:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.