Evolving Dyadic Strategies for a Cooperative Physical Task
- URL: http://arxiv.org/abs/2004.10558v1
- Date: Wed, 22 Apr 2020 13:23:12 GMT
- Title: Evolving Dyadic Strategies for a Cooperative Physical Task
- Authors: Saber Sheybani, Eduardo J. Izquierdo, Eatai Roth
- Abstract summary: We evolve simulated agents to explore a space of feasible role-switching policies.
Applying these switching policies in a cooperative manual task, agents process visual and haptic cues to decide when to switch roles.
We find that the best-performing dyads exhibit high temporal coordination (anti-synchrony).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many cooperative physical tasks require that individuals play specialized
roles (e.g., leader-follower). Humans are adept cooperators, negotiating these
roles and transitions between roles innately. Yet how roles are delegated and
reassigned is not well understood. Using a genetic algorithm, we evolve
simulated agents to explore a space of feasible role-switching policies.
Applying these switching policies in a cooperative manual task, agents process
visual and haptic cues to decide when to switch roles. We then analyze the
evolved virtual population for attributes typically associated with
cooperation: load sharing and temporal coordination. We find that the best-performing
dyads exhibit high temporal coordination (anti-synchrony), and that anti-synchrony in
turn correlates with symmetry between the parameters of the cooperating agents. These
simulations furnish hypotheses as to how human cooperators might mediate roles in
dyadic tasks.
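The abstract describes the pipeline only at a high level: evolve switching policies with a genetic algorithm, simulate the dyad on a cooperative manual task, then analyze the evolved population. Below is a minimal, hypothetical Python sketch of that kind of setup, assuming a toy tracking task, a one-threshold role-switching rule per agent, and truncation selection; none of these choices comes from the paper itself.

```python
import random

# Hypothetical toy model (NOT the authors' simulation): each agent switches
# between leader and follower roles when a shared error cue exceeds its
# personal threshold, and a genetic algorithm evolves the two thresholds.

def simulate_dyad(theta_a, theta_b, steps=200, seed=0):
    """Return mean tracking error for a dyad moving a shared load
    toward a square-wave target; lower is better."""
    rng = random.Random(seed)
    pos, vel = 0.0, 0.0
    roles = [1, 0]                      # agent A starts as leader, B as follower
    total_error = 0.0
    for t in range(steps):
        target = 1.0 if (t // 50) % 2 == 0 else -1.0
        forces = []
        for i, theta in enumerate((theta_a, theta_b)):
            cue = abs(target - pos)          # stand-in for a visual/haptic cue
            if cue > theta:                  # cue crosses this agent's threshold:
                roles[i] = 1 - roles[i]      # switch leader/follower roles
            gain = 1.0 if roles[i] else 0.2  # leaders push harder than followers
            forces.append(gain * (target - pos) + rng.gauss(0.0, 0.05))
        vel = 0.8 * vel + 0.1 * sum(forces)
        pos += vel
        total_error += abs(target - pos)
    return total_error / steps

def evolve(pop_size=30, generations=100):
    """Truncation-selection GA over per-agent switching thresholds."""
    pop = [[random.uniform(0.1, 2.0), random.uniform(0.1, 2.0)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda g: simulate_dyad(*g))
        elite = ranked[: pop_size // 2]
        # Refill the population by mutating randomly chosen elite genotypes.
        pop = elite + [
            [max(0.01, x + random.gauss(0.0, 0.1)) for x in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return min(pop, key=lambda g: simulate_dyad(*g))

if __name__ == "__main__":
    theta_a, theta_b = evolve()
    print(f"best thresholds: A={theta_a:.2f}, B={theta_b:.2f}")
```

In a toy like this, the gap between the two evolved thresholds is one concrete notion of the parameter (a)symmetry that the abstract reports as correlated with anti-synchrony.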
Related papers
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Deconstructing Cooperation and Ostracism via Multi-Agent Reinforcement Learning [3.3751859064985483]
We show that network rewiring facilitates mutual cooperation even when one agent always offers cooperation.
We also find that ostracism alone is not sufficient for cooperation to emerge.
Our findings provide insights into the conditions and mechanisms necessary for the emergence of cooperation.
arXiv Detail & Related papers (2023-10-06T23:18:55Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- A Unified Architecture for Dynamic Role Allocation and Collaborative Task Planning in Mixed Human-Robot Teams [0.0]
We present a novel architecture for dynamic role allocation and collaborative task planning in a mixed human-robot team of arbitrary size.
The architecture capitalizes on a centralized, reactive, and modular task-agnostic planning method based on Behavior Trees (BTs).
Using different metrics as the mixed-integer linear programming (MILP) cost allows the architecture to favor different aspects of the collaboration.
arXiv Detail & Related papers (2023-01-19T12:30:56Z)
- The art of compensation: how hybrid teams solve collective risk dilemmas [6.081979963786028]
We study the evolutionary dynamics of cooperation in a hybrid population made of both adaptive and fixed-behavior agents.
We show how the former learn to adapt their behavior to compensate for the behavior of the latter.
arXiv Detail & Related papers (2022-05-13T13:23:42Z)
- LDSA: Learning Dynamic Subtask Assignment in Cooperative Multi-Agent Reinforcement Learning [122.47938710284784]
We propose a novel framework for learning dynamic subtask assignment (LDSA) in cooperative MARL.
To reasonably assign agents to different subtasks, we propose an ability-based subtask selection strategy.
We show that LDSA learns reasonable and effective subtask assignment for better collaboration.
arXiv Detail & Related papers (2022-05-05T10:46:16Z)
- Behaviour-conditioned policies for cooperative reinforcement learning tasks [41.74498230885008]
In various real-world tasks, an agent needs to cooperate with unknown partner agent types.
Deep reinforcement learning models can be trained to deliver the required functionality but are known to suffer from sample inefficiency and slow learning.
We suggest a method in which we synthetically produce populations of agents with different behavioural patterns, together with ground-truth data of their behaviour.
We additionally suggest an agent architecture that can efficiently use the generated data and acquire meta-learning capability.
arXiv Detail & Related papers (2021-10-04T09:16:41Z)
- On the Critical Role of Conventions in Adaptive Human-AI Collaboration [73.21967490610142]
We propose a learning framework that teases apart rule-dependent representation from convention-dependent representation.
We experimentally validate our approach on three collaborative tasks varying in complexity.
arXiv Detail & Related papers (2021-04-07T02:46:19Z)
- A Cordial Sync: Going Beyond Marginal Policies for Multi-Agent Embodied Tasks [111.34055449929487]
We introduce the novel task FurnMove in which agents work together to move a piece of furniture through a living room to a goal.
Unlike existing tasks, FurnMove requires agents to coordinate at every timestep.
Among the challenges we identify when training agents to complete FurnMove, existing decentralized action sampling procedures do not permit expressive joint action policies.
Using SYNC-policies and CORDIAL, our agents achieve a 58% completion rate on FurnMove, an impressive absolute gain of 25 percentage points over competitive decentralized baselines.
arXiv Detail & Related papers (2020-07-09T17:59:57Z)
- Too many cooks: Bayesian inference for coordinating multi-agent collaboration [55.330547895131986]
Collaboration requires agents to coordinate their behavior on the fly.
Underlying the human ability to collaborate is theory-of-mind, the ability to infer the hidden mental states that drive others to act.
We develop Bayesian Delegation, a decentralized multi-agent learning mechanism with these abilities.
arXiv Detail & Related papers (2020-03-26T07:43:13Z)
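The last entry rests on a simple primitive: maintaining a posterior over a teammate's hidden intention from their observed actions. The following is a minimal, hypothetical sketch of that belief update; the subtask names and likelihood table are invented for illustration, and this is plain Bayes' rule rather than the paper's Bayesian Delegation mechanism.

```python
# Hypothetical likelihood model P(action | intended subtask); the subtasks
# ("chop", "cook") and the probabilities are illustrative assumptions only.
likelihoods = {
    "chop": {"move_to_board": 0.7, "move_to_stove": 0.1, "idle": 0.2},
    "cook": {"move_to_board": 0.1, "move_to_stove": 0.7, "idle": 0.2},
}

def update_belief(prior, action):
    """One Bayes step: posterior(subtask) is proportional to
    P(action | subtask) * prior(subtask), renormalized."""
    unnorm = {task: likelihoods[task][action] * p for task, p in prior.items()}
    z = sum(unnorm.values())
    return {task: v / z for task, v in unnorm.items()}

belief = {"chop": 0.5, "cook": 0.5}  # uniform prior over the partner's subtask
for observed in ["move_to_stove", "move_to_stove"]:
    belief = update_belief(belief, observed)
print(belief)  # belief mass shifts toward "cook" (about 0.98 after two steps)
```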
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.