"Teammates, Am I Clear?": Analysing Legible Behaviours in Teams
- URL: http://arxiv.org/abs/2507.21631v1
- Date: Tue, 29 Jul 2025 09:40:18 GMT
- Title: "Teammates, Am I Clear?": Analysing Legible Behaviours in Teams
- Authors: Miguel Faria, Francisco S. Melo, Ana Paiva
- Abstract summary: We propose an extension of legible decision-making to multi-agent settings. We show that a team with a legible agent is able to outperform a team composed solely of agents with standard optimal behaviour.
- Score: 6.542036882626739
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we investigate the notion of legibility in sequential decision-making in the context of teams and teamwork. Prior work extends the notion of legibility to sequential decision-making in both deterministic and stochastic scenarios, but focuses on one agent interacting with one human, foregoing the benefits of legible decision-making in teams of agents or in team configurations with humans. In this work we propose an extension of legible decision-making to multi-agent settings that improves the performance of agents working in collaboration. We showcase this extension in multi-agent benchmark scenarios and show that a team with a legible agent is able to outperform a team composed solely of agents with standard optimal behaviour.
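Legibility is commonly formalised via an observer who infers the actor's goal from its actions. As a rough illustration of that idea (not the authors' exact algorithm), a legible agent can pick the action that maximises the posterior probability of its true goal under an assumed Boltzmann-rational observer; the per-goal Q-tables and softmax observer model below are illustrative assumptions:

```python
import numpy as np

def goal_posterior(q_tables, state, action, prior):
    """Bayesian observer: P(goal | state, action), assuming the actor is
    Boltzmann-rational with respect to each candidate goal's Q-function."""
    likelihoods = np.array([
        np.exp(q[state, action]) / np.exp(q[state]).sum()
        for q in q_tables
    ])
    post = likelihoods * prior
    return post / post.sum()

def legible_action(q_tables, true_goal, state, prior):
    """Choose the action under which the observer's posterior on the
    true goal is highest, i.e. the most goal-revealing action."""
    n_actions = q_tables[0].shape[1]
    scores = [
        goal_posterior(q_tables, state, a, prior)[true_goal]
        for a in range(n_actions)
    ]
    return int(np.argmax(scores))

# Toy example: one state, two actions, two candidate goals.
q_g0 = np.array([[1.0, 0.2]])   # goal 0 clearly prefers action 0
q_g1 = np.array([[0.9, 1.0]])   # goal 1 mildly prefers action 1
prior = np.array([0.5, 0.5])
a = legible_action([q_g0, q_g1], true_goal=0, state=0, prior=prior)
# Action 0 is chosen: it is the action that best disambiguates goal 0.
```

A teammate running the same observer model can then identify the legible agent's goal earlier and coordinate accordingly, which is the mechanism the abstract's team-performance claim rests on.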
Related papers
- Don't lie to your friends: Learning what you know from collaborative self-play [90.35507959579331]
We propose a radically new approach to teaching AI agents what they know. We construct multi-agent collaborations in which the group is rewarded for collectively arriving at correct answers. The desired meta-knowledge emerges from the incentives built into the structure of the interaction.
arXiv Detail & Related papers (2025-03-18T17:53:20Z) - Language Agents as Digital Representatives in Collective Decision-Making [22.656601943922066]
"representation" is the activity of making an individual's preferences present in the process via participation by a proxy agent.<n>We investigate the possibility of training textitlanguage agents to behave in the capacity of representatives of human agents.
arXiv Detail & Related papers (2025-02-13T14:35:40Z) - Optimizing Risk-averse Human-AI Hybrid Teams [1.433758865948252]
We propose a manager which learns, through a standard Reinforcement Learning scheme, how to best delegate.
We demonstrate the optimality of our manager's performance in several grid environments.
Our results show our manager successfully learns desirable delegations, resulting in team paths that are near-optimal or exactly optimal.
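A delegation manager of the kind this summary describes can be sketched with tabular Q-learning: the manager's action is which teammate (human or AI) handles the current state. The toy dynamics, state space, and reward shaping below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Illustrative sketch: a manager learns, via standard Q-learning,
# whether to delegate each state to the human (0) or the AI (1).
rng = np.random.default_rng(0)
n_states, n_options = 4, 2
Q = np.zeros((n_states, n_options))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, choice):
    # Toy dynamics: the AI (choice 1) performs better in states 2-3,
    # the human (choice 0) in states 0-1; next state is random.
    reward = 1.0 if (choice == 1) == (state >= 2) else -1.0
    return reward, int(rng.integers(n_states))

state = 0
for _ in range(5000):
    if rng.random() < eps:                      # epsilon-greedy exploration
        choice = int(rng.integers(n_options))
    else:
        choice = int(np.argmax(Q[state]))
    reward, nxt = step(state, choice)
    Q[state, choice] += alpha * (reward + gamma * Q[nxt].max() - Q[state, choice])
    state = nxt
```

After training, the greedy policy delegates states 0-1 to the human and states 2-3 to the AI, matching the "learn whom to delegate to from team performance" framing of the summary.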
arXiv Detail & Related papers (2024-03-13T09:49:26Z) - Adapting to Teammates in a Cooperative Language Game [1.082078800505043]
This paper presents the first adaptive agent for playing Codenames.
We adopt an ensemble approach with the goal of determining, during the course of interacting with a specific teammate, which of our internal expert agents is the best match.
Experimental analysis shows that this ensemble approach adapts to individual teammates and often performs nearly as well as the best internal expert with a teammate.
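The ensemble idea in this summary can be sketched generically: keep a pool of expert policies, score each by how well it predicts the teammate's observed actions, and act with the current best match. The log-likelihood scoring scheme below is an illustrative assumption, not the paper's exact method:

```python
import numpy as np

class ExpertEnsemble:
    """Track which internal expert best explains a teammate's behaviour
    by accumulating the log-probability each expert assigned to the
    actions the teammate actually took."""

    def __init__(self, experts):
        self.experts = experts          # each maps state -> action distribution
        self.scores = np.zeros(len(experts))

    def observe(self, state, teammate_action):
        for i, expert in enumerate(self.experts):
            p = expert(state)[teammate_action]
            self.scores[i] += np.log(max(p, 1e-12))  # guard against log(0)

    def best_expert(self):
        return int(np.argmax(self.scores))

# Toy example: two experts over two actions; the teammate always plays action 1.
cautious = lambda s: np.array([0.8, 0.2])
bold = lambda s: np.array([0.3, 0.7])
ens = ExpertEnsemble([cautious, bold])
for _ in range(5):
    ens.observe(state=0, teammate_action=1)
# The "bold" expert (index 1) accumulates the higher score.
```

Acting with `best_expert()` at each turn gives the adaptive behaviour the summary describes: early observations quickly steer the ensemble toward the expert that matches the specific teammate.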
arXiv Detail & Related papers (2024-02-26T23:15:07Z) - AgentCF: Collaborative Learning with Autonomous Language Agents for
Recommender Systems [112.76941157194544]
We propose AgentCF for simulating user-item interactions in recommender systems through agent-based collaborative filtering.
We creatively consider not only users but also items as agents, and develop a collaborative learning approach that optimizes both kinds of agents together.
Overall, the optimized agents exhibit diverse interaction behaviors within our framework, including user-item, user-user, item-item, and collective interactions.
arXiv Detail & Related papers (2023-10-13T16:37:14Z) - A Dynamic LLM-Powered Agent Network for Task-Oriented Agent Collaboration [55.35849138235116]
We propose automatically selecting a team of agents from candidates to collaborate in a dynamic communication structure toward different tasks and domains.
Specifically, we build a framework named Dynamic LLM-Powered Agent Network (DyLAN) for LLM-powered agent collaboration.
We demonstrate that DyLAN outperforms strong baselines in code generation, decision-making, general reasoning, and arithmetic reasoning tasks with moderate computational cost.
arXiv Detail & Related papers (2023-10-03T16:05:48Z) - Optimizing delegation between human and AI collaborative agents [1.6114012813668932]
We train a delegating manager agent to make delegation decisions with respect to potential performance deficiencies.
Our framework learns through observations of team performance without restricting agents to matching dynamics.
Our results show our manager learns to perform delegation decisions with teams of agents operating under differing representations of the environment.
arXiv Detail & Related papers (2023-09-26T07:23:26Z) - ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - Informational Diversity and Affinity Bias in Team Growth Dynamics [6.729250803621849]
We show that the benefits of informational diversity are in tension with affinity bias.
Our results formalize a fundamental limitation of utility-based motivations to drive informational diversity.
arXiv Detail & Related papers (2023-01-28T05:02:40Z) - Multi-agent Deep Covering Skill Discovery [50.812414209206054]
We propose Multi-agent Deep Covering Option Discovery, which constructs the multi-agent options through minimizing the expected cover time of the multiple agents' joint state space.
Also, we propose a novel framework to adopt the multi-agent options in the MARL process.
We show that the proposed algorithm can effectively capture the agent interactions with the attention mechanism, successfully identify multi-agent options, and significantly outperforms prior works using single-agent options or no options.
arXiv Detail & Related papers (2022-10-07T00:40:59Z) - Towards Socially Intelligent Agents with Mental State Transition and
Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z) - My Team Will Go On: Differentiating High and Low Viability Teams through
Team Interaction [17.729317295204368]
We train a viability classification model over a dataset of 669 10-minute text conversations of online teams.
We find that a lasso regression model achieves an AUC ROC of .74--.92 under different thresholds for classifying viability scores.
arXiv Detail & Related papers (2020-10-14T21:33:36Z) - Moody Learners -- Explaining Competitive Behaviour of Reinforcement
Learning Agents [65.2200847818153]
In a competitive scenario, the agent does not only have a dynamic environment but also is directly affected by the opponents' actions.
Observing the Q-values of the agent is a common way of explaining its behaviour; however, Q-values alone do not show the temporal relation between the selected actions.
arXiv Detail & Related papers (2020-07-30T11:30:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.