Adapting to Teammates in a Cooperative Language Game
- URL: http://arxiv.org/abs/2403.00823v1
- Date: Mon, 26 Feb 2024 23:15:07 GMT
- Title: Adapting to Teammates in a Cooperative Language Game
- Authors: Christopher Archibald and Spencer Brosnahan
- Abstract summary: This paper presents the first adaptive agent for playing Codenames.
We adopt an ensemble approach with the goal of determining, during the course of interacting with a specific teammate, which of our internal expert agents is the best match.
Experimental analysis shows that this ensemble approach adapts to individual teammates and often performs nearly as well as the best internal expert with a teammate.
- Score: 1.082078800505043
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The game of Codenames has recently emerged as a domain of interest for
intelligent agent design. The game is unique due to the way that language and
coordination between teammates play important roles. Previous approaches to
designing agents for this game have utilized a single internal language model
to determine action choices. This often leads to good performance with some
teammates and inferior performance with other teammates, as the agent cannot
adapt to any specific teammate. In this paper we present the first adaptive
agent for playing Codenames. We adopt an ensemble approach with the goal of
determining, during the course of interacting with a specific teammate, which
of our internal expert agents, each potentially with its own language model, is
the best match. One difficulty faced in this approach is the lack of a single
numerical metric that accurately captures the performance of a Codenames team.
Prior Codenames research has utilized a handful of different metrics to
evaluate agent teams. We propose a novel single metric to evaluate the
performance of a Codenames team, whether playing a single team (solitaire)
game, or a competitive game against another team. We then present and analyze
an ensemble agent which selects an internal expert on each turn in order to
maximize this proposed metric. Experimental analysis shows that this ensemble
approach adapts to individual teammates and often performs nearly as well as
the best internal expert with a teammate. Crucially, this success does not
depend on any previous knowledge about the teammates, the ensemble agents, or
their compatibility. This research represents an important step to making
language-based agents for cooperative language settings like Codenames more
adaptable to individual teammates.
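The ensemble idea described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the expert names, the per-turn scoring rule, and the epsilon-greedy exploration are all assumptions, standing in for the paper's internal experts and proposed team metric.

```python
import random

class EnsembleAgent:
    """Hypothetical sketch: keep a running performance estimate per
    internal expert and pick the best-scoring one each turn."""

    def __init__(self, experts, exploration=0.1):
        self.experts = experts                   # e.g. agents backed by different language models
        self.scores = {e: 0.0 for e in experts}  # running per-turn metric estimate
        self.counts = {e: 0 for e in experts}
        self.exploration = exploration

    def select_expert(self):
        # Occasionally explore so every expert keeps being re-evaluated
        # as evidence about the current teammate accumulates.
        if random.random() < self.exploration:
            return random.choice(self.experts)
        return max(self.experts, key=lambda e: self.scores[e])

    def update(self, expert, turn_score):
        # Incremental mean of the observed per-turn metric for this expert.
        self.counts[expert] += 1
        n = self.counts[expert]
        self.scores[expert] += (turn_score - self.scores[expert]) / n

agent = EnsembleAgent(["glove_expert", "word2vec_expert", "bert_expert"])
chosen = agent.select_expert()
agent.update(chosen, turn_score=0.8)
```

The key design point mirrored from the abstract is that nothing here requires prior knowledge of the teammate: the estimates start uniform and adapt purely from observed play.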
Related papers
- ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - Collusion Detection in Team-Based Multiplayer Games [57.153233321515984]
We propose a system that detects colluding behaviors in team-based multiplayer games.
The proposed method analyzes the players' social relationships paired with their in-game behavioral patterns.
We then automate the detection using Isolation Forest, an unsupervised learning technique specialized in highlighting outliers.
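Isolation Forest is a standard scikit-learn technique, so the detection step above can be illustrated directly. The feature vectors here (shared-games count and outcome correlation per player) are hypothetical stand-ins for the paper's social and behavioral features.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# 50 "normal" players: modest shared-game counts, weakly correlated outcomes.
normal = rng.normal(loc=[5.0, 0.1], scale=[1.0, 0.05], size=(50, 2))
# One suspicious player: many shared games with highly correlated outcomes.
suspicious = np.array([[30.0, 0.95]])
X = np.vstack([normal, suspicious])

clf = IsolationForest(random_state=0).fit(X)
scores = clf.decision_function(X)   # lower score = more anomalous
most_anomalous = int(np.argmin(scores))
```

Because Isolation Forest is unsupervised, no labeled examples of collusion are needed; it isolates points that are easy to separate from the bulk of the data.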
arXiv Detail & Related papers (2022-03-10T02:37:39Z) - Evaluation of Human-AI Teams for Learned and Rule-Based Agents in Hanabi [0.0]
We evaluate teams of humans and AI agents in the cooperative card game Hanabi with both rule-based and learning-based agents.
We find that humans have a clear preference toward a rule-based AI teammate over a state-of-the-art learning-based AI teammate.
arXiv Detail & Related papers (2021-07-15T22:19:15Z) - Coach-Player Multi-Agent Reinforcement Learning for Dynamic Team
Composition [88.26752130107259]
In real-world multi-agent systems, agents with different capabilities may join or leave without altering the team's overarching goals.
We propose COPA, a coach-player framework to tackle this problem.
We 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players.
arXiv Detail & Related papers (2021-05-18T17:27:37Z) - Multi-Agent Collaboration via Reward Attribution Decomposition [75.36911959491228]
We propose Collaborative Q-learning (CollaQ) that achieves state-of-the-art performance in the StarCraft multi-agent challenge.
CollaQ is evaluated on various StarCraft maps and shown to outperform existing state-of-the-art techniques.
arXiv Detail & Related papers (2020-10-16T17:42:11Z) - My Team Will Go On: Differentiating High and Low Viability Teams through
Team Interaction [17.729317295204368]
We train a viability classification model over a dataset of 669 10-minute text conversations of online teams.
We find that a lasso regression model achieves an AUC ROC of 0.74-0.92 under different thresholds for classifying viability scores.
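The evaluation style above can be sketched with synthetic data. This is not the paper's dataset or feature set; an L1-regularized logistic classifier is used here as a stand-in for the lasso model, scored with AUC ROC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 10))   # hypothetical conversation-derived features
# Binary viability label driven by two informative features plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# L1 penalty shrinks uninformative feature weights toward zero.
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
```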
arXiv Detail & Related papers (2020-10-14T21:33:36Z) - Faster Algorithms for Optimal Ex-Ante Coordinated Collusive Strategies
in Extensive-Form Zero-Sum Games [123.76716667704625]
We focus on the problem of finding an optimal strategy for a team of two players that faces an opponent in an imperfect-information zero-sum extensive-form game.
In that setting, it is known that the best the team can do is sample a profile of potentially randomized strategies (one per player) from a joint (a.k.a. correlated) probability distribution at the beginning of the game.
We provide an algorithm that computes such an optimal distribution by only using profiles where only one of the team members gets to randomize in each profile.
arXiv Detail & Related papers (2020-09-21T17:51:57Z) - Finding Core Members of Cooperative Games using Agent-Based Modeling [0.0]
Agent-based modeling (ABM) is a powerful paradigm to gain insight into social phenomena.
In this paper, an algorithm is developed that can be embedded into an ABM to allow the agents to find coalitions.
arXiv Detail & Related papers (2020-08-30T17:38:43Z) - On Emergent Communication in Competitive Multi-Agent Teams [116.95067289206919]
We investigate whether competition for performance from an external, similar agent team could act as a social influence.
Our results show that an external competitive influence leads to improved accuracy and generalization, as well as faster emergence of communicative languages.
arXiv Detail & Related papers (2020-03-04T01:14:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.