Towards Effective and Interpretable Human-Agent Collaboration in MOBA
Games: A Communication Perspective
- URL: http://arxiv.org/abs/2304.11632v1
- Date: Sun, 23 Apr 2023 12:11:04 GMT
- Title: Towards Effective and Interpretable Human-Agent Collaboration in MOBA
Games: A Communication Perspective
- Authors: Yiming Gao, Feiyu Liu, Liang Wang, Zhenjie Lian, Weixuan Wang, Siqin
Li, Xianliang Wang, Xianhan Zeng, Rundong Wang, Jiawei Wang, Qiang Fu, Wei
Yang, Lanxiao Huang, Wei Liu
- Abstract summary: This paper makes the first attempt to investigate human-agent collaboration in MOBA games.
We propose to enable humans and agents to collaborate through explicit communication by designing an efficient Meta-Command Communication-based framework.
We show that MCC agents can collaborate reasonably well with human teammates and even generalize to collaborate with different levels and numbers of human teammates.
- Score: 23.600139293202336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: MOBA games, e.g., Dota2 and Honor of Kings, have been actively used as
testbeds for recent AI research on games, and various AI systems have reached
human-level performance. However, these systems mainly focus on how to compete
with humans rather than on how to collaborate with them. To this end, this paper
makes the first attempt to investigate human-agent collaboration in MOBA games.
We propose to enable humans and agents to collaborate through explicit
communication by designing an efficient and interpretable Meta-Command
Communication-based framework, dubbed MCC, for accomplishing effective
human-agent collaboration in MOBA games. The MCC
framework consists of two pivotal modules: 1) an interpretable communication
protocol, i.e., the Meta-Command, to bridge the communication gap between
humans and agents; 2) a meta-command value estimator, i.e., the Meta-Command
Selector, to select a valuable meta-command for each agent to achieve effective
human-agent collaboration. Experimental results in Honor of Kings demonstrate
that MCC agents can collaborate reasonably well with human teammates and even
generalize to collaborate with different levels and numbers of human teammates.
Videos are available at https://sites.google.com/view/mcc-demo.
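To make the two modules described in the abstract concrete, below is a minimal Python sketch of how a meta-command and its selector could fit together. The field names (issuer, intent, target_region), the command vocabulary, and the random stand-in for the learned value estimator are illustrative assumptions, not the paper's actual protocol or model.

```python
# Minimal sketch of the two MCC modules named in the abstract: an interpretable
# Meta-Command and a Meta-Command Selector that picks a valuable command for each
# agent. All field names and the random value estimate are assumptions.
from dataclasses import dataclass
from typing import Dict, List, Tuple
import random


@dataclass
class MetaCommand:
    """A coarse, human-readable instruction that either side can issue."""
    issuer: str                      # "human" or "agent" (assumed label)
    intent: str                      # e.g. "attack", "defend", "retreat" (assumed vocabulary)
    target_region: Tuple[int, int]   # map coordinates the command points at (assumed)


class MetaCommandSelector:
    """Stand-in for the meta-command value estimator."""

    def estimate_value(self, game_state: Dict, command: MetaCommand) -> float:
        # In the paper this would be a learned value estimator; a random score
        # is used here purely to keep the sketch runnable.
        return random.random()

    def select(self, game_state: Dict, candidates: List[MetaCommand]) -> MetaCommand:
        # One plausible reading of "select a valuable meta-command for each
        # agent": execute the candidate with the highest estimated value.
        return max(candidates, key=lambda c: self.estimate_value(game_state, c))


if __name__ == "__main__":
    selector = MetaCommandSelector()
    candidates = [
        MetaCommand(issuer="human", intent="attack", target_region=(12, 7)),
        MetaCommand(issuer="agent", intent="defend", target_region=(3, 4)),
    ]
    chosen = selector.select(game_state={}, candidates=candidates)
    print(f"Executing {chosen.intent} at {chosen.target_region} (issued by {chosen.issuer})")
```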
Related papers
- Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task (arXiv, 2024-09-13)
  Theory of Mind (ToM) significantly impacts human collaboration and communication as a crucial capability to understand others.
  Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
  We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
- Toward Human-AI Alignment in Large-Scale Multi-Player Games (arXiv, 2024-02-05)
  We analyze extensive human gameplay data from Xbox's Bleeding Edge (100K+ games).
  We find that while human players exhibit variability in fight-flight and explore-exploit behavior, AI players tend towards uniformity.
  These stark differences underscore the need for interpretable evaluation, design, and integration of AI in human-aligned applications.
- Enhancing Human Experience in Human-Agent Collaboration: A Human-Centered Modeling Approach Based on Positive Human Gain (arXiv, 2024-01-28)
  We propose a "human-centered" modeling scheme for collaborative AI agents.
  Agents should learn to enhance the extent to which humans achieve their goals while maintaining the agents' original abilities.
  We evaluate the RLHG agent in the popular Multi-player Online Battle Arena (MOBA) game Honor of Kings.
- ProAgent: Building Proactive Cooperative Agents with Large Language Models (arXiv, 2023-08-22)
  ProAgent is a novel framework that harnesses large language models to create proactive agents.
  ProAgent can analyze the present state and infer the intentions of teammates from observations.
  ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
- Building Cooperative Embodied Agents Modularly with Large Language Models (arXiv, 2023-07-05)
  We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
  We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
  Our experiments on C-WAH and TDW-MAT demonstrate that CoELA, driven by GPT-4, can surpass strong planning-based methods and exhibit emergent effective communication.
- CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society (arXiv, 2023-03-31)
  This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
  We propose a novel communicative agent framework named role-playing.
  Our contributions include introducing this framework and offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
- Multi-Agent Collaboration via Reward Attribution Decomposition (arXiv, 2020-10-16)
  We propose Collaborative Q-learning (CollaQ), which achieves state-of-the-art performance in the StarCraft multi-agent challenge.
  CollaQ is evaluated on various StarCraft maps and outperforms existing state-of-the-art techniques.
- Pow-Wow: A Dataset and Study on Collaborative Communication in Pommerman (arXiv, 2020-09-13)
  In multi-agent learning, agents must coordinate with each other in order to succeed. For humans, this coordination is typically accomplished through the use of language.
  We construct Pow-Wow, a new dataset for studying situated goal-directed human communication.
  We analyze the types of communications that result in effective game strategies, annotate them accordingly, and present corpus-level statistical analysis of how trends in communication affect game outcomes.
- Improving Multi-Agent Cooperation using Theory of Mind (arXiv, 2020-07-30)
  We investigate how much an explicit representation of others' intentions improves performance in a cooperative game.
  We find that teams with ToM agents significantly outperform non-ToM agents when collaborating with all types of partners.
  These findings have implications for designing better cooperative agents.
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks (arXiv, 2020-07-24)
  We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
  The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
  Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
- Real-World Human-Robot Collaborative Reinforcement Learning (arXiv, 2020-03-02)
  We present a real-world setup of a human-robot collaborative maze game, designed to be non-trivial and only solvable through collaboration.
  We use deep reinforcement learning to control the robotic agent and achieve results within 30 minutes of real-world play.
  We present results on how co-policy learning occurs over time between the human and the robotic agent, resulting in each participant's agent serving as a representation of how they would play the game.