Mxplainer: Explain and Learn Insights by Imitating Mahjong Agents
- URL: http://arxiv.org/abs/2506.14246v1
- Date: Tue, 17 Jun 2025 07:07:13 GMT
- Title: Mxplainer: Explain and Learn Insights by Imitating Mahjong Agents
- Authors: Lingfeng Li, Yunlong Lu, Yongyi Wang, Qifan Zheng, Wenxin Li
- Abstract summary: This paper introduces Mxplainer, a parameterized search algorithm that can be converted into an equivalent neural network to learn the parameters of black-box agents. Experiments conducted on AI and human player data demonstrate that the learned parameters provide human-understandable insights into these agents' characteristics and play styles.
- Score: 0.8088999193162028
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: People need to internalize the skills of AI agents to improve their own capabilities. Our paper focuses on Mahjong, a multiplayer game involving imperfect information and requiring effective long-term decision-making amidst randomness and hidden information. Through the efforts of AI researchers, several impressive Mahjong AI agents have already achieved performance levels comparable to those of professional human players; however, these agents are often treated as black boxes from which few insights can be gleaned. This paper introduces Mxplainer, a parameterized search algorithm that can be converted into an equivalent neural network to learn the parameters of black-box agents. Experiments conducted on AI and human player data demonstrate that the learned parameters provide human-understandable insights into these agents' characteristics and play styles. In addition to analyzing the learned parameters, we also showcase how our search-based framework can locally explain the decision-making processes of black-box agents for most Mahjong game states.
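As a rough illustration of the idea described in the abstract (and not the paper's actual search algorithm or network conversion), the sketch below shows how interpretable parameters of a hand-crafted, differentiable evaluation could be fitted to a black-box agent's observed discards by imitation. The feature set, dimensions, class names, and training setup are illustrative assumptions.

```python
# A minimal, hypothetical sketch: express a parameterized evaluation as a
# differentiable module, then fit its interpretable weights by imitating a
# black-box Mahjong agent's discard choices. All names and sizes here are
# assumptions for illustration, not the architecture from the paper.
import torch
import torch.nn as nn

N_TILES = 34     # distinct Mahjong tile types (candidate discards)
N_FEATURES = 4   # e.g. shanten delta, safety, yaku potential, dora count (assumed)

class ParamSearchImitator(nn.Module):
    """Scores each candidate discard as a weighted sum of hand-crafted features."""
    def __init__(self):
        super().__init__()
        # One learnable weight per interpretable feature; these play the role
        # of the "human-understandable" parameters to be recovered.
        self.feature_weights = nn.Parameter(torch.zeros(N_FEATURES))

    def forward(self, features):            # features: (batch, N_TILES, N_FEATURES)
        # Linear scoring per candidate discard; softmax is applied by the loss.
        return (features * self.feature_weights).sum(dim=-1)   # (batch, N_TILES)

def fit_to_black_box(features, agent_actions, epochs=200, lr=0.05):
    """Fit the interpretable weights to the black-box agent's logged discards."""
    model = ParamSearchImitator()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), agent_actions)
        loss.backward()
        opt.step()
    return model.feature_weights.detach()

if __name__ == "__main__":
    # Toy usage with random tensors standing in for logged (state, action) pairs.
    feats = torch.randn(512, N_TILES, N_FEATURES)
    actions = torch.randint(0, N_TILES, (512,))   # black-box agent's discards
    weights = fit_to_black_box(feats, actions)
    print("learned per-feature weights:", weights.tolist())
```

In this toy setup, the recovered `feature_weights` stand in for the human-understandable parameters discussed in the abstract: a comparatively large safety weight, for instance, would suggest a defensively oriented play style.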
Related papers
- FAIRGAME: a Framework for AI Agents Bias Recognition using Game Theory [51.96049148869987]
We present FAIRGAME, a Framework for AI Agents Bias Recognition using Game Theory. We describe its implementation and usage, and we employ it to uncover biased outcomes in popular games among AI agents. Overall, FAIRGAME allows users to reliably and easily simulate their desired games and scenarios.
arXiv Detail & Related papers (2025-04-19T15:29:04Z)
- AVA: Attentive VLM Agent for Mastering StarCraft II [56.07921367623274]
We introduce Attentive VLM Agent (AVA), a multimodal StarCraft II agent that aligns artificial agent perception with the human gameplay experience. Our agent addresses this limitation by incorporating RGB visual inputs and natural language observations that more closely simulate human cognitive processes during gameplay.
arXiv Detail & Related papers (2025-03-07T12:54:25Z)
- Memento No More: Coaching AI Agents to Master Multiple Tasks via Hints Internalization [56.674356045200696]
We propose a novel method to train AI agents to incorporate knowledge and skills for multiple tasks without the need for cumbersome note systems or prior high-quality demonstration data. Our approach employs an iterative process where the agent collects new experiences, receives corrective feedback from humans in the form of hints, and integrates this feedback into its weights. We demonstrate the efficacy of our approach by implementing it in a Llama-3-based agent that, after only a few rounds of feedback, outperforms advanced models GPT-4o and DeepSeek-V3 in tasksets.
arXiv Detail & Related papers (2025-02-03T17:45:46Z)
- Behavioural Cloning in VizDoom [1.4999444543328293]
This paper describes methods for training autonomous agents to play the game "Doom 2" through Imitation Learning (IL).
We also explore how Reinforcement Learning (RL) compares to IL for humanness by comparing camera movement and trajectory data.
arXiv Detail & Related papers (2024-01-08T16:15:43Z)
- Deciphering Digital Detectives: Understanding LLM Behaviors and Capabilities in Multi-Agent Mystery Games [26.07074182316433]
We introduce the first dataset specifically for Jubensha, including character scripts and game rules.
Our work also presents a unique multi-agent interaction framework using LLMs, allowing AI agents to autonomously engage in this game.
To evaluate the gaming performance of these AI agents, we developed novel methods measuring their mastery of case information and reasoning skills.
arXiv Detail & Related papers (2023-12-01T17:33:57Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- Explainability via Responsibility [0.9645196221785693]
We present an approach to explainable artificial intelligence in which certain training instances are offered to human users.
We evaluate this approach by approximating its ability to provide human users with explanations of an AI agent's actions.
arXiv Detail & Related papers (2020-10-04T20:41:03Z)
- AI solutions for drafting in Magic: the Gathering [0.0]
We present a dataset of over 100,000 simulated, anonymized human drafts collected from Draftsim.com.
We propose four diverse strategies for drafting agents, including a primitive drafting agent, an expert-tuned complex agent, a Naive Bayes agent, and a deep neural network agent.
This work helps to identify next steps in the creation of humanlike drafting agents, and can serve as a benchmark for the next generation of drafting bots.
arXiv Detail & Related papers (2020-09-01T18:44:10Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Navigating the Landscape of Multiplayer Games [20.483315340460127]
We show how network measures applied to response graphs of large-scale games enable the creation of a landscape of games.
We illustrate our findings in domains ranging from canonical games to complex empirical games capturing the performance of trained agents pitted against one another.
arXiv Detail & Related papers (2020-05-04T16:58:17Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)