AI solutions for drafting in Magic: the Gathering
- URL: http://arxiv.org/abs/2009.00655v3
- Date: Sun, 4 Apr 2021 19:13:53 GMT
- Title: AI solutions for drafting in Magic: the Gathering
- Authors: Henry N. Ward, Daniel J. Brooks, Dan Troha, Bobby Mills, Arseny S.
Khakhalin
- Abstract summary: We present a dataset of over 100,000 simulated, anonymized human drafts collected from Draftsim.com.
We propose four diverse strategies for drafting agents, including a primitive drafting agent, an expert-tuned complex agent, a Naive Bayes agent, and a deep neural network agent.
This work helps to identify next steps in the creation of humanlike drafting agents, and can serve as a benchmark for the next generation of drafting bots.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Drafting in Magic: the Gathering is a sub-game within a larger trading card
game, where several players progressively build decks by picking cards from a
common pool. Drafting poses an interesting problem for game and AI research due
to its large search space, mechanical complexity, multiplayer nature, and
hidden information. Despite this, drafting remains understudied, in part due to
a lack of high-quality, public datasets. To rectify this problem, we present a
dataset of over 100,000 simulated, anonymized human drafts collected from
Draftsim.com. We also propose four diverse strategies for drafting agents,
including a primitive heuristic agent, an expert-tuned complex heuristic agent,
a Naive Bayes agent, and a deep neural network agent. We benchmark their
ability to emulate human drafting, and show that the deep neural network agent
outperforms other agents, while the Naive Bayes and expert-tuned agents
outperform simple heuristics. We analyze the accuracy of AI agents across the
timeline of a draft, and describe unique strengths and weaknesses for each
approach. This work helps to identify next steps in the creation of humanlike
drafting agents, and can serve as a benchmark for the next generation of
drafting bots.
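To illustrate the kind of agent the paper benchmarks, a Naive Bayes-style drafting agent can score each card in the current pack by how often it co-occurred with the cards already picked in past drafts. The sketch below is a minimal, hedged illustration, not the paper's implementation: the card names, co-occurrence counts, and Laplace smoothing scheme are invented for the example.

```python
import math
from collections import defaultdict

def naive_bayes_pick(pack, picked, pair_counts, card_counts, total_drafts):
    """Return the card in `pack` maximizing a Naive Bayes score:
    log P(card) + sum over already-picked cards of log P(card | picked).

    pair_counts[(a, b)] : number of drafts in which a and b were picked together
    card_counts[c]      : number of drafts in which c was picked
    Counts are Laplace-smoothed so unseen pairs get nonzero probability.
    """
    def score(card):
        s = math.log((card_counts[card] + 1) / (total_drafts + 2))
        for p in picked:
            s += math.log((pair_counts[(card, p)] + 1) / (card_counts[p] + 2))
        return s
    return max(pack, key=score)
```

With toy counts in which "Bolt" co-occurs with an already-picked "Mountain" far more often than "Bear" does, the agent prefers "Bolt"; the real agents in the paper are trained on the 100,000+ drafts in the released dataset.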
Related papers
- AgentGym: Evolving Large Language Model-based Agents across Diverse Environments [116.97648507802926]
Large language models (LLMs) are considered a promising foundation to build such agents.
We take the first step towards building generally-capable LLM-based agents with self-evolution ability.
We propose AgentGym, a new framework featuring a variety of environments and tasks for broad, real-time, uni-format, and concurrent agent exploration.
arXiv Detail & Related papers (2024-06-06T15:15:41Z)
- Toward Human-AI Alignment in Large-Scale Multi-Player Games [24.784173202415687]
We analyze extensive human gameplay data from Xbox's Bleeding Edge (100K+ games).
We find that while human players exhibit variability in fight-flight and explore-exploit behavior, AI players tend towards uniformity.
These stark differences underscore the need for interpretable evaluation, design, and integration of AI in human-aligned applications.
arXiv Detail & Related papers (2024-02-05T22:55:33Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- A Technique to Create Weaker Abstract Board Game Agents via Reinforcement Learning [0.0]
Board games require at least one other player. We created AI agents to play against us when a human opponent is missing. In this work, we describe a technique for creating deliberately weaker AI agents that play board games.
arXiv Detail & Related papers (2022-09-01T20:13:20Z)
- Collusion Detection in Team-Based Multiplayer Games [57.153233321515984]
We propose a system that detects colluding behaviors in team-based multiplayer games.
The proposed method analyzes the players' social relationships paired with their in-game behavioral patterns.
We then automate the detection using Isolation Forest, an unsupervised learning technique specialized in highlighting outliers.
arXiv Detail & Related papers (2022-03-10T02:37:39Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
- Discovering Multi-Agent Auto-Curricula in Two-Player Zero-Sum Games [31.97631243571394]
We introduce a framework, LMAC, that automates the discovery of the update rule without explicit human design.
Surprisingly, even without human design, the discovered MARL algorithms achieve performance competitive with, or better than, human-designed update rules.
We show that LMAC is able to generalise from small games to large games, for example training on Kuhn Poker and outperforming PSRO.
arXiv Detail & Related papers (2021-06-04T22:30:25Z)
- Towards Socially Intelligent Agents with Mental State Transition and Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z)
- Using Graph Convolutional Networks and TD($\lambda$) to play the game of Risk [0.0]
Risk is a 6 player game with significant randomness and a large game-tree complexity.
Previous AIs rely on high-level handcrafted features to determine agent decision making.
I create D.A.D, A Risk agent using temporal difference reinforcement learning to train a Deep Neural Network.
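The temporal difference learning mentioned above can be sketched as a tabular TD($\lambda$) update with accumulating eligibility traces. This is a generic illustration under toy transitions, not the paper's setup: D.A.D trains a deep neural network rather than a lookup table, and the states and rewards below are placeholders.

```python
from collections import defaultdict

def td_lambda_episode(episode, V=None, alpha=0.1, gamma=0.9, lam=0.8):
    """Run one episode of tabular TD(lambda) with accumulating traces.

    episode : list of (state, reward, next_state) transitions
    V       : value table, defaulting all states to 0.0
    """
    if V is None:
        V = defaultdict(float)
    e = defaultdict(float)  # eligibility traces
    for s, r, s_next in episode:
        delta = r + gamma * V[s_next] - V[s]  # TD error for this step
        e[s] += 1.0                           # bump trace for the visited state
        for x in list(e):
            V[x] += alpha * delta * e[x]      # credit all recently visited states
            e[x] *= gamma * lam               # decay traces toward zero
    return V
```

The trace decay `gamma * lam` is what lets a reward late in a game (such as winning a Risk battle) propagate credit back to earlier decisions in the same episode.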
arXiv Detail & Related papers (2020-09-10T18:47:08Z)
- Navigating the Landscape of Multiplayer Games [20.483315340460127]
We show how network measures applied to response graphs of large-scale games enable the creation of a landscape of games.
We illustrate our findings in domains ranging from canonical games to complex empirical games capturing the performance of trained agents pitted against one another.
arXiv Detail & Related papers (2020-05-04T16:58:17Z)
- Signaling in Bayesian Network Congestion Games: the Subtle Power of Symmetry [66.82463322411614]
The paper focuses on the problem of optimal ex ante persuasive signaling schemes, showing that symmetry is a crucial property for its solution.
We show that an optimal ex ante persuasive scheme can be computed in time when players are symmetric and have affine cost functions.
arXiv Detail & Related papers (2020-02-12T19:38:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.