Teamwork under extreme uncertainty: AI for Pokemon ranks 33rd in the
world
- URL: http://arxiv.org/abs/2212.13338v1
- Date: Tue, 27 Dec 2022 01:52:52 GMT
- Title: Teamwork under extreme uncertainty: AI for Pokemon ranks 33rd in the
world
- Authors: Nicholas R. Sarantinos
- Abstract summary: This paper describes the mechanics of the game and performs a game analysis.
We propose unique AI algorithms based on our understanding that the two biggest challenges in the game are keeping a balanced team and dealing with three sources of uncertainty.
Our AI agent performed significantly better than all previous attempts and peaked at 33rd place in the world, in one of the most popular battle formats, while running on only 4 single-socket servers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The highest-grossing media franchise of all time, with over $90
billion in total revenue, is Pokemon. The video games belong to the class of
Japanese Role Playing Games (J-RPG). Developing a powerful AI agent for these
games is very hard because they present big challenges to MinMax, Monte Carlo
Tree Search and statistical Machine Learning, as they are vastly different
from the games well explored in the AI literature. An AI agent for one of
these games means significant progress in AI agents for the entire class.
Further, the key principles of such work can hopefully inspire approaches to
several domains that require excellent teamwork under conditions of extreme
uncertainty, including managing a team of doctors, robots or employees in an
ever-changing environment, like a pandemic-stricken region or a war zone. In
this paper we first explain the mechanics of the game and perform a game
analysis. We continue by proposing unique AI algorithms based on our
understanding that the two biggest challenges in the game are keeping a
balanced team and dealing with three sources of uncertainty. Later on, we
describe why evaluating the performance of such agents is challenging and we
present the results of our approach. Our AI agent performed significantly
better than all previous attempts and peaked at 33rd place in the world, in
one of the most popular battle formats, while running on only four
single-socket servers.
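To make the uncertainty challenge concrete, the sketch below shows one generic way an agent can pick a move when part of the opponent's state is hidden: sample plausible completions of the hidden information, score each candidate move with a shallow minimax lookahead against each sample, and average. This is a minimal illustration under assumed names and a toy damage model (BattleState, sample_hidden_opponent, apply_moves are all hypothetical), not the algorithm proposed in the paper.

```python
# Hypothetical sketch: move selection under hidden information by sampling
# plausible opponent states and averaging a one-ply minimax evaluation.
# Everything here (classes, damage numbers) is illustrative, not from the paper.
import random
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class BattleState:
    """Toy stand-in for a turn-based battle position."""
    our_hp: int
    foe_hp: int

def legal_moves(state: BattleState) -> List[str]:
    # Toy move set; a real battle would enumerate the active Pokemon's options.
    return ["attack", "defend"]

def apply_moves(state: BattleState, ours: str, theirs: str) -> BattleState:
    # Toy simultaneous-turn resolution: attacking deals 30, defending halves damage.
    dmg_to_us = 30 if theirs == "attack" else 0
    dmg_to_foe = 30 if ours == "attack" else 0
    if ours == "defend":
        dmg_to_us //= 2
    if theirs == "defend":
        dmg_to_foe //= 2
    return BattleState(state.our_hp - dmg_to_us, state.foe_hp - dmg_to_foe)

def evaluate(state: BattleState) -> float:
    """Simple heuristic: our remaining HP minus the opponent's."""
    return state.our_hp - state.foe_hp

def choose_move(observed: BattleState,
                sample_hidden_opponent: Callable[[BattleState], BattleState],
                n_samples: int = 32) -> str:
    """Average a one-ply minimax value over sampled completions of hidden info."""
    totals = {move: 0.0 for move in legal_moves(observed)}
    for _ in range(n_samples):
        full_state = sample_hidden_opponent(observed)  # one guess at hidden details
        for move in totals:
            # Assume the opponent replies with their best counter (minimax step).
            worst = min(
                evaluate(apply_moves(full_state, move, reply))
                for reply in legal_moves(full_state)
            )
            totals[move] += worst
    return max(totals, key=totals.get)

if __name__ == "__main__":
    # Pretend the foe's exact HP is uncertain within +/-20.
    noisy = lambda s: BattleState(s.our_hp, s.foe_hp + random.randint(-20, 20))
    print(choose_move(BattleState(our_hp=100, foe_hp=80), noisy))
```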
Related papers
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than using a scripted AI alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z) - Toward Human-AI Alignment in Large-Scale Multi-Player Games [24.784173202415687]
We analyze extensive human gameplay data from Xbox's Bleeding Edge (100K+ games).
We find that while human players exhibit variability in fight-flight and explore-exploit behavior, AI players tend towards uniformity.
These stark differences underscore the need for interpretable evaluation, design, and integration of AI in human-aligned applications.
arXiv Detail & Related papers (2024-02-05T22:55:33Z) - DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z) - Diversifying AI: Towards Creative Chess with AlphaZero [22.169342583475938]
We study whether a team of diverse AI systems can outperform a single AI in challenging tasks by generating more ideas as a group and then selecting the best ones.
Our experiments suggest that the diversified AlphaZero team, AZ_db, plays chess in diverse ways, solves more puzzles as a group and outperforms a more homogeneous team.
Our findings suggest that diversity bonuses emerge in teams of AI agents, just as they do in teams of humans.
arXiv Detail & Related papers (2023-08-17T20:27:33Z) - Are AlphaZero-like Agents Robust to Adversarial Perturbations? [73.13944217915089]
AlphaZero (AZ) has demonstrated that neural-network-based Go AIs can surpass human performance by a large margin.
We ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions.
We develop the first adversarial attack on Go AIs that can efficiently search for adversarial states by strategically reducing the search space.
arXiv Detail & Related papers (2022-11-07T18:43:25Z) - DanZero: Mastering GuanDan Game with Reinforcement Learning [121.93690719186412]
Card game AI has always been a hot topic in artificial intelligence research.
In this paper, we are devoted to developing an AI program for a more complex card game, GuanDan.
We propose the first AI program DanZero for GuanDan using reinforcement learning technique.
arXiv Detail & Related papers (2022-10-31T06:29:08Z) - AI in Games: Techniques, Challenges and Opportunities [40.86375378643978]
Various game AI systems (AIs), such as Libratus, OpenAI Five and AlphaStar, have been developed and have beaten professional human players.
In this paper, we survey recent successful game AIs, covering board game AIs, card game AIs, first-person shooting game AIs and real time strategy game AIs.
arXiv Detail & Related papers (2021-11-15T09:35:53Z) - Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
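As a toy, back-of-the-envelope illustration of how this can happen (numbers made up, not from the paper): if the human defers to the AI only on cases where the AI signals confidence, a slightly less accurate but better-calibrated AI can yield higher team accuracy than a more accurate, poorly calibrated one.

```python
# Hypothetical numbers illustrating "most accurate AI != best teammate".
def team_accuracy(acc_when_accepted: float, accept_rate: float, human_acc: float) -> float:
    """Expected accuracy when the human accepts the AI's recommendation on a
    fraction of cases and solves the rest alone."""
    return accept_rate * acc_when_accepted + (1 - accept_rate) * human_acc

human = 0.80
# AI "A": 90% accurate overall but poorly calibrated, so the human accepts it
# on 95% of cases, where it is right only 90% of the time.
team_a = team_accuracy(acc_when_accepted=0.90, accept_rate=0.95, human_acc=human)
# AI "B": 85% accurate overall but well calibrated, so the human accepts it
# only on the 70% of cases where it is right 98% of the time.
team_b = team_accuracy(acc_when_accepted=0.98, accept_rate=0.70, human_acc=human)
print(f"team with more accurate AI A: {team_a:.3f}")  # 0.895
print(f"team with less accurate AI B: {team_b:.3f}")  # 0.926
```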
arXiv Detail & Related papers (2020-04-27T19:06:28Z) - Suphx: Mastering Mahjong with Deep Reinforcement Learning [114.68233321904623]
We design an AI for Mahjong, named Suphx, based on deep reinforcement learning with some newly introduced techniques.
Suphx has demonstrated stronger performance than most top human players in terms of stable rank.
This is the first time that a computer program outperforms most top human players in Mahjong.
arXiv Detail & Related papers (2020-03-30T16:18:16Z) - From Chess and Atari to StarCraft and Beyond: How Game AI is Driving the
World of AI [10.80914659291096]
Game AI has established itself as a research area for developing and testing the most advanced forms of AI algorithms.
Advances in Game AI are starting to be extended to areas outside of games, such as robotics or the synthesis of chemicals.
arXiv Detail & Related papers (2020-02-24T18:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.