Multi-AI competing and winning against humans in iterated
Rock-Paper-Scissors game
- URL: http://arxiv.org/abs/2003.06769v2
- Date: Mon, 23 Nov 2020 04:57:39 GMT
- Title: Multi-AI competing and winning against humans in iterated
Rock-Paper-Scissors game
- Authors: Lei Wang, Wenbin Huang, Yuanpeng Li, Julian Evans, Sailing He
- Abstract summary: We use an AI algorithm based on Markov Models of one fixed memory length to compete against humans in an iterated Rock Paper Scissors game.
We develop an architecture of multi-AI with changeable parameters to adapt to different competition strategies.
Our strategy could win against more than 95% of human opponents.
- Score: 4.2124879433151605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predicting and modeling human behavior, and finding trends in human
decision-making, is a major problem in social science. Rock Paper Scissors
(RPS) is a fundamental strategic question in many game theory problems and
real-world competitions. Finding the right approach to beat a particular human
opponent is challenging. Here we use an AI (artificial
particular human opponent is challenging. Here we use an AI (artificial
intelligence) algorithm based on Markov Models of one fixed memory length
(abbreviated as "single AI") to compete against humans in an iterated RPS game.
We model and predict human competition behavior by combining many Markov Models
with different fixed memory lengths (abbreviated as "multi-AI"), and develop an
architecture of multi-AI with changeable parameters to adapt to different
competition strategies. We introduce a parameter called "focus length" (a
positive integer such as 5 or 10) to control the speed and sensitivity with
which our multi-AI adapts to changes in the opponent's strategy. The focus
length is the number of previous rounds the multi-AI looks at when determining
which single AI has performed best and should be chosen to play the next
game. We experimented with 52 different people, each playing 300 rounds
continuously against one specific multi-AI model, and demonstrated that our
strategy could win against more than 95% of human opponents.
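The abstract describes the architecture closely enough to sketch. Below is a minimal Python sketch of the multi-AI scheme, with assumptions the abstract leaves open: the pool of memory lengths (1 through 5 here), a +1/0/-1 score for each single AI's hypothetical recent moves, and uniformly random play before a model has data. The paper's actual parameter choices, scoring rule, and tie-breaking may differ.

```python
import random
from collections import defaultdict, deque

MOVES = ("R", "P", "S")
BEATS = {"R": "P", "P": "S", "S": "R"}  # BEATS[x] is the move that beats x

class SingleAI:
    """Markov model of one fixed memory length: counts which move the
    human tends to play after each window of `memory` previous moves."""
    def __init__(self, memory):
        self.memory = memory
        self.counts = defaultdict(lambda: defaultdict(int))

    def predict(self, history):
        if len(history) < self.memory:
            return random.choice(MOVES)          # not enough data yet
        table = self.counts.get(tuple(history[-self.memory:]))
        if not table:
            return random.choice(MOVES)          # unseen context
        likely = max(table, key=table.get)       # human's most frequent follow-up
        return BEATS[likely]                     # play the move that beats it

    def update(self, history, human_move):
        # Record what the human played after the preceding `memory` moves.
        if len(history) >= self.memory:
            self.counts[tuple(history[-self.memory:])][human_move] += 1

class MultiAI:
    """Pool of single AIs with different memory lengths. Each round, play
    the move of whichever single AI scored best over the last
    `focus_length` rounds (the paper's adaptation parameter)."""
    def __init__(self, memories=(1, 2, 3, 4, 5), focus_length=10):
        self.ais = [SingleAI(m) for m in memories]
        self.outcomes = [deque(maxlen=focus_length) for _ in self.ais]
        self.history = []                        # human moves observed so far
        self.pending = [random.choice(MOVES) for _ in self.ais]

    def play(self):
        self.pending = [ai.predict(self.history) for ai in self.ais]
        scores = [sum(o) for o in self.outcomes]  # recent +1/0/-1 results
        return self.pending[scores.index(max(scores))]  # ties: lowest index

    def observe(self, human_move):
        # Score every single AI's hypothetical move, then update its model.
        for ai, move, o in zip(self.ais, self.pending, self.outcomes):
            if move == BEATS[human_move]:
                o.append(1)                      # that AI would have won
            elif human_move == BEATS[move]:
                o.append(-1)                     # it would have lost
            else:
                o.append(0)                      # draw
            ai.update(self.history, human_move)
        self.history.append(human_move)
```

One round of use: call play() for the bot's move, then observe() with the human's move; repeated over 300 rounds this mirrors the paper's experimental protocol.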
Related papers
- Toward Human-AI Alignment in Large-Scale Multi-Player Games [24.784173202415687]
We analyze extensive human gameplay data from Xbox's Bleeding Edge (100K+ games).
We find that while human players exhibit variability in fight-flight and explore-exploit behavior, AI players tend towards uniformity.
These stark differences underscore the need for interpretable evaluation, design, and integration of AI in human-aligned applications.
arXiv Detail & Related papers (2024-02-05T22:55:33Z)
- DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z)
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents must learn decisions that maximize their own goals while minimizing their adversaries' goals.
We propose a novel model composed of three neural layers that learns a representation of a competitive game, maps the strategies of specific opponents, and disrupts them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- Diversity is Strength: Mastering Football Full Game with Interactive Reinforcement Learning of Multiple AIs [4.020287169811583]
We propose Diversity is Strength (DIS), a novel DRL training framework that can simultaneously train multiple kinds of AIs.
These AIs are linked through an interconnected history model pool structure, which enhances their capabilities and strategy diversities.
We tested our method in an AI competition based on Google Research Football (GRF) and won the 5v5 and 11v11 tracks.
arXiv Detail & Related papers (2023-06-28T03:56:57Z)
- Diversity-based Deep Reinforcement Learning Towards Multidimensional Difficulty for Fighting Game AI [0.9645196221785693]
We introduce a diversity-based deep reinforcement learning approach for generating a set of agents of similar difficulty.
We find this approach outperforms a baseline trained with specialized, human-authored reward functions in both diversity and performance.
arXiv Detail & Related papers (2022-11-04T21:49:52Z)
- WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models [91.92346150646007]
In this work, we introduce WinoGAViL: an online game to collect vision-and-language associations.
We use the game to collect 3.5K instances, finding that they are intuitive for humans but challenging for state-of-the-art AI models.
Our analysis as well as the feedback we collect from players indicate that the collected associations require diverse reasoning skills.
arXiv Detail & Related papers (2022-07-25T23:57:44Z)
- AI in Games: Techniques, Challenges and Opportunities [40.86375378643978]
Various game AI systems (AIs), such as Libratus, OpenAI Five, and AlphaStar, have been developed and have beaten professional human players.
In this paper, we survey recent successful game AIs, covering board game AIs, card game AIs, first-person shooting game AIs and real time strategy game AIs.
arXiv Detail & Related papers (2021-11-15T09:35:53Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's responses.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
- Enhanced Rolling Horizon Evolution Algorithm with Opponent Model Learning: Results for the Fighting Game AI Competition [9.75720700239984]
We propose a novel algorithm that combines the Rolling Horizon Evolution Algorithm (RHEA) with opponent model learning (a minimal sketch of the generic RHEA loop follows this list).
Among the top five bots in the 2019 competition, our proposed bot with the policy-gradient-based opponent model is the only one that does not use Monte-Carlo Tree Search (MCTS).
arXiv Detail & Related papers (2020-03-31T04:44:33Z)
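For readers unfamiliar with Rolling Horizon Evolution, here is a minimal, generic sketch of the loop the RHEA entry above names, not the paper's implementation: evolve fixed-length action plans against a simulated opponent and execute only the first action of the best plan. The `step` forward model and `opponent_model` interfaces are hypothetical placeholders, as are the horizon, population, and budget values.

```python
import random

def rhea_move(state, actions, step, opponent_model,
              horizon=8, pop_size=20, generations=30, rng=random):
    """Minimal generic RHEA sketch. `step(state, my_action, opp_action) ->
    (next_state, reward)` is a hypothetical pure forward model, and
    `opponent_model(state) -> opp_action` stands in for a learned opponent
    model; both are assumptions, not the paper's API."""
    def fitness(plan):
        s, total = state, 0.0
        for a in plan:                        # roll the plan forward
            s, r = step(s, a, opponent_model(s))
            total += r
        return total

    # Start from a population of random action sequences of length `horizon`.
    pop = [[rng.choice(actions) for _ in range(horizon)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        for parent in elite:
            child = list(parent)
            child[rng.randrange(horizon)] = rng.choice(actions)  # point mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)[0]           # play first action, replan next tick
```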
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.