A Novel Weighted Ensemble Learning Based Agent for the Werewolf Game
- URL: http://arxiv.org/abs/2205.09813v1
- Date: Thu, 19 May 2022 19:19:29 GMT
- Title: A Novel Weighted Ensemble Learning Based Agent for the Werewolf Game
- Authors: Mohiuddeen Khan, Claus Aranha
- Abstract summary: Werewolf is a popular party game throughout the world, and research on its significance has progressed in recent years.
In this research, we generated a sophisticated agent to play the Werewolf game using a complex weighted ensemble learning approach.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Werewolf is a popular party game throughout the world, and research on its
significance has progressed in recent years. The Werewolf game is based on
conversation, and in order to win, participants must use all of their cognitive
abilities. This communication game requires the playing agents to be very
sophisticated to win. In this research, we generated a sophisticated agent to
play the Werewolf game using a complex weighted ensemble learning approach.
This work aimed to estimate what other agents/players think of our agent in the
game. The agent was developed by aggregating the strategies of different
participants in the AI Wolf competition and learning from them using machine
learning. Moreover, the resulting agent performed much better than competitors
that relied on very basic strategies, demonstrating the approach's
effectiveness in the Werewolf game. The machine learning technique used here is
not restricted to the Werewolf game; it may be extended to any game that
requires communication and action depending on other participants.
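The abstract's core idea, combining several borrowed strategies under learned weights, can be sketched as follows. This is a rough illustration only, not the authors' actual implementation: the strategy functions, the voting scheme, and the multiplicative-weights update shown here are all hypothetical stand-ins for whatever the paper's ensemble actually uses.

```python
# Hypothetical sketch of a weighted ensemble of player strategies.
# The real agent's feature set, weighting scheme, and AI Wolf interface differ.

def ensemble_vote(strategies, weights, game_state):
    """Aggregate action votes from base strategies, weighted by learned weights."""
    scores = {}
    for strategy, w in zip(strategies, weights):
        action = strategy(game_state)
        scores[action] = scores.get(action, 0.0) + w
    # Pick the action with the highest total weight.
    return max(scores, key=scores.get)

def update_weights(weights, correct, lr=0.5):
    """Multiplicative-weights style update: down-weight strategies that erred."""
    new = [w * (1.0 if ok else 1.0 - lr) for w, ok in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new]

# Toy usage: three fixed strategies voting on whom to suspect.
s1 = lambda state: "player_3"
s2 = lambda state: "player_3"
s3 = lambda state: "player_5"
weights = [1 / 3, 1 / 3, 1 / 3]
print(ensemble_vote([s1, s2, s3], weights, {}))  # prints "player_3"
weights = update_weights(weights, correct=[True, True, False])
```

After the update, the strategy that voted incorrectly loses weight, so future votes lean toward the strategies that have predicted well so far.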
Related papers
- Learning to Discuss Strategically: A Case Study on One Night Ultimate Werewolf [28.57358844115881]
One Night Ultimate Werewolf (ONUW) requires players to develop strategic discussion policies.
We propose an RL-instructed language agent framework, where a discussion policy trained by reinforcement learning (RL) is employed.
Our experimental results on several ONUW game settings demonstrate the effectiveness and generalizability of our proposed framework.
arXiv Detail & Related papers (2024-05-30T11:07:06Z)
- Neural Population Learning beyond Symmetric Zero-sum Games [52.20454809055356]
We introduce NeuPL-JPSRO, a neural population learning algorithm that benefits from transfer learning of skills and converges to a Coarse Correlated Equilibrium (CCE) of the game.
Our work shows that equilibrium convergent population learning can be implemented at scale and in generality.
arXiv Detail & Related papers (2024-01-10T12:56:24Z)
- ALYMPICS: LLM Agents Meet Game Theory -- Exploring Strategic Decision-Making with AI Agents [77.34720446306419]
Alympics is a systematic simulation framework utilizing Large Language Model (LLM) agents for game theory research.
Alympics creates a versatile platform for studying complex game theory problems.
arXiv Detail & Related papers (2023-11-06T16:03:46Z)
- Language Agents with Reinforcement Learning for Strategic Play in the Werewolf Game [40.438765131992525]
We develop strategic language agents that generate flexible language actions and possess strong decision-making abilities.
To mitigate the intrinsic bias in language actions, our agents use an LLM to perform deductive reasoning and generate a diverse set of action candidates.
Experiments show that our agents overcome the intrinsic bias and outperform existing LLM-based agents in the Werewolf game.
arXiv Detail & Related papers (2023-10-29T09:02:57Z)
- Playing the Werewolf game with artificial intelligence for language understanding [0.7550566004119156]
Werewolf is a social deduction game based on free natural language communication.
The purpose of this study is to develop an AI agent that can play Werewolf through natural language conversations.
arXiv Detail & Related papers (2023-02-21T13:03:20Z)
- Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning [95.78031053296513]
No-press Diplomacy is a complex strategy game involving both cooperation and competition.
We introduce a planning algorithm we call DiL-piKL that regularizes a reward-maximizing policy toward a human imitation-learned policy.
We show that DiL-piKL can be extended into a self-play reinforcement learning algorithm we call RL-DiL-piKL.
arXiv Detail & Related papers (2022-10-11T14:47:35Z)
- Solving Royal Game of Ur Using Reinforcement Learning [0.0]
We train our agents using different methods, namely Monte Carlo, Q-learning, and Expected Sarsa, to learn an optimal policy for the strategic Royal Game of Ur.
Although it is hard to conclude which algorithm performs better overall when trained with limited resources, Expected Sarsa shows promising results in terms of fastest learning.
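The Expected Sarsa update named above has a standard generic form, sketched below. This is the textbook update rule, not the authors' Royal Game of Ur implementation; the state/action encoding and epsilon-greedy policy here are illustrative assumptions.

```python
# Generic Expected Sarsa update:
# Q(s,a) += alpha * (r + gamma * E_pi[Q(s', .)] - Q(s,a))
from collections import defaultdict

def epsilon_greedy_probs(q, state, actions, eps=0.1):
    """Action probabilities under an epsilon-greedy policy over Q."""
    best = max(actions, key=lambda a: q[(state, a)])
    probs = {a: eps / len(actions) for a in actions}
    probs[best] += 1.0 - eps
    return probs

def expected_sarsa_update(q, s, a, r, s_next, actions,
                          alpha=0.1, gamma=0.9, eps=0.1):
    """Update Q(s,a) toward the policy-expected value of the next state."""
    probs = epsilon_greedy_probs(q, s_next, actions, eps)
    expected_q = sum(p * q[(s_next, a2)] for a2, p in probs.items())
    q[(s, a)] += alpha * (r + gamma * expected_q - q[(s, a)])

q = defaultdict(float)
expected_sarsa_update(q, s=0, a="move", r=1.0, s_next=1,
                      actions=["move", "pass"])
print(round(q[(0, "move")], 3))  # prints 0.1
```

Unlike plain Sarsa, the target averages over all next actions weighted by the policy, which removes the sampling noise of the next action and often speeds early learning.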
arXiv Detail & Related papers (2022-08-23T01:26:37Z)
- Learning Monopoly Gameplay: A Hybrid Model-Free Deep Reinforcement Learning and Imitation Learning Approach [31.066718635447746]
Reinforcement Learning (RL) relies on an agent interacting with an environment to maximize the cumulative sum of rewards received by it.
In the multi-player game of Monopoly, players have to make several decisions every turn involving complex actions, such as making trades.
This paper introduces a Hybrid Model-Free Deep RL (DRL) approach that is capable of playing and learning winning strategies of Monopoly.
arXiv Detail & Related papers (2021-03-01T01:40:02Z)
- Learning to Play Imperfect-Information Games by Imitating an Oracle Planner [77.67437357688316]
We consider learning to play multiplayer imperfect-information games with simultaneous moves and large state-action spaces.
Our approach is based on model-based planning.
We show that the planner is able to discover efficient playing strategies in the games of Clash Royale and Pommerman.
arXiv Detail & Related papers (2020-12-22T17:29:57Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's response.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.