Learning Monopoly Gameplay: A Hybrid Model-Free Deep Reinforcement
Learning and Imitation Learning Approach
- URL: http://arxiv.org/abs/2103.00683v1
- Date: Mon, 1 Mar 2021 01:40:02 GMT
- Title: Learning Monopoly Gameplay: A Hybrid Model-Free Deep Reinforcement
Learning and Imitation Learning Approach
- Authors: Marina Haliem, Trevor Bonjour, Aala Alsalem, Shilpa Thomas, Hongyu Li,
Vaneet Aggarwal, Bharat Bhargava, and Mayank Kejriwal
- Abstract summary: Reinforcement Learning (RL) relies on an agent interacting with an environment to maximize the cumulative sum of rewards it receives.
In a multi-player Monopoly game, players must make several decisions every turn, many of which involve complex actions, such as making trades.
This paper introduces a Hybrid Model-Free Deep RL (DRL) approach that is capable of playing and learning winning strategies for Monopoly.
- Score: 31.066718635447746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning how to adapt and make real-time, informed decisions in dynamic and
complex environments is a challenging problem. To learn this task,
Reinforcement Learning (RL) relies on an agent interacting with an environment
and learning through trial and error to maximize the cumulative sum of rewards
it receives. In a multi-player Monopoly game, players must make several
decisions every turn, many of which involve complex actions, such as making trades.
This makes decision-making harder and thus presents a highly complicated
task for an RL agent that must both play the game and learn winning strategies. In this paper,
we introduce a Hybrid Model-Free Deep RL (DRL) approach that is capable of
playing and learning winning strategies for the popular board game Monopoly. To
achieve this, our DRL agent (1) starts its learning process by imitating a
rule-based agent (which resembles human logic) to initialize its policy, and (2)
learns successful actions and improves its policy using DRL. Experimental
results demonstrate the intelligent behavior of our proposed agent, which achieves
high win rates against different types of agent-players.
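The abstract describes the hybrid recipe only at a high level. Below is a minimal, illustrative sketch of that two-phase idea, not the authors' implementation: behavior cloning against a rule-based teacher to initialize the policy, followed by a model-free policy-gradient update. The state encoding, action set, network sizes, and the REINFORCE-style update are all assumptions made for illustration (PyTorch is used for brevity).

```python
# Sketch of the two-phase hybrid approach described in the abstract.
# Phase 1: imitate a rule-based agent (behavior cloning) to initialize the policy.
# Phase 2: improve the policy with a model-free policy-gradient update.
# All sizes, the action set, and the teacher data are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 240      # assumed encoding of the Monopoly board, cash, and properties
NUM_ACTIONS = 16     # assumed discrete action set (buy, trade, mortgage, ...)

class PolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.head = nn.Linear(256, NUM_ACTIONS)

    def forward(self, state):
        return self.head(self.body(state))   # action logits

def imitation_step(policy, opt, states, teacher_actions):
    """Phase 1: behavior cloning against the rule-based agent's choices."""
    loss = F.cross_entropy(policy(states), teacher_actions)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def policy_gradient_step(policy, opt, states, actions, returns):
    """Phase 2: model-free improvement from game returns (REINFORCE, for brevity)."""
    logp = F.log_softmax(policy(states), dim=-1)
    chosen = logp.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(chosen * returns).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

if __name__ == "__main__":
    policy = PolicyNet()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

    # Phase 1: imitate a (hypothetical) rule-based teacher on logged decisions.
    states = torch.randn(64, STATE_DIM)                      # stand-in for encoded game states
    teacher_actions = torch.randint(0, NUM_ACTIONS, (64,))   # stand-in for teacher labels
    imitation_step(policy, opt, states, teacher_actions)

    # Phase 2: refine the initialized policy from game outcomes.
    actions = torch.randint(0, NUM_ACTIONS, (64,))
    returns = torch.randn(64)                                # stand-in for discounted returns
    policy_gradient_step(policy, opt, states, actions, returns)
```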
Related papers
- Two-Step Reinforcement Learning for Multistage Strategy Card Game [0.0]
This study introduces a two-step reinforcement learning (RL) strategy tailored for "The Lord of the Rings: The Card Game" (LOTRCG).
This research diverges from conventional RL methods by adopting a phased learning approach.
The paper also explores a multi-agent system, where distinct RL agents are employed for various decision-making aspects of the game.
arXiv Detail & Related papers (2023-11-29T01:31:21Z)
- ALYMPICS: LLM Agents Meet Game Theory -- Exploring Strategic Decision-Making with AI Agents [77.34720446306419]
Alympics is a systematic simulation framework utilizing Large Language Model (LLM) agents for game theory research.
Alympics creates a versatile platform for studying complex game theory problems.
arXiv Detail & Related papers (2023-11-06T16:03:46Z)
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents have to learn decisions that maximize their goals and minimize their adversaries' goals at the same time.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, how to map the strategies of specific opponents, and how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- Generating Personas for Games with Multimodal Adversarial Imitation Learning [47.70823327747952]
Reinforcement learning has been widely successful in producing agents capable of playing games at a human level.
Going beyond reinforcement learning is necessary to model a wide range of human playstyles.
This paper presents a novel imitation learning approach to generate multiple persona policies for playtesting.
arXiv Detail & Related papers (2023-08-15T06:58:19Z)
- Centralized control for multi-agent RL in a complex Real-Time-Strategy game [0.0]
Multi-agent Reinforcement learning (MARL) studies the behaviour of multiple learning agents that coexist in a shared environment.
MARL is more challenging than single-agent RL because it involves more complex learning dynamics.
This project provides the end-to-end experience of applying RL in the Lux AI v2 Kaggle competition.
arXiv Detail & Related papers (2023-04-25T17:19:05Z)
- Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning [95.78031053296513]
No-press Diplomacy is a complex strategy game involving both cooperation and competition.
We introduce a planning algorithm we call DiL-piKL that regularizes a reward-maximizing policy toward a human imitation-learned policy.
We show that DiL-piKL can be extended into a self-play reinforcement learning algorithm we call RL-DiL-piKL.
arXiv Detail & Related papers (2022-10-11T14:47:35Z)
- Reinforcement Learning Agents in Colonel Blotto [0.0]
We focus on a specific instance of agent-based models, which uses reinforcement learning (RL) to train the agent how to act in its environment.
We find that the RL agent handily beats a single opponent, and still performs quite well when the number of opponents is increased.
We also analyze the RL agent and examine which strategies it has arrived at by inspecting the actions to which it assigns the highest and lowest Q-values.
arXiv Detail & Related papers (2022-04-04T16:18:01Z)
- Explore and Control with Adversarial Surprise [78.41972292110967]
Reinforcement learning (RL) provides a framework for learning goal-directed policies given user-specified rewards.
We propose a new unsupervised RL technique based on an adversarial game which pits two policies against each other to compete over the amount of surprise an RL agent experiences.
We show that our method leads to the emergence of complex skills by exhibiting clear phase transitions.
arXiv Detail & Related papers (2021-07-12T17:58:40Z)
- Learning to Play No-Press Diplomacy with Best Response Policy Iteration [31.367850729299665]
We apply deep reinforcement learning methods to Diplomacy, a 7-player board game.
We show that our agents convincingly outperform the previous state-of-the-art, and game theoretic equilibrium analysis shows that the new process yields consistent improvements.
arXiv Detail & Related papers (2020-06-08T14:33:31Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)