Learning from Learners: Adapting Reinforcement Learning Agents to be
Competitive in a Card Game
- URL: http://arxiv.org/abs/2004.04000v1
- Date: Wed, 8 Apr 2020 14:11:05 GMT
- Authors: Pablo Barros, Ana Tanevska, Alessandra Sciutti
- Abstract summary: We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents in order to evaluate how the agents learn to be competitive, and we explain how they adapt to each other's playing style.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Learning how to adapt to complex and dynamic environments is one of the most
important factors that contribute to our intelligence. Endowing artificial
agents with this ability is not a simple task, particularly in competitive
scenarios. In this paper, we present a broad study on how popular reinforcement
learning algorithms can be adapted and implemented to learn and to play a
real-world implementation of a competitive multiplayer card game. We propose
specific training and validation routines for the learning agents, in order to
evaluate how the agents learn to be competitive and explain how they adapt to
each other's playing style. Finally, we pinpoint how the behavior of each agent
derives from their learning style and create a baseline for future research on
this scenario.
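The training and validation routines the abstract describes can be illustrated with a minimal self-play sketch. Everything below is a hypothetical stand-in: the toy trick-taking game, the tabular Q-learning agents, and all names are illustrative only, while the paper works with deep RL algorithms and a real-world card game implementation.

```python
import random
from collections import defaultdict

class QAgent:
    """Minimal tabular Q-learning agent (hypothetical; the paper uses deep RL)."""
    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state, legal):
        # Epsilon-greedy action selection over the legal cards only.
        if random.random() < self.epsilon:
            return random.choice(legal)
        return max(legal, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, next_legal):
        target = reward
        if next_legal:  # non-terminal: bootstrap from the next state's value
            target += self.gamma * max(self.q[(next_state, a)] for a in next_legal)
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

def play_hand(p0, p1, learn=True):
    """One hand of a toy trick-taking game: both players hold cards {0,1,2};
    in each trick the higher card scores +1 (a stand-in for the real game)."""
    hands, score = [[0, 1, 2], [0, 1, 2]], 0
    for _ in range(3):
        s = tuple(hands[0]), tuple(hands[1])
        c0 = p0.act(s[0], hands[0])
        c1 = p1.act(s[1], hands[1])
        hands[0].remove(c0); hands[1].remove(c1)
        r = (c0 > c1) - (c0 < c1)  # +1 win, 0 tie, -1 loss for player 0
        if learn:  # zero-sum: the two learners receive opposite rewards
            p0.update(s[0], c0, r, tuple(hands[0]), hands[0])
            p1.update(s[1], c1, -r, tuple(hands[1]), hands[1])
        score += r
    return score  # > 0 means player 0 won the hand

# Training routine: the two learners repeatedly play against each other,
# each adapting to the other's evolving style.
a, b = QAgent(), QAgent()
for _ in range(2000):
    play_hand(a, b)
```

A validation routine in this spirit would freeze both policies (learn=False and epsilon set to 0) and score the agents against each other or against held-out opponents.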
Related papers
- Fast Peer Adaptation with Context-aware Exploration [63.08444527039578]
We propose a peer identification reward for learning agents in multi-agent games.
This reward motivates the agent to learn a context-aware policy for effective exploration and fast adaptation.
We evaluate our method on diverse testbeds that involve competitive (Kuhn Poker), cooperative (PO-Overcooked), or mixed (Predator-Prey-W) games with peer agents.
arXiv Detail & Related papers (2024-02-04T13:02:27Z)
- Curriculum Learning for Cooperation in Multi-Agent Reinforcement Learning [7.336421550829047]
In a competitive setting, a learning agent can be trained by making it compete with a curriculum of increasingly skilled opponents.
A general intelligent agent should also be able to learn to act around other agents and cooperate with them to achieve common goals.
In this paper, we aim to answer the question of what kind of cooperative teammate, and what curriculum of teammates, a learning agent should be trained with to achieve these two objectives.
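The opponent-curriculum idea in this summary can be sketched with a toy one-shot game: a learner is trained against scripted opponents of increasing skill. The skill parameterization and bandit-style learner below are hypothetical illustrations, not the paper's setup.

```python
import random

def make_opponent(skill):
    """Scripted opponent for a one-shot 'high card wins' toy game: plays its
    best card with probability `skill`, else uniformly at random.
    (Hypothetical; in the papers the opponents are previously trained agents.)"""
    def policy():
        return 2 if random.random() < skill else random.choice([0, 1, 2])
    return policy

def curriculum_train(skills=(0.0, 0.5, 0.9), episodes_per_stage=1000,
                     alpha=0.1, epsilon=0.1):
    """Train a bandit-style learner through a curriculum of opponents,
    ordered from unskilled to skilled."""
    q = [0.0, 0.0, 0.0]               # value estimate per card
    for skill in skills:              # the curriculum: easy -> hard
        opp = make_opponent(skill)
        for _ in range(episodes_per_stage):
            a = random.randrange(3) if random.random() < epsilon \
                else max(range(3), key=q.__getitem__)
            b = opp()
            r = 1.0 if a > b else (-1.0 if b > a else 0.0)
            q[a] += alpha * (r - q[a])  # incremental value update
        print(f"skill={skill:.1f}  q={[round(v, 2) for v in q]}")
    return q
```

Each stage reuses the value estimates from the previous one, which is the point of a curriculum: competence gained against weak opponents transfers to stronger ones.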
arXiv Detail & Related papers (2023-12-19T00:59:16Z)
- Peer Learning: Learning Complex Policies in Groups from Scratch via Action Recommendations [16.073203911932872]
Peer learning is a novel high-level reinforcement learning framework for agents learning in groups.
We show that peer learning is able to outperform single agent learning and the baseline in several challenging OpenAI Gym domains.
arXiv Detail & Related papers (2023-12-15T17:01:35Z)
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents have to learn decisions that maximize their goals and minimize their adversaries' goals at the same time.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, learn how to map the strategies of specific opponents, and learn how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- Learning to Play Text-based Adventure Games with Maximum Entropy Reinforcement Learning [4.698846136465861]
We adapt the soft actor-critic (SAC) algorithm to the text-based environment.
We show that the reward shaping technique helps the agent to learn the policy faster and achieve higher scores.
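The reward shaping mentioned here can be illustrated with potential-based shaping, a standard form that provably preserves the optimal policy. The potential function phi is a hypothetical progress heuristic (e.g. fraction of maximum game score), not this paper's specific shaping scheme.

```python
def shaped_reward(reward, phi_s, phi_next, gamma=0.99):
    """Potential-based reward shaping: r' = r + gamma * phi(s') - phi(s).
    phi is a heuristic estimate of how promising a state is (hypothetical
    here); this form is known to leave the optimal policy unchanged."""
    return reward + gamma * phi_next - phi_s
```

Over a full episode the shaping terms telescope, so the total extra reward depends only on the first and last states; the agent cannot inflate its return by looping through intermediate states.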
arXiv Detail & Related papers (2023-02-21T15:16:12Z)
- Conditional Imitation Learning for Multi-Agent Games [89.897635970366]
We study the problem of conditional multi-agent imitation learning, where we have access to joint trajectory demonstrations at training time.
We propose a novel approach to address the difficulties of scalability and data scarcity.
Our model learns a low-rank subspace over ego and partner agent strategies, then infers and adapts to a new partner strategy by interpolating in the subspace.
arXiv Detail & Related papers (2022-01-05T04:40:13Z)
- Policy Fusion for Adaptive and Customizable Reinforcement Learning Agents [137.86426963572214]
We show how to combine distinct behavioral policies to obtain a meaningful "fusion" policy.
We propose four different policy fusion methods for combining pre-trained policies.
We provide several practical examples and use-cases for how these methods are indeed useful for video game production and designers.
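Two generic fusion rules give a sense of what combining pre-trained policies can look like; these are illustrative sketches only, not the four fusion methods the paper actually proposes.

```python
import math

def fuse_policies(distributions, weights=None, method="mixture"):
    """Combine per-action probability distributions from pre-trained policies.
    Two illustrative rules (hypothetical, not the paper's methods):
      - 'mixture': weighted average of the action probabilities
      - 'product': renormalized weighted geometric mean (product of experts)
    """
    n = len(distributions[0])
    if weights is None:
        weights = [1.0 / len(distributions)] * len(distributions)
    if method == "mixture":
        fused = [sum(w * d[a] for w, d in zip(weights, distributions))
                 for a in range(n)]
    else:  # product of experts: sharpens toward actions all policies favor
        fused = [math.prod(d[a] ** w for w, d in zip(weights, distributions))
                 for a in range(n)]
    z = sum(fused)
    return [p / z for p in fused]
```

The mixture preserves any behavior some component policy exhibits, while the product concentrates on actions that every component assigns high probability, which is one way to get a "meaningful" blend of styles.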
arXiv Detail & Related papers (2021-04-21T16:08:44Z)
- Learning Monopoly Gameplay: A Hybrid Model-Free Deep Reinforcement Learning and Imitation Learning Approach [31.066718635447746]
Reinforcement Learning (RL) relies on an agent interacting with an environment to maximize the cumulative sum of rewards received by it.
In the multi-player game of Monopoly, players have to make several decisions every turn, which involve complex actions such as making trades.
This paper introduces a Hybrid Model-Free Deep RL (DRL) approach that is capable of playing and learning winning strategies of Monopoly.
arXiv Detail & Related papers (2021-03-01T01:40:02Z)
- Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This study focuses on providing a novel learning mechanism based on a rivalry social impact.
Based on the concept of competitive rivalry, our analysis aims to investigate whether we can change the assessment of these agents from a human perspective.
arXiv Detail & Related papers (2020-11-02T21:54:18Z)
- Natural Emergence of Heterogeneous Strategies in Artificially Intelligent Competitive Teams [0.0]
We develop a competitive multi-agent environment called FortAttack in which two teams compete against each other.
We observe a natural emergence of heterogeneous behavior amongst homogeneous agents when such behavior can lead to the team's success.
We propose ensemble training, in which we utilize the evolved opponent strategies to train a single policy for friendly agents.
arXiv Detail & Related papers (2020-07-06T22:35:56Z)
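Ensemble training as summarized above, i.e. training one policy against a pool of frozen, evolved opponent strategies, can be sketched with a rock-paper-scissors toy. The pool, the learner, and all values below are hypothetical illustrations, not the FortAttack setup.

```python
import random

# Rock-paper-scissors encoding: 0=rock, 1=paper, 2=scissors.
BEATS = {0: 2, 1: 0, 2: 1}  # action -> the action it beats

def ensemble_train(opponent_pool, episodes=3000, alpha=0.05, epsilon=0.1):
    """Ensemble training sketch: sample one frozen opponent strategy from the
    pool each episode, so the single learned policy must do well against the
    whole ensemble rather than overfit to one adversary."""
    q = [0.0, 0.0, 0.0]  # value estimate per action
    for _ in range(episodes):
        opp_action = random.choice(opponent_pool)()  # sample a frozen opponent
        a = random.randrange(3) if random.random() < epsilon \
            else max(range(3), key=q.__getitem__)
        r = 1.0 if BEATS[a] == opp_action else \
            (-1.0 if BEATS[opp_action] == a else 0.0)
        q[a] += alpha * (r - q[a])
    return q

# Pool of frozen strategies: two opponents always play rock, one always paper.
pool = [lambda: 0, lambda: 0, lambda: 1]
```

Against this pool the learner converges toward the best response to the mixture of opponent strategies (here, playing paper), which is the intended effect of training against an ensemble rather than a single fixed rival.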
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.