Incorporating Rivalry in Reinforcement Learning for a Competitive Game
- URL: http://arxiv.org/abs/2011.01337v1
- Date: Mon, 2 Nov 2020 21:54:18 GMT
- Authors: Pablo Barros, Ana Tanevska, Ozge Yalcin, Alessandra Sciutti
- Abstract summary: This study focuses on providing a novel learning mechanism based on the social impact of rivalry.
Based on the concept of competitive rivalry, our analysis aims to investigate whether we can change the assessment of these agents from a human perspective.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent advances in reinforcement learning with social agents have allowed us
to achieve human-level performance on some interaction tasks. However, most
interactive scenarios do not have performance alone as their end-goal; instead,
the social impact of these agents when interacting with humans is equally
important and, in most cases, never properly explored. This preregistration
study focuses on providing a novel learning mechanism based on the social
impact of rivalry. Our
scenario explored different reinforcement learning-based agents playing a
competitive card game against human players. Based on the concept of
competitive rivalry, our analysis aims to investigate whether we can change the
assessment of these agents from a human perspective.
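The abstract describes modulating an agent's learning with a rivalry signal. A minimal, purely illustrative sketch of one way such reward shaping could look is below; the `rivalry_score` formula, function names, and the weighting scheme are all assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of rivalry-modulated reward shaping for a competitive
# card-game agent. The rivalry formula and weighting below are illustrative
# assumptions, not the mechanism proposed in the paper.

def rivalry_score(own_wins: int, opponent_wins: int, games_played: int) -> float:
    """Toy rivalry signal: the closer the win records against a specific
    opponent, the tighter the rivalry, so the higher the score (in [0, 1])."""
    if games_played == 0:
        return 0.0
    return 1.0 - abs(own_wins - opponent_wins) / games_played

def shaped_reward(env_reward: float, own_wins: int, opponent_wins: int,
                  games_played: int, weight: float = 0.5) -> float:
    """Scale the environment reward by the rivalry score, so outcomes against
    a close rival are weighted more heavily than those against a weak opponent."""
    return env_reward * (1.0 + weight * rivalry_score(own_wins, opponent_wins,
                                                      games_played))
```

Under this toy scheme, a win against an opponent with an even 5-5 head-to-head record is worth more than the same win against an opponent the agent has beaten 10-0.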
Related papers
- Reciprocal Reward Influence Encourages Cooperation From Self-Interested Agents [2.1301560294088318]
Cooperation between self-interested individuals is a widespread phenomenon in the natural world, but remains elusive in interactions between artificially intelligent agents.
We introduce Reciprocators, reinforcement learning agents which are intrinsically motivated to reciprocate the influence of opponents' actions on their returns.
We show that Reciprocators can be used to promote cooperation in temporally extended social dilemmas during simultaneous learning.
arXiv Detail & Related papers (2024-06-03T06:07:27Z)
- Sharing the Cost of Success: A Game for Evaluating and Learning Collaborative Multi-Agent Instruction Giving and Following Policies [19.82683688911297]
We propose a challenging interactive reference game that requires two players to coordinate on vision and language observations.
We show that a standard Proximal Policy Optimization (PPO) setup achieves a high success rate when bootstrapped with partner behaviors.
We find that a pairing of neural partners indeed reduces the measured joint effort when playing together repeatedly.
arXiv Detail & Related papers (2024-03-26T08:58:28Z)
- SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z)
- Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This work proposes a novel reinforcement learning mechanism based on the social impact of rivalry behavior.
Our proposed model aggregates objective and social perception mechanisms to derive a rivalry score that is used to modulate the learning of artificial agents.
arXiv Detail & Related papers (2022-08-22T14:06:06Z)
- Warmth and competence in human-agent cooperation [0.7237068561453082]
Recent studies demonstrate that AI agents trained with deep reinforcement learning are capable of collaborating with humans.
We train deep reinforcement learning agents in Coins, a two-player social dilemma.
Participants' perceptions of warmth and competence predict their stated preferences for different agents.
arXiv Detail & Related papers (2022-01-31T18:57:08Z)
- Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria [57.74495091445414]
Social deduction games offer an avenue to study how individuals might learn to synthesize potentially unreliable information about others.
In this work, we present Hidden Agenda, a two-team social deduction game that provides a 2D environment for studying learning agents in scenarios of unknown team alignment.
Reinforcement learning agents trained in Hidden Agenda show that agents can learn a variety of behaviors, including partnering and voting, without the need for communication in natural language.
arXiv Detail & Related papers (2022-01-05T20:54:10Z)
- Moody Learners -- Explaining Competitive Behaviour of Reinforcement Learning Agents [65.2200847818153]
In a competitive scenario, the agent does not only have a dynamic environment but also is directly affected by the opponents' actions.
Observing an agent's Q-values is a common way of explaining its behavior; however, Q-values alone do not show the temporal relation between the selected actions.
arXiv Detail & Related papers (2020-07-30T11:30:42Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents in order to evaluate how the agents learn to be competitive, and we explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.