Incorporating Rivalry in Reinforcement Learning for a Competitive Game
- URL: http://arxiv.org/abs/2208.10327v1
- Date: Mon, 22 Aug 2022 14:06:06 GMT
- Title: Incorporating Rivalry in Reinforcement Learning for a Competitive Game
- Authors: Pablo Barros, Özge Nilay Yalçın, Ana Tanevska, Alessandra Sciutti
- Abstract summary: This work proposes a novel reinforcement learning mechanism based on the social impact of rivalry behavior.
Our proposed model aggregates objective and social perception mechanisms to derive a rivalry score that is used to modulate the learning of artificial agents.
- Score: 65.2200847818153
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent advances in reinforcement learning with social agents have allowed
such models to achieve human-level performance on specific interaction tasks.
However, in most interactive scenarios, task performance alone is not the end goal;
the social impact these agents have when interacting with humans is just as
important, yet it remains largely unexplored. In this regard, this work proposes a novel
reinforcement learning mechanism based on the social impact of rivalry
behavior. Our proposed model aggregates objective and social perception
mechanisms to derive a rivalry score that is used to modulate the learning of
artificial agents. To investigate our proposed model, we design an interactive
game scenario, using the Chef's Hat Card Game, and examine how the rivalry
modulation changes the agent's playing style, and how this impacts the
experience of human players in the game. Our results show that humans can
detect specific social characteristics when playing against rival agents when
compared to common agents, which directly affects the performance of the human
players in subsequent games. We conclude our work by discussing how the
different social and objective features that compose the artificial rivalry
score contribute to our results.
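The abstract above describes deriving a rivalry score from objective and social perception features and using it to modulate the agent's learning. A minimal sketch of that idea is below; the aggregation weights, feature inputs, and the multiplicative modulation rule are all illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a rivalry-modulated reward signal.
# Feature names, weights, and the modulation rule are assumptions
# for illustration; the paper's actual mechanism may differ.

def rivalry_score(objective_features, social_features,
                  w_obj=0.5, w_soc=0.5):
    """Aggregate objective and social perception features (each in [0, 1])
    into a single rivalry score via a weighted mean."""
    obj = sum(objective_features) / len(objective_features)
    soc = sum(social_features) / len(social_features)
    return w_obj * obj + w_soc * soc

def modulated_reward(base_reward, score, strength=1.0):
    """Scale the environment reward by the rivalry score, so a higher
    perceived rivalry amplifies the learning signal."""
    return base_reward * (1.0 + strength * score)

# Example: strong objective cues, weak social cues.
score = rivalry_score([0.8, 0.6], [0.4, 0.2])   # -> 0.5
reward = modulated_reward(1.0, score)            # -> 1.5
```

The modulated reward would then feed into an ordinary RL update (e.g. Q-learning or policy gradient) in place of the raw environment reward, which is how the score "modulates the learning" in a drop-in way.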
Related papers
- Reciprocal Reward Influence Encourages Cooperation From Self-Interested Agents [2.1301560294088318]
Cooperation between self-interested individuals is a widespread phenomenon in the natural world, but remains elusive in interactions between artificially intelligent agents.
We introduce Reciprocators, reinforcement learning agents which are intrinsically motivated to reciprocate the influence of opponents' actions on their returns.
We show that Reciprocators can be used to promote cooperation in temporally extended social dilemmas during simultaneous learning.
arXiv Detail & Related papers (2024-06-03T06:07:27Z) - SocialBench: Sociality Evaluation of Role-Playing Conversational Agents [85.6641890712617]
Large language models (LLMs) have advanced the development of various AI conversational agents.
SocialBench is the first benchmark designed to evaluate the sociality of role-playing conversational agents at both individual and group levels.
We find that agents excelling at the individual level do not necessarily perform well at the group level.
arXiv Detail & Related papers (2024-03-20T15:38:36Z) - SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z) - Aligning to Social Norms and Values in Interactive Narratives [89.82264844526333]
We focus on creating agents that act in alignment with socially beneficial norms and values in interactive narratives or text-based games.
We introduce the GALAD agent that uses the social commonsense knowledge present in specially trained language models to contextually restrict its action space to only those actions that are aligned with socially beneficial values.
arXiv Detail & Related papers (2022-05-04T09:54:33Z) - Warmth and competence in human-agent cooperation [0.7237068561453082]
Recent studies demonstrate that AI agents trained with deep reinforcement learning are capable of collaborating with humans.
We train deep reinforcement learning agents in Coins, a two-player social dilemma.
Participants' perceptions of warmth and competence predict their stated preferences for different agents.
arXiv Detail & Related papers (2022-01-31T18:57:08Z) - Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This study focuses on providing a novel learning mechanism based on a rivalry social impact.
Based on the concept of competitive rivalry, our analysis aims to investigate if we can change the assessment of these agents from a human perspective.
arXiv Detail & Related papers (2020-11-02T21:54:18Z) - Moody Learners -- Explaining Competitive Behaviour of Reinforcement Learning Agents [65.2200847818153]
In a competitive scenario, the agent not only faces a dynamic environment but is also directly affected by the opponents' actions.
Observing an agent's Q-values is a common way of explaining its behavior; however, Q-values do not show the temporal relation between the selected actions.
arXiv Detail & Related papers (2020-07-30T11:30:42Z)