Trust-ya: design of a multiplayer game for the study of small group processes
- URL: http://arxiv.org/abs/2109.04037v1
- Date: Thu, 9 Sep 2021 05:13:14 GMT
- Title: Trust-ya: design of a multiplayer game for the study of small group processes
- Authors: Jerry Huang, Joshua Jung, Neil Budnarain, Benn McGregor, Jesse Hoey
- Abstract summary: This paper presents the design of a cooperative multi-player betting game, Trust-ya, as a model of some elements of status processes in human groups.
The game is designed to elicit status-driven leader-follower behaviours as a means to observe and influence social hierarchy.
- Score: 2.029924828197095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents the design of a cooperative multi-player betting game,
Trust-ya, as a model of some elements of status processes in human groups. The
game is designed to elicit status-driven leader-follower behaviours as a means
to observe and influence social hierarchy. It involves a Bach/Stravinsky game
of deference in a group, in which people on each turn can either invest with
another player or hope someone invests with them. Players who receive
investment capital are able to gamble for payoffs from a central pool which
then can be shared back with those who invested (but a portion of it may be
kept, including all of it). Bigger gambles (those backed by more investors) get
bigger payoffs. Thus, there is a natural tendency for players to coalesce as
investors around a 'leader' who gambles, but who also shares sufficiently from
their winnings to keep the investors 'hanging on'. The 'leader' will want to
keep as much as possible for themselves, however. The game is played
anonymously, but a set of 'status symbols' can be purchased which have no value
in the game itself, but can serve as a 'cheap talk' communication device with
other players. This paper introduces the game, relates it to status theory in
social psychology, and shows some simple simulated and human experiments that
demonstrate how the game can be used to study status processes and dynamics in
human groups.
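The abstract gives the game's structure but not its exact payoff formulas, so the following is only a minimal Python sketch of one Trust-ya round under illustrative assumptions: each investor stakes one unit with a chosen leader, the leader's gamble pays off with 50% probability and scales with the number of backers, and the leader keeps a fixed fraction of any winnings. The stake size, `pool_factor`, win probability, and `keep_fraction` values are all hypothetical, not the paper's parameters.

```python
import random

def play_round(wealth, choices, keep_fraction, pool_factor=3.0, rng=random):
    """One illustrative Trust-ya round (assumed payoff rules, not the paper's).

    wealth        -- dict: player -> current capital
    choices       -- dict: player -> leader they invest with, or None to wait
    keep_fraction -- dict: leader -> fraction of winnings kept for themselves
    """
    # Group investors behind the leader each of them chose.
    backers = {}
    for investor, leader in choices.items():
        if leader is not None and investor != leader:
            backers.setdefault(leader, []).append(investor)
            wealth[investor] -= 1.0              # stake one unit of capital

    for leader, group in backers.items():
        # Assumed gamble: 50/50 win, payoff from the central pool grows with
        # the number of investors, so bigger coalitions mean bigger payoffs.
        winnings = pool_factor * len(group) if rng.random() < 0.5 else 0.0
        kept = winnings * keep_fraction.get(leader, 0.5)
        wealth[leader] += kept
        share = (winnings - kept) / len(group)   # remainder shared with backers
        for investor in group:
            wealth[investor] += share
    return wealth

# Example: A and B back D, C waits, D keeps 40% of any winnings.
wealth = play_round({p: 10.0 for p in "ABCD"},
                    {"A": "D", "B": "D", "C": None, "D": None},
                    {"D": 0.4})
print(wealth)
```

The tension the abstract describes lives in `keep_fraction`: a leader who keeps too much risks losing backers on later turns, while one who shares too generously gains little from leading.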
Related papers
- Quantum Bayesian Games [0.09208007322096534]
We apply a Bayesian agent-based framework inspired by QBism to iterations of two quantum games, the CHSH game and the quantum prisoners' dilemma.
In each two-player game, players hold beliefs about an amount of shared entanglement and about the actions or beliefs of the other player.
We simulate iterated play to see if and how players can learn about the presence of shared entanglement and to explore how their performance, their beliefs, and the game's structure interrelate.
arXiv Detail & Related papers (2024-08-04T15:15:42Z)
- People use fast, goal-directed simulation to reason about novel games [75.25089384921557]
We study how people reason about a range of simple but novel connect-n style board games.
We ask people to judge how fair and how fun the games are from very little experience.
arXiv Detail & Related papers (2024-07-19T07:59:04Z)
- Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback [97.54519989641388]
We study whether multiple large language models (LLMs) can autonomously improve each other in a negotiation game by playing, reflecting, and criticizing.
Only a subset of the language models we consider can self-play and improve the deal price from AI feedback.
arXiv Detail & Related papers (2023-05-17T11:55:32Z)
- Towards Understanding Player Behavior in Blockchain Games: A Case Study of Aavegotchi [5.512004213807026]
We try to see the big picture in a small way to explore and determine the impact of gameplay and financial factors on player behavior in blockchain games.
Our results reveal that the whole game is sustained by a small number of players with high-frequency interaction or vast amounts of invested funds.
arXiv Detail & Related papers (2022-10-24T08:01:34Z)
- Collusion Detection in Team-Based Multiplayer Games [57.153233321515984]
We propose a system that detects colluding behaviors in team-based multiplayer games.
The proposed method analyzes the players' social relationships paired with their in-game behavioral patterns.
We then automate the detection using Isolation Forest, an unsupervised learning technique specialized in highlighting outliers.
arXiv Detail & Related papers (2022-03-10T02:37:39Z)
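As a rough illustration of the Isolation Forest step named in the collusion-detection entry above, the sketch below fits scikit-learn's IsolationForest to synthetic per-pair features. The features and the contamination rate are placeholders; the entry does not specify the actual social and behavioural features used.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-ins for per-pair features (e.g. matches played together,
# mutually beneficial "mistakes", shared social ties) -- placeholders only.
rng = np.random.default_rng(0)
normal_pairs = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
colluding_pairs = rng.normal(loc=4.0, scale=1.0, size=(10, 3))
X = np.vstack([normal_pairs, colluding_pairs])

# Isolation Forest scores points by how quickly random splits isolate them;
# rare, extreme player pairs get isolated early and are flagged as outliers.
detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
flagged = np.where(detector.predict(X) == -1)[0]   # -1 marks outliers
print(f"{len(flagged)} player pairs flagged for manual review")
```

Flagged pairs are only candidates for review, consistent with the entry's framing of the forest as a tool for highlighting outliers rather than issuing a final verdict.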
- Suphx: Mastering Mahjong with Deep Reinforcement Learning [114.68233321904623]
We design an AI for Mahjong, named Suphx, based on deep reinforcement learning with some newly introduced techniques.
Suphx has demonstrated stronger performance than most top human players in terms of stable rank.
This is the first time that a computer program outperforms most top human players in Mahjong.
arXiv Detail & Related papers (2020-03-30T16:18:16Z)
- Deep Reinforcement Learning for FlipIt Security Game [2.0624765454705654]
We describe a deep learning model in which agents adapt to different classes of opponents and learn the optimal counter-strategy.
We apply our model to FlipIt, a two-player security game in which both players, the attacker and the defender, compete for ownership of a shared resource.
Our model is a deep neural network combined with Q-learning and is trained to maximize the defender's time of ownership of the resource.
arXiv Detail & Related papers (2020-02-28T18:26:24Z)
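The FlipIt entry above describes a deep network trained with Q-learning to maximize the defender's time of ownership. As a much smaller stand-in, the toy below runs a tabular Q-learning defender in a simplified discrete-time FlipIt against an assumed periodic attacker; the flip cost, learning constants, and attacker period are invented for illustration.

```python
import random
from collections import defaultdict

# Toy discrete-time FlipIt with a tabular Q-learning defender (a stand-in for
# the paper's deep network); the attacker and all constants are illustrative.
FLIP_COST, STEPS, MAX_GAP = 4.0, 200, 20
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ATTACKER_PERIOD = 7                     # assumed periodic attacker strategy

Q = defaultdict(lambda: [0.0, 0.0])     # state: steps since defender last flipped

def run_episode():
    owner, since_flip, total = "defender", 0, 0.0
    for t in range(STEPS):
        state = min(since_flip, MAX_GAP)
        if random.random() < EPSILON:                   # epsilon-greedy action
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: Q[state][a])
        if action == 1:                                 # defender flips
            owner, since_flip = "defender", 0
        if (t + 1) % ATTACKER_PERIOD == 0:              # attacker flips on schedule
            owner = "attacker"
        # Reward: own the resource this step, minus the cost of flipping.
        reward = (1.0 if owner == "defender" else 0.0) - FLIP_COST * action
        since_flip += 1
        next_state = min(since_flip, MAX_GAP)
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        total += reward
    return total

for _ in range(2000):
    run_episode()
```

The tabular learner is only meant to make the ownership-versus-flip-cost trade-off concrete; it does not attempt to reproduce the paper's model or results.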
- Learning to Resolve Alliance Dilemmas in Many-Player Zero-Sum Games [22.38765498549914]
We argue that a systematic study of many-player zero-sum games is a crucial element of artificial intelligence research.
Using symmetric zero-sum matrix games, we demonstrate formally that alliance formation may be seen as a social dilemma.
We show how reinforcement learning may be augmented with a peer-to-peer contract mechanism to discover and enforce alliances.
arXiv Detail & Related papers (2020-02-27T10:32:31Z)
- Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)
- Signaling in Bayesian Network Congestion Games: the Subtle Power of Symmetry [66.82463322411614]
The paper focuses on the problem of optimal ex ante persuasive signaling schemes, showing that symmetry is a crucial property for its solution.
We show that an optimal ex ante persuasive scheme can be computed in polynomial time when players are symmetric and have affine cost functions.
arXiv Detail & Related papers (2020-02-12T19:38:15Z)
- Inducing Cooperative behaviour in Sequential-Social dilemmas through Multi-Agent Reinforcement Learning using Status-Quo Loss [16.016452248865132]
In social dilemma situations, individual rationality leads to sub-optimal group outcomes.
Deep Reinforcement Learning agents trained to optimize individual rewards converge to selfish, mutually harmful behavior.
We show how agents trained with SQLoss evolve cooperative behavior in several social dilemma matrix games.
arXiv Detail & Related papers (2020-01-15T18:10:46Z)