Putting the Con in Context: Identifying Deceptive Actors in the Game of Mafia
- URL: http://arxiv.org/abs/2207.02253v1
- Date: Tue, 5 Jul 2022 18:29:27 GMT
- Title: Putting the Con in Context: Identifying Deceptive Actors in the Game of Mafia
- Authors: Samee Ibraheem, Gaoyue Zhou, and John DeNero
- Abstract summary: We analyze the effect of speaker role on language use through the game of Mafia.
We show that classification models are able to rank deceptive players as more suspicious than honest ones.
We present methods for using our trained models to identify features that distinguish between player roles.
- Score: 4.215251065887862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While neural networks demonstrate a remarkable ability to model linguistic
content, capturing contextual information related to a speaker's conversational
role is an open area of research. In this work, we analyze the effect of
speaker role on language use through the game of Mafia, in which participants
are assigned either an honest or a deceptive role. In addition to building a
framework to collect a dataset of Mafia game records, we demonstrate that there
are differences in the language produced by players with different roles. We
confirm that classification models are able to rank deceptive players as more
suspicious than honest ones based only on their use of language. Furthermore,
we show that training models on two auxiliary tasks outperforms a standard
BERT-based text classification approach. We also present methods for using our
trained models to identify features that distinguish between player roles,
which could be used to assist players during the Mafia game.
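The classification setup described in the abstract can be pictured with a short sketch. This is a hypothetical illustration, not the authors' released code: the bert-base-uncased checkpoint, the two-way honest/deceptive label scheme, and the per-player averaging below are assumptions, and the paper's stronger models additionally train on two auxiliary tasks.

```python
# Hedged sketch of a BERT-based suspicion ranker. Checkpoint, label order,
# and player-level averaging are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # label 0 = honest, 1 = deceptive
model.eval()

def suspicion_score(utterances):
    """Mean P(deceptive) over one player's utterances."""
    batch = tokenizer(utterances, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return torch.softmax(logits, dim=-1)[:, 1].mean().item()

# Rank players as more or less suspicious from their language alone.
players = {
    "p1": ["I was in the library all night, ask anyone."],
    "p2": ["Honestly, I think we should just trust each other."],
}
ranking = sorted(players, key=lambda p: suspicion_score(players[p]),
                 reverse=True)
print(ranking)  # most suspicious first
```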
Related papers
- Understanding Players as if They Are Talking to the Game in a Customized Language: A Pilot Study [3.4333699338998693]
This pilot study explores the application of language models (LMs) to model game event sequences.
We transform raw event data into textual sequences and pretrain a Longformer model on this data.
The results demonstrate the potential of self-supervised LMs in enhancing game design and personalization without relying on ground-truth labels.
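As a rough illustration of that pipeline (the event fields, checkpoint, and one-step loop below are assumptions, not the study's actual setup), events can be serialized to text and fed to a Longformer under a masked-language-modeling objective:

```python
# Hypothetical sketch: serialize raw game events as text, then pretrain a
# Longformer with masked-language modeling (self-supervised, no labels).
from transformers import (AutoTokenizer, LongformerForMaskedLM,
                          DataCollatorForLanguageModeling)

events = [{"t": 12.4, "actor": "p1", "action": "open_chest"},
          {"t": 13.0, "actor": "p1", "action": "equip_sword"}]
text = " ".join(f"<{e['t']:.1f}> {e['actor']} {e['action']}" for e in events)

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForMaskedLM.from_pretrained("allenai/longformer-base-4096")
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

batch = collator([tokenizer(text, truncation=True)])
loss = model(**batch).loss  # MLM loss; minimize over many event sequences
loss.backward()
```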
arXiv Detail & Related papers (2024-10-24T09:59:10Z)
- What if Red Can Talk? Dynamic Dialogue Generation Using Large Language Models [0.0]
We introduce a dialogue filler framework that utilizes large language models (LLMs) to generate dynamic and contextually appropriate character interactions.
We test this framework within the environments of Final Fantasy VII Remake and Pokemon.
This study aims to assist developers in crafting more nuanced filler dialogues, thereby enriching player immersion and enhancing the overall RPG experience.
arXiv Detail & Related papers (2024-07-29T19:12:18Z)
- Capturing Minds, Not Just Words: Enhancing Role-Playing Language Models with Personality-Indicative Data [58.92110996840019]
We propose to enhance role-playing language models (RPLMs) via personality-indicative data.
Specifically, we leverage questions from psychological scales and distill advanced role-playing agents (RPAs) to generate dialogues that capture the minds of characters.
Experimental results validate that RPLMs trained with our dataset exhibit advanced role-playing capabilities for both general and personality-related evaluations.
arXiv Detail & Related papers (2024-06-27T06:24:00Z)
- Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment [62.898963074989766]
We introduce Ditto, a self-alignment method for role-play.
This method creates a role-play training set comprising 4,000 characters, surpassing the scale of currently available datasets by tenfold.
We present the first comprehensive cross-supervision alignment experiment in the role-play domain.
arXiv Detail & Related papers (2024-01-23T03:56:22Z)
- Modeling Cross-Cultural Pragmatic Inference with Codenames Duet [40.52354928048333]
This paper introduces the Cultural Codes dataset, which operationalizes sociocultural pragmatic inference in a simple word reference game.
Our dataset consists of 794 games with 7,703 turns, distributed across 153 unique players.
Our experiments show that accounting for background characteristics significantly improves model performance for tasks related to clue giving and guessing.
arXiv Detail & Related papers (2023-06-04T20:47:07Z)
- About latent roles in forecasting players in team sports [47.066729480128856]
Team sports contain a significant social component that influences interactions between teammates and opponents.
We create RolFor, a novel end-to-end model for Role-based Forecasting.
arXiv Detail & Related papers (2023-04-17T13:33:23Z)
- Werewolf Among Us: A Multimodal Dataset for Modeling Persuasion Behaviors in Social Deduction Games [45.55448048482881]
We introduce the first multimodal dataset for modeling persuasion behaviors.
Our dataset includes 199 dialogue transcriptions and videos, 26,647 utterance-level annotations of persuasion strategy, and game-level annotations of deduction game outcomes.
arXiv Detail & Related papers (2022-12-16T04:52:53Z)
- Probing Task-Oriented Dialogue Representation from Language Models [106.02947285212132]
This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representation for task-oriented dialogue tasks.
We fine-tune a feed-forward layer as the classifier probe on top of a fixed pre-trained language model with annotated labels in a supervised way.
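That probing recipe is simple enough to sketch. The block below is a hypothetical illustration assuming a frozen BERT encoder, [CLS] pooling, and a made-up four-way label set; the paper itself probes several pre-trained LMs and tasks.

```python
# Hedged sketch of a classifier probe: freeze the pre-trained LM and train
# only a feed-forward layer on its [CLS] vector with supervised labels.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
for p in encoder.parameters():
    p.requires_grad = False  # the LM stays fixed; only the probe learns

probe = nn.Linear(encoder.config.hidden_size, 4)  # e.g. 4 dialogue-act labels
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

batch = tokenizer(["book me a table for two"], return_tensors="pt")
with torch.no_grad():
    cls = encoder(**batch).last_hidden_state[:, 0]  # [CLS] representation
loss = nn.functional.cross_entropy(probe(cls), torch.tensor([2]))
loss.backward()
optimizer.step()
```

A probe this small tells us how much task-relevant information the fixed representation already carries, since the probe itself has little capacity to add.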
arXiv Detail & Related papers (2020-10-26T21:34:39Z)
- Filling the Gap of Utterance-aware and Speaker-aware Representation for Multi-turn Dialogue [76.88174667929665]
A multi-turn dialogue is composed of multiple utterances from two or more different speaker roles.
In existing retrieval-based multi-turn dialogue modeling, pre-trained language models (PrLMs) used as encoders represent dialogues only coarsely.
We propose a novel model that fills this gap by capturing the utterance-aware and speaker-aware representations entailed in a dialogue history.
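One minimal way to picture "speaker-aware" inputs (an illustrative assumption, not this paper's architecture) is to add a learned speaker-role embedding to each token embedding before encoding:

```python
# Hypothetical sketch: add speaker-role embeddings to token embeddings so the
# encoder sees who said each token. Vocabulary/hidden sizes are assumptions.
import torch
import torch.nn as nn

class SpeakerAwareEmbedding(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, num_speakers=2):
        super().__init__()
        self.tokens = nn.Embedding(vocab_size, hidden)
        self.speakers = nn.Embedding(num_speakers, hidden)

    def forward(self, input_ids, speaker_ids):
        # speaker_ids tags each token with the role that produced it
        return self.tokens(input_ids) + self.speakers(speaker_ids)

embed = SpeakerAwareEmbedding()
ids = torch.tensor([[101, 2129, 2024, 2017, 102]])
roles = torch.tensor([[0, 0, 0, 1, 1]])
print(embed(ids, roles).shape)  # torch.Size([1, 5, 768])
```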
arXiv Detail & Related papers (2020-09-14T15:07:19Z)
- Disentangled Speech Embeddings using Cross-modal Self-supervision [119.94362407747437]
We develop a self-supervised learning objective that exploits the natural cross-modal synchrony between faces and audio in video.
We construct a two-stream architecture that (1) shares low-level features common to both representations and (2) provides a natural mechanism for explicitly disentangling linguistic content from speaker identity.
arXiv Detail & Related papers (2020-02-20T14:13:12Z)
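The two-stream idea in the last entry can be sketched as a shared trunk feeding two heads, one per factor. Layer shapes and the content/speaker split below are assumptions for illustration, not the paper's actual network.

```python
# Hedged sketch of a two-stream disentangling encoder: a shared low-level
# trunk, then separate "content" and "speaker identity" streams.
import torch
import torch.nn as nn

class TwoStreamAudioEncoder(nn.Module):
    def __init__(self, n_mels=40, dim=256):
        super().__init__()
        self.trunk = nn.Sequential(          # low-level features shared by both
            nn.Conv1d(n_mels, 128, 5, stride=2), nn.ReLU(),
            nn.Conv1d(128, 128, 5, stride=2), nn.ReLU())
        self.content_head = nn.Conv1d(128, dim, 3, padding=1)  # frame-level: what is said
        self.speaker_head = nn.Sequential(   # pooled over time: who is speaking
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(128, dim))

    def forward(self, mel):                  # mel: (batch, n_mels, frames)
        h = self.trunk(mel)
        return self.content_head(h), self.speaker_head(h)

enc = TwoStreamAudioEncoder()
content, speaker = enc(torch.randn(2, 40, 200))
print(content.shape, speaker.shape)  # (2, 256, 47) and (2, 256)
```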