Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
- URL: http://arxiv.org/abs/2309.04658v2
- Date: Sat, 11 May 2024 07:08:16 GMT
- Title: Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf
- Authors: Yuzhuang Xu, Shuo Wang, Peng Li, Fuwen Luo, Xiaolong Wang, Weidong Liu, Yang Liu
- Abstract summary: We propose a tuning-free framework to engage large language models in communication games.
An empirical study on the representative and widely-studied communication game "Werewolf" demonstrates that our framework can effectively play the Werewolf game without tuning the parameters of the LLMs.
- Score: 19.39740531672788
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Communication games, which we refer to as incomplete information games that heavily depend on natural language communication, hold significant research value in fields such as economics, social science, and artificial intelligence. In this work, we explore the problem of how to engage large language models (LLMs) in communication games, and in response, propose a tuning-free framework. Our approach keeps the LLMs frozen and relies on retrieval of, and reflection on, past communications and experiences for improvement. An empirical study on the representative and widely-studied communication game "Werewolf" demonstrates that our framework can effectively play the Werewolf game without tuning the parameters of the LLMs. More importantly, strategic behaviors begin to emerge in our experiments, suggesting that it will be a fruitful journey to engage LLMs in communication games and associated domains.
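To make the abstract's description concrete, here is a minimal sketch of what a tuning-free retrieval-and-reflection loop could look like. The `chat()` callable, the recency-plus-keyword retrieval heuristic, and the prompt wording are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a tuning-free retrieval-and-reflection agent.
# Assumptions (not from the paper): `chat` wraps a frozen LLM API call;
# retrieval is a simple recency + keyword heuristic.
from dataclasses import dataclass, field

@dataclass
class WerewolfAgent:
    role: str                                        # e.g. "villager", "seer", "werewolf"
    history: list = field(default_factory=list)      # all messages observed so far
    experience: list = field(default_factory=list)   # reflections carried over from past games

    def retrieve(self, k: int = 5) -> list:
        """Return the k most recent messages plus older ones that mention this role."""
        recent = self.history[-k:]
        relevant = [m for m in self.history[:-k] if self.role in m.lower()]
        return relevant + recent

    def reflect(self, chat) -> str:
        """Ask the frozen LLM to summarize what it can currently infer."""
        context = "\n".join(self.retrieve())
        return chat(f"You are the {self.role} in Werewolf. Given these messages:\n"
                    f"{context}\nSummarize what you can infer about the other players.")

    def speak(self, chat) -> str:
        """Generate the next utterance conditioned on retrieval, reflection, and experience."""
        reflection = self.reflect(chat)
        utterance = chat(f"You are the {self.role} in Werewolf.\n"
                         f"Current reasoning: {reflection}\n"
                         f"Experience from past games: {'; '.join(self.experience) or 'none'}\n"
                         "Write your next statement to the group.")
        self.history.append(f"me: {utterance}")
        return utterance
```

Because the model weights stay frozen, any improvement must come from what is placed in the context window: which past messages are retrieved and which reflections and cross-game experiences are appended.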
Related papers
- Enhancing Dialogue Generation in Werewolf Game Through Situation Analysis and Persuasion Strategies [1.7725414095035827]
This paper introduces an LLM-based Werewolf Game AI, where each role is supported by situation analysis to aid response generation.
Various persuasion strategies are employed to convince other players to align with its actions.
arXiv Detail & Related papers (2024-08-29T14:49:13Z)
- Werewolf Arena: A Case Study in LLM Evaluation via Social Deduction [3.350801757799469]
Werewolf Arena is a framework for evaluating large language models (LLMs).
In Werewolf Arena, LLMs compete against each other, navigating the game's complex dynamics of deception, deduction, and persuasion.
We demonstrate Werewolf Arena's utility through an arena-style tournament featuring Gemini and GPT models.
arXiv Detail & Related papers (2024-07-18T23:41:05Z)
- Learning to Discuss Strategically: A Case Study on One Night Ultimate Werewolf [28.57358844115881]
As a variant of the famous communication game Werewolf, One Night Ultimate Werewolf (ONUW) requires players to develop strategic discussion policies.
We propose an RL-instructed language agent framework, where a discussion policy trained by reinforcement learning (RL) is employed to determine appropriate discussion tactics to adopt.
arXiv Detail & Related papers (2024-05-30T11:07:06Z)
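As a rough illustration of the RL-instructed design summarized above, the sketch below pairs a small learned tactic selector with a frozen LLM that verbalizes the chosen tactic. The tactic names, feature vector, and linear policy are assumptions for illustration; the paper's RL-trained discussion policy may be structured differently.

```python
# Illustrative only: a tiny policy picks a discussion tactic, and a frozen LLM
# turns the tactic into an utterance. In the paper's setting the policy would be
# trained with RL on game outcomes rather than randomly initialized.
import numpy as np

TACTICS = ["accuse", "defend", "stay_quiet", "reveal_information"]

class DiscussionPolicy:
    def __init__(self, n_features: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(len(TACTICS), n_features))  # one row per tactic

    def choose(self, state_features: np.ndarray) -> str:
        logits = self.weights @ state_features
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return TACTICS[int(np.random.choice(len(TACTICS), p=probs))]

def speak(policy: DiscussionPolicy, state_features: np.ndarray, role: str, chat) -> str:
    tactic = policy.choose(state_features)
    return chat(f"You are the {role} in One Night Ultimate Werewolf. "
                f"Adopt the tactic '{tactic}' and write your next statement.")
```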
- Enhance Reasoning for Large Language Models in the Game Werewolf [15.730860371636336]
This paper presents an innovative framework that integrates Large Language Models (LLMs) with an external Thinker module.
Our framework is presented using a 9-player Werewolf game that demands dual-system reasoning.
Experiments demonstrate the framework's effectiveness in deductive reasoning, speech generation, and online game evaluation.
arXiv Detail & Related papers (2024-02-04T03:47:10Z)
- States as Strings as Strategies: Steering Language Models with Game-Theoretic Solvers [44.64118885012762]
A suitable model of the players, strategies, and payoffs associated with linguistic interactions would enable existing game-theoretic algorithms to provide strategic solutions in the space of language.
We present one possible binding from dialogue to game theory as well as generalizations of existing equilibrium finding algorithms to this setting.
arXiv Detail & Related papers (2024-01-24T22:22:00Z)
- Think Before You Speak: Cultivating Communication Skills of Large Language Models via Inner Monologue [73.69510478736483]
Large language models (LLMs) can generate fluent, coherent, and diverse responses.
However, they lack a crucial ability: communication skills.
This article aims to empower LLMs with communication skills through inner monologues.
Experimental results show that the proposed CSIM strategy improves the backbone models and outperforms the baselines.
arXiv Detail & Related papers (2023-11-13T16:19:42Z)
- ALYMPICS: LLM Agents Meet Game Theory -- Exploring Strategic Decision-Making with AI Agents [77.34720446306419]
Alympics is a systematic simulation framework utilizing Large Language Model (LLM) agents for game theory research.
Alympics creates a versatile platform for studying complex game theory problems.
arXiv Detail & Related papers (2023-11-06T16:03:46Z)
- Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models [105.39236338147715]
The paper is inspired by the popular language game "Who is Spy".
We develop DEEP to evaluate LLMs' expression and disguising abilities.
We then introduce SpyGame, an interactive multi-agent framework.
arXiv Detail & Related papers (2023-10-31T14:37:42Z)
- Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation), which lets models debate through embeddings rather than natural-language tokens.
By deviating from natural language, CIPHER offers an advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
arXiv Detail & Related papers (2023-10-10T03:06:38Z)
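One way to read the summary above: instead of sampling a discrete token, an agent can transmit the probability-weighted mixture of token embeddings, so the message retains more of the model's uncertainty. The sketch below illustrates that idea, assuming access to the next-token distribution and the embedding matrix; it is an interpretation for illustration, not necessarily the paper's exact protocol.

```python
# Illustrative sketch: send the expected token embedding instead of a sampled token.
# Access to `probs` (next-token distribution) and `embedding_matrix` is assumed.
import numpy as np

def cipher_message(probs: np.ndarray, embedding_matrix: np.ndarray) -> np.ndarray:
    """probs: (vocab,), sums to 1; embedding_matrix: (vocab, dim) -> returns (dim,)."""
    return probs @ embedding_matrix

# Toy usage with random numbers standing in for real model outputs.
vocab, dim = 1000, 64
rng = np.random.default_rng(0)
E = rng.normal(size=(vocab, dim))
p = rng.random(vocab)
p /= p.sum()
message = cipher_message(p, E)   # dense vector passed to the other agent
print(message.shape)             # (64,)
```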
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose SPRING, a novel approach that reads the game's original academic paper and uses the learned knowledge to reason and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z)
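A rough sketch of the prompting pattern the SPRING summary describes: supply excerpts from the game's paper as context and ask the model to reason step by step before committing to an action. The prompt layout and the `chat()` callable are illustrative assumptions rather than SPRING's actual question-answering structure.

```python
# Illustrative sketch: chain-of-thought prompting over an excerpt of the game's paper.
# The prompt wording and the `chat` callable are assumptions for illustration.
def choose_action(chat, paper_excerpt: str, observation: str, actions: list) -> str:
    prompt = (
        "You are playing an open-world survival game.\n"
        f"Relevant excerpt from the game's paper:\n{paper_excerpt}\n\n"
        f"Current observation:\n{observation}\n\n"
        f"Available actions: {', '.join(actions)}\n"
        "Think step by step about which action best advances your goals, "
        "then end with a line 'ACTION: <one of the available actions>'."
    )
    reply = chat(prompt)
    for line in reversed(reply.splitlines()):
        if line.startswith("ACTION:"):
            return line.removeprefix("ACTION:").strip()
    return actions[0]  # fall back if the model does not follow the format
```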