Strategies of cooperation and defection in five large language models
- URL: http://arxiv.org/abs/2601.09849v1
- Date: Wed, 14 Jan 2026 20:13:23 GMT
- Title: Strategies of cooperation and defection in five large language models
- Authors: Saptarshi Pal, Abhishek Mallela, Christian Hilbe, Lenz Pracher, Chiyu Wei, Feng Fu, Santiago Schnell, Martin A. Nowak
- Abstract summary: Large language models (LLMs) are increasingly deployed to support human decision-making. This paper explores whether five leading models produce sensible strategies in the repeated prisoner's dilemma. Our experiments shed light on how current LLMs instantiate reciprocal cooperation.
- Score: 1.0249415982296137
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large language models (LLMs) are increasingly deployed to support human decision-making. This use of LLMs has concerning implications, especially when their prescriptions affect the welfare of others. To gauge how LLMs make social decisions, we explore whether five leading models produce sensible strategies in the repeated prisoner's dilemma, which is the main metaphor of reciprocal cooperation. First, we measure the propensity of LLMs to cooperate in a neutral setting, without using language reminiscent of how this game is usually presented. We record to what extent LLMs implement Nash equilibria or other well-known strategy classes. Thereafter, we explore how LLMs adapt their strategies to changes in parameter values. We vary the game's continuation probability, the payoff values, and whether the total number of rounds is commonly known. We also study the effect of different framings. In each case, we test whether the adaptations of the LLMs are in line with basic intuition, theoretical predictions of evolutionary game theory, and experimental evidence from human participants. While all LLMs perform well in many of the tasks, none of them exhibit full consistency over all tasks. We also conduct tournaments between the inferred LLM strategies and study direct interaction between LLMs in games over ten rounds with a known or unknown last round. Our experiments shed light on how current LLMs instantiate reciprocal cooperation.
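To make the setup concrete, below is a minimal Python sketch of a repeated prisoner's dilemma with a stochastic continuation probability and three classic strategies (Always Defect, Tit-for-Tat, Grim Trigger). The payoff values are the conventional ones (T=5, R=3, P=1, S=0) and are assumptions for illustration only; the paper itself varies both the payoffs and the continuation probability.

```python
import random

# Conventional prisoner's dilemma payoffs (illustrative; the paper varies these).
# PAYOFF[(my_move, their_move)] is my payoff; C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_defect(my_hist, their_hist):
    return "D"

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def grim_trigger(my_hist, their_hist):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in their_hist else "C"

def play_match(s1, s2, delta=0.9, rng=random.Random(0)):
    """Repeated game: after each round, continue with probability delta."""
    h1, h2, p1, p2 = [], [], 0, 0
    while True:
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1 += PAYOFF[(m1, m2)]
        p2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
        if rng.random() > delta:  # the game ends here
            return p1, p2

# Round-robin tournament over the three strategies, including self-play.
strategies = {"ALLD": always_defect, "TFT": tit_for_tat, "GRIM": grim_trigger}
for n1, f1 in strategies.items():
    for n2, f2 in strategies.items():
        print(n1, "vs", n2, "->", play_match(f1, f2))
```

With a continuation probability close to 1, reciprocal strategies such as Tit-for-Tat sustain mutual cooperation against each other while resisting exploitation by Always Defect, which is the baseline intuition the paper's parameter-variation experiments test.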
Related papers
- Corrupted by Reasoning: Reasoning Language Models Become Free-Riders in Public Goods Games [87.5673042805229]
How large language models balance self-interest and collective well-being is a critical challenge for ensuring alignment, robustness, and safe deployment.
We adapt a public goods game with institutional choice from behavioral economics, allowing us to observe how different LLMs navigate social dilemmas.
Surprisingly, we find that reasoning LLMs, such as the o1 series, struggle significantly with cooperation.
arXiv Detail & Related papers (2025-06-29T15:02:47Z)
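For context on the social dilemma in the entry above, here is a minimal sketch of a linear public goods game in Python. The endowment, group size, and multiplication factor are arbitrary illustrative assumptions, and the paper's institutional-choice mechanism is not modeled here.

```python
def public_goods_payoffs(contributions, endowment=10.0, r=1.6):
    """Linear public goods game: contributions are multiplied by r and
    shared equally; free-riders keep their endowment and still get a share."""
    n = len(contributions)
    share = r * sum(contributions) / n
    return [endowment - c + share for c in contributions]

# Three full cooperators and one free-rider: the defector earns the most,
# which is the free-riding incentive studied in the entry above.
print(public_goods_payoffs([10, 10, 10, 0]))
# -> [12.0, 12.0, 12.0, 22.0]
```

Because 1 < r < n, full defection is individually dominant while full contribution maximizes group welfare, which is what makes the game a social dilemma.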
- Scoring with Large Language Models: A Study on Measuring Empathy of Responses in Dialogues [3.2162648244439684]
We develop a framework for investigating how effective Large Language Models are at measuring and scoring empathy of responses in dialogues.
Our strategy is to approximate the performance of state-of-the-art and fine-tuned LLMs with explicit and explainable features.
Our results show that when only using embeddings, it is possible to achieve performance close to that of generic LLMs.
arXiv Detail & Related papers (2024-12-28T20:37:57Z)
- WALL-E: World Alignment by Rule Learning Improves World Model-based LLM Agents [55.64361927346957]
We propose a neurosymbolic approach that learns rules gradient-free through large language models (LLMs).
Our embodied LLM agent "WALL-E" is built upon model-predictive control (MPC).
On open-world challenges in Minecraft and ALFWorld, WALL-E achieves higher success rates than existing methods.
arXiv Detail & Related papers (2024-10-09T23:37:36Z)
- LLMs May Not Be Human-Level Players, But They Can Be Testers: Measuring Game Difficulty with LLM Agents [10.632179121247466]
We propose a general game-testing framework using LLM agents and test it on two widely played strategy games: Wordle and Slay the Spire.
Our results reveal an interesting finding: although LLMs may not perform as well as the average human player, their performance, when guided by simple, generic prompting techniques, shows a statistically significant and strong correlation with the difficulty indicated by human players.
This suggests that LLMs could serve as effective agents for measuring game difficulty during the development process.
arXiv Detail & Related papers (2024-10-01T18:40:43Z)
- Cognitive phantoms in LLMs through the lens of latent variables [0.3441021278275805]
Large language models (LLMs) increasingly reach real-world applications, necessitating a better understanding of their behaviour.
Recent studies administering psychometric questionnaires to LLMs report human-like traits that could potentially influence their behaviour.
This approach suffers from a validity problem: it presupposes that these traits exist in LLMs and that they are measurable with tools designed for humans.
This study investigates this problem by comparing latent structures of personality between humans and three LLMs using two validated personality questionnaires.
arXiv Detail & Related papers (2024-09-06T12:42:35Z)
- Large Language Models Playing Mixed Strategy Nash Equilibrium Games [1.060608983034705]
This paper focuses on the capabilities of Large Language Models to find the Nash equilibrium in games with a mixed strategy Nash equilibrium and no pure strategy Nash equilibrium.
The study reveals a significant enhancement in the performance of LLMs when they are equipped with the possibility to run code.
It is evident that while LLMs exhibit remarkable proficiency in well-known standard games, their performance dwindles when faced with slight modifications of the same games.
arXiv Detail & Related papers (2024-06-15T09:30:20Z)
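To illustrate the solution concept in the entry above, the following sketch solves a 2x2 zero-sum game with no pure strategy equilibrium via the indifference condition. The payoff matrix is an illustrative assumption, not one of the games tested in that paper.

```python
def mixed_ne_2x2_zero_sum(A):
    """Mixed Nash equilibrium of a 2x2 zero-sum game via indifference.
    A[i][j] is the row player's payoff; the column player gets -A[i][j].
    Assumes no pure equilibrium exists (so the denominators are nonzero)."""
    (a, b), (c, d) = A
    p = (d - c) / (a - b - c + d)  # prob. the row player plays row 0
    q = (d - b) / (a - c - b + d)  # prob. the column player plays column 0
    value = q * (p * a + (1 - p) * c) + (1 - q) * (p * b + (1 - p) * d)
    return p, q, value

# Matching pennies with asymmetric stakes: no pure equilibrium exists,
# so each player must randomize to leave the opponent indifferent.
print(mixed_ne_2x2_zero_sum([[2, -1], [-1, 1]]))
# -> (0.4, 0.4, 0.2)
```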
- GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations [87.99872683336395]
Large Language Models (LLMs) are integrated into critical real-world applications.
This paper evaluates LLMs' reasoning abilities in competitive environments.
We first propose GTBench, a language-driven environment comprising 10 widely recognized tasks.
arXiv Detail & Related papers (2024-02-19T18:23:36Z)
- See the Unseen: Better Context-Consistent Knowledge-Editing by Noises [73.54237379082795]
Knowledge-editing updates the knowledge of large language models (LLMs).
Existing works ignore the context-dependence of knowledge recall, so their edits lack generalization.
We empirically find that the effects of different contexts upon LLMs in recalling the same knowledge follow a Gaussian-like distribution.
arXiv Detail & Related papers (2024-01-15T09:09:14Z)
- Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves [57.974103113675795]
We present a method named 'Rephrase and Respond' (RaR), which allows Large Language Models to rephrase and expand questions posed by humans.
RaR serves as a simple yet effective prompting method for improving performance.
We show that RaR is complementary to the popular Chain-of-Thought (CoT) methods, both theoretically and empirically.
arXiv Detail & Related papers (2023-11-07T18:43:34Z)
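Going only by the summary above, a two-step RaR pipeline might look like the sketch below. Here `call_llm` is a hypothetical placeholder for whatever chat-completion client is available, and the prompt wording paraphrases the idea rather than quoting the paper.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: swap in a real chat-completion client.
    Echoes a canned reply so the sketch runs end to end."""
    return f"[model reply to: {prompt[:60]}...]"

def rephrase_and_respond(question: str) -> str:
    # Step 1: ask the model to rephrase and expand the human's question.
    rephrased = call_llm(
        "Rephrase and expand the following question so that it is "
        f"unambiguous, then state the improved question:\n{question}"
    )
    # Step 2: answer the model's own improved version of the question.
    return call_llm(f"Answer the following question:\n{rephrased}")

print(rephrase_and_respond("Was Abraham Lincoln born in an even month?"))
```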
- Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation [109.8527403904657]
We show that large language models (LLMs) possess unwavering confidence in their knowledge and cannot handle the conflict between internal and external knowledge well.
Retrieval augmentation proves to be an effective approach in enhancing LLMs' awareness of knowledge boundaries.
We propose a simple judgement strategy for dynamically utilizing supporting documents.
arXiv Detail & Related papers (2023-07-20T16:46:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.