Can Large Language Models Serve as Rational Players in Game Theory? A Systematic Analysis
- URL: http://arxiv.org/abs/2312.05488v2
- Date: Tue, 12 Dec 2023 18:32:39 GMT
- Title: Can Large Language Models Serve as Rational Players in Game Theory? A Systematic Analysis
- Authors: Caoyun Fan, Jindou Chen, Yaohui Jin, Hao He
- Abstract summary: This study systematically analyzes Large Language Models (LLMs) in the context of game theory.
Experiments indicate that even the current state-of-the-art LLM exhibits substantial disparities compared to humans in game theory.
- Score: 16.285154752969717
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Game theory, as an analytical tool, is frequently utilized to analyze human
behavior in social science research. With the high alignment between the
behavior of Large Language Models (LLMs) and humans, a promising research
direction is to employ LLMs as substitutes for humans in game experiments,
enabling social science research. However, despite numerous empirical studies combining LLMs and game theory, the capability boundaries of LLMs in game theory remain unclear. In this research, we endeavor
to systematically analyze LLMs in the context of game theory. Specifically,
rationality, as the fundamental principle of game theory, serves as the metric
for evaluating players' behavior -- building a clear desire, refining belief
about uncertainty, and taking optimal actions. Accordingly, we select three
classical games (dictator game, Rock-Paper-Scissors, and ring-network game) to
analyze to what extent LLMs can achieve rationality in these three aspects. The
experimental results indicate that even the current state-of-the-art LLM
(GPT-4) exhibits substantial disparities compared to humans in game theory. For
instance, LLMs struggle to build desires based on uncommon preferences, fail to
refine belief from many simple patterns, and may overlook or modify refined
belief when taking actions. Therefore, we consider that introducing LLMs into
game experiments in the field of social science should be approached with
greater caution.
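The abstract frames rationality as three steps: building a clear desire (the payoff one cares about), refining a belief about uncertainty, and taking the action that is optimal under that belief. As a rough illustration only, the sketch below checks the last step for Rock-Paper-Scissors; the payoff convention, function names, and example belief are assumptions made here for illustration, not the paper's actual evaluation protocol.

```python
# A minimal sketch (not the paper's evaluation protocol) of the last step of
# rationality: given a refined belief over the opponent's next move, a rational
# player should take the action with the highest expected payoff.
from typing import Dict

ACTIONS = ("rock", "paper", "scissors")

# Payoff for the acting player: +1 win, 0 tie, -1 loss (an assumed convention).
PAYOFF = {
    ("rock", "scissors"): 1, ("scissors", "paper"): 1, ("paper", "rock"): 1,
    ("rock", "rock"): 0, ("paper", "paper"): 0, ("scissors", "scissors"): 0,
    ("scissors", "rock"): -1, ("paper", "scissors"): -1, ("rock", "paper"): -1,
}

def expected_payoff(action: str, belief: Dict[str, float]) -> float:
    """Expected payoff of `action` under a belief over the opponent's move."""
    return sum(prob * PAYOFF[(action, opp)] for opp, prob in belief.items())

def is_best_response(action: str, belief: Dict[str, float]) -> bool:
    """True if `action` maximizes expected payoff under the stated belief."""
    best = max(ACTIONS, key=lambda a: expected_payoff(a, belief))
    return expected_payoff(action, belief) == expected_payoff(best, belief)

# Example: if the refined belief says the opponent plays rock 70% of the time,
# "paper" is the rational reply; answering "scissors" would indicate that the
# refined belief was overlooked or modified when acting.
belief = {"rock": 0.7, "paper": 0.2, "scissors": 0.1}
print(is_best_response("paper", belief))     # True
print(is_best_response("scissors", belief))  # False
```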
Related papers
- Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina [7.155982875107922]
Studies suggest large language models (LLMs) can exhibit human-like reasoning, aligning with human behavior in economic experiments, surveys, and political discourse.
This has led many to propose that LLMs can be used as surrogates for humans in social science research.
We assess the reasoning depth of LLMs using the 11-20 money request game.
arXiv Detail & Related papers (2024-10-25T14:46:07Z)
- Nicer Than Humans: How do Large Language Models Behave in the Prisoner's Dilemma? [0.1474723404975345]
We study the cooperative behavior of Llama2 when playing the Iterated Prisoner's Dilemma against random adversaries displaying various levels of hostility.
We find that Llama2 tends not to initiate defection but adopts a cautious approach towards cooperation.
In comparison to prior research on human participants, Llama2 exhibits a greater inclination towards cooperative behavior.
arXiv Detail & Related papers (2024-06-19T14:51:14Z)
- Large Language Models Playing Mixed Strategy Nash Equilibrium Games [1.060608983034705]
This paper focuses on the capabilities of Large Language Models to find the Nash equilibrium in games with a mixed strategy Nash equilibrium and no pure strategy Nash equilibrium.
The study reveals a significant enhancement in the performance of LLMs when they are equipped with the possibility to run code.
It is evident that while LLMs exhibit remarkable proficiency in well-known standard games, their performance dwindles when faced with slight modifications of the same games (a toy mixed-strategy equilibrium computation is sketched after this list).
arXiv Detail & Related papers (2024-06-15T09:30:20Z)
- A Comprehensive Evaluation on Event Reasoning of Large Language Models [68.28851233753856]
How well LLMs accomplish event reasoning on various relations and reasoning paradigms remains unknown.
We introduce a novel benchmark EV2 for EValuation of EVent reasoning.
We find that LLMs have abilities to accomplish event reasoning but their performances are far from satisfactory.
arXiv Detail & Related papers (2024-04-26T16:28:34Z)
- LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models [52.03659714625452]
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But, can they really "reason" over the natural language?
This question has been receiving significant research attention and many reasoning skills such as commonsense, numerical, and qualitative have been studied.
arXiv Detail & Related papers (2024-04-23T21:08:49Z)
- GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations [87.99872683336395]
Large Language Models (LLMs) are integrated into critical real-world applications.
This paper evaluates LLMs' reasoning abilities in competitive environments.
We first propose GTBench, a language-driven environment comprising 10 widely recognized tasks.
arXiv Detail & Related papers (2024-02-19T18:23:36Z)
- How should the advent of large language models affect the practice of science? [51.62881233954798]
How should the advent of large language models affect the practice of science?
We have invited four diverse groups of scientists to reflect on this query, sharing their perspectives and engaging in debate.
arXiv Detail & Related papers (2023-12-05T10:45:12Z)
- ALYMPICS: LLM Agents Meet Game Theory -- Exploring Strategic Decision-Making with AI Agents [77.34720446306419]
Alympics is a systematic simulation framework utilizing Large Language Model (LLM) agents for game theory research.
Alympics creates a versatile platform for studying complex game theory problems.
arXiv Detail & Related papers (2023-11-06T16:03:46Z)
- MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks [49.60689355674541]
A rich literature in cognitive science has studied people's causal and moral intuitions.
This work has revealed a number of factors that systematically influence people's judgments.
We test whether large language models (LLMs) make causal and moral judgments about text-based scenarios that align with human participants.
arXiv Detail & Related papers (2023-10-30T15:57:32Z)
- Are LLMs the Master of All Trades?: Exploring Domain-Agnostic Reasoning Skills of LLMs [0.0]
This study aims to investigate the performance of large language models (LLMs) on different reasoning tasks.
My findings indicate that LLMs excel at analogical and moral reasoning, yet struggle to perform as proficiently on spatial reasoning tasks.
arXiv Detail & Related papers (2023-03-22T22:53:44Z)
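The "Large Language Models Playing Mixed Strategy Nash Equilibrium Games" entry above concerns games with no pure-strategy equilibrium, where a rational player must mix over actions. As a self-contained toy illustration (fictitious play is an assumed solver choice here; the abstract does not describe the cited paper's method), the sketch below recovers the uniform 1/3 equilibrium mixture of Rock-Paper-Scissors.

```python
# A toy sketch (assumptions, not the cited paper's method) of computing a
# mixed-strategy Nash equilibrium of a zero-sum game via fictitious play.
import numpy as np

# Row player's payoffs for Rock-Paper-Scissors: a zero-sum game whose only
# Nash equilibrium is the mixed strategy (1/3, 1/3, 1/3) for both players.
A = np.array([[ 0, -1,  1],   # rock     vs rock / paper / scissors
              [ 1,  0, -1],   # paper
              [-1,  1,  0]],  # scissors
             dtype=float)

def fictitious_play(A: np.ndarray, rounds: int = 100_000):
    """Each player best-responds to the opponent's empirical action frequencies.

    For two-player zero-sum games these frequencies converge to a Nash
    equilibrium (Robinson, 1951), even when play itself keeps cycling."""
    n_rows, n_cols = A.shape
    row_counts = np.ones(n_rows)  # uniform pseudo-counts to start
    col_counts = np.ones(n_cols)
    for _ in range(rounds):
        row_best = int(np.argmax(A @ (col_counts / col_counts.sum())))
        col_best = int(np.argmin((row_counts / row_counts.sum()) @ A))
        row_counts[row_best] += 1
        col_counts[col_best] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

row_mix, col_mix = fictitious_play(A)
print(np.round(row_mix, 2), np.round(col_mix, 2))  # both near [0.33 0.33 0.33]
```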