Game Theory Meets Large Language Models: A Systematic Survey
- URL: http://arxiv.org/abs/2502.09053v1
- Date: Thu, 13 Feb 2025 08:08:27 GMT
- Title: Game Theory Meets Large Language Models: A Systematic Survey
- Authors: Haoran Sun, Yusen Wu, Yukun Cheng, Xu Chu,
- Abstract summary: The rapid advancement of large language models (LLMs) has sparked extensive research exploring the intersection of game theory and LLMs. This paper presents a comprehensive survey of this intersection, exploring a bidirectional relationship from three perspectives. By bridging theoretical rigor with emerging AI capabilities, this survey aims to foster interdisciplinary collaboration and drive progress in this evolving research area.
- Score: 18.07120579043073
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Game theory establishes a fundamental framework for analyzing strategic interactions among rational decision-makers. The rapid advancement of large language models (LLMs) has sparked extensive research exploring the intersection of these two fields. Specifically, game-theoretic methods are being applied to evaluate and enhance LLM capabilities, while LLMs themselves are reshaping classic game models. This paper presents a comprehensive survey of the intersection of these fields, exploring a bidirectional relationship from three perspectives: (1) Establishing standardized game-based benchmarks for evaluating LLM behavior; (2) Leveraging game-theoretic methods to improve LLM performance through algorithmic innovations; (3) Characterizing the societal impacts of LLMs through game modeling. Among these three aspects, we also highlight how the equilibrium analysis for traditional game models is impacted by LLMs' advanced language understanding, which in turn extends the study of game theory. Finally, we identify key challenges and future research directions, assessing their feasibility based on the current state of the field. By bridging theoretical rigor with emerging AI capabilities, this survey aims to foster interdisciplinary collaboration and drive progress in this evolving research area.
Related papers
- A Call for New Recipes to Enhance Spatial Reasoning in MLLMs [85.67171333213301]
Multimodal Large Language Models (MLLMs) have demonstrated impressive performance in general vision-language tasks.
Recent studies have exposed critical limitations in their spatial reasoning capabilities.
This deficiency in spatial reasoning significantly constrains MLLMs' ability to interact effectively with the physical world.
arXiv Detail & Related papers (2025-04-21T11:48:39Z) - Empowering LLMs in Decision Games through Algorithmic Data Synthesis [29.128280701799074]
Decision-making games serve as ideal sandboxes for evaluating and enhancing the reasoning abilities of Large Language Models.
We design data synthesis strategies and curate extensive offline datasets from two classic games, Doudizhu and Go.
We develop a suite of techniques to effectively incorporate this data into LLM training, resulting in two novel agents: Mastermind-Dou and Mastermind-Go.
arXiv Detail & Related papers (2025-03-18T07:30:29Z) - When Continue Learning Meets Multimodal Large Language Model: A Survey [7.250878248686215]
Fine-tuning MLLMs for specific tasks often causes performance degradation in the model's prior knowledge domain.
This review paper presents an overview and analysis of 440 research papers in this area.
arXiv Detail & Related papers (2025-02-27T03:39:10Z) - Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search [57.28671084993782]
Large language models (LLMs) have demonstrated remarkable reasoning capabilities across diverse domains. Recent studies have shown that increasing test-time computation enhances LLMs' reasoning capabilities. We propose a two-stage training paradigm: 1) a small-scale format-tuning stage to internalize the Chain-of-Action-Thought (COAT) reasoning format and 2) a large-scale self-improvement stage leveraging reinforcement learning.
arXiv Detail & Related papers (2025-02-04T17:26:58Z) - Game-theoretic LLM: Agent Workflow for Negotiation Games [30.83905391503607]
This paper investigates the rationality of large language models (LLMs) in strategic decision-making contexts.
We design multiple game-theoretic workflows that guide the reasoning and decision-making processes of LLMs.
The findings have implications for the development of more robust and strategically sound AI agents.
arXiv Detail & Related papers (2024-11-08T22:02:22Z) - How Far Are We on the Decision-Making of LLMs? Evaluating LLMs' Gaming Ability in Multi-Agent Environments [83.78240828340681]
GAMA(γ)-Bench is a new framework for evaluating Large Language Models' gaming ability in multi-agent environments. γ-Bench includes eight classical game theory scenarios and a dynamic scoring scheme specially designed to assess LLMs' performance. Our results indicate that GPT-3.5 demonstrates strong robustness but limited generalizability, which can be enhanced using methods like Chain-of-Thought.
arXiv Detail & Related papers (2024-03-18T14:04:47Z) - LLM Inference Unveiled: Survey and Roofline Model Insights [62.92811060490876]
Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges.
Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model.
This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems.
arXiv Detail & Related papers (2024-02-26T07:33:05Z) - GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations [87.99872683336395]
Large Language Models (LLMs) are integrated into critical real-world applications.
This paper evaluates LLMs' reasoning abilities in competitive environments.
We first propose GTBench, a language-driven environment comprising 10 widely recognized tasks.
arXiv Detail & Related papers (2024-02-19T18:23:36Z) - Large Language Models for Causal Discovery: Current Landscape and Future Directions [5.540272236593385]
Causal discovery (CD) and Large Language Models (LLMs) have emerged as transformative fields in artificial intelligence.
This survey examines how LLMs are transforming CD across three key dimensions: direct causal extraction from text, integration of domain knowledge into statistical methods, and refinement of causal structures.
arXiv Detail & Related papers (2024-02-16T20:48:53Z) - Evolutionary Computation in the Era of Large Language Model: Survey and Roadmap [26.959633651475016]
Large language models (LLMs) and evolutionary algorithms (EAs) share a common pursuit of applicability to complex problems.
The abundant domain knowledge inherent in LLMs could enable EAs to conduct more intelligent searches.
This paper provides a thorough review and a forward-looking roadmap, categorizing the reciprocal inspiration into two main avenues.
arXiv Detail & Related papers (2024-01-18T14:58:17Z) - ALYMPICS: LLM Agents Meet Game Theory -- Exploring Strategic Decision-Making with AI Agents [77.34720446306419]
Alympics is a systematic simulation framework utilizing Large Language Model (LLM) agents for game theory research.
Alympics creates a versatile platform for studying complex game theory problems.
arXiv Detail & Related papers (2023-11-06T16:03:46Z) - A Comprehensive Overview of Large Language Models [68.22178313875618]
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in natural language processing tasks.
This article provides an overview of the existing literature on a broad range of LLM-related concepts.
arXiv Detail & Related papers (2023-07-12T20:01:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.