Can Large Language Models Develop Strategic Reasoning? Post-training Insights from Learning Chess
- URL: http://arxiv.org/abs/2507.00726v3
- Date: Wed, 27 Aug 2025 22:56:13 GMT
- Title: Can Large Language Models Develop Strategic Reasoning? Post-training Insights from Learning Chess
- Authors: Dongyoon Hwang, Hojoon Lee, Jaegul Choo, Dongmin Park, Jongho Park
- Abstract summary: We investigate whether large language models (LLMs) can develop strategic reasoning capabilities through reinforcement learning (RL) in chess. Our experiments show that our distillation-based dense rewards often outperform sparse binary rewards. We provide SFT and RL ablations on chess reasoning training and find evidence that the resulting performance plateau stems from a deficit in the pretrained models' internal understanding of chess.
- Score: 54.5355907369231
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While reinforcement learning (RL) for large language models (LLMs) has shown promise in mathematical reasoning, strategic reasoning for LLMs using RL remains largely unexplored. We investigate whether LLMs can develop strategic reasoning capabilities through RL in chess. To this end, we leverage a chess-pretrained action-value network to provide a dense reward on the quality of the LLM's output moves, which can be seen as a form of knowledge distillation. Our experiments show that our distillation-based dense rewards often outperform sparse binary rewards. However, surprisingly, all models plateau far below expert levels. We provide SFT and RL ablations on chess reasoning training and find evidence that this limitation stems from a deficit in the pretrained models' internal understanding of chess, a deficit which RL alone may not be able to fully overcome. The code is available at https://github.com/krafton-ai/Chess-R1.
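The contrast between the paper's two reward schemes can be made concrete with a minimal sketch. All function names and the toy Q-value table below are invented for illustration; the actual work queries a chess-pretrained action-value network over board states rather than a hand-written dictionary.

```python
def sparse_reward(move: str, best_move: str) -> float:
    """Binary reward: 1 only if the LLM's move matches the reference best move."""
    return 1.0 if move == best_move else 0.0

def dense_reward(move: str, q_values: dict[str, float]) -> float:
    """Distillation-style dense reward: grade the move by its action value,
    normalized against the best and worst legal moves in this position."""
    if move not in q_values:  # illegal or unparseable move
        return 0.0
    lo, hi = min(q_values.values()), max(q_values.values())
    if hi == lo:  # all moves equally good; avoid division by zero
        return 1.0
    return (q_values[move] - lo) / (hi - lo)

# Hypothetical Q-values for three legal moves in some position.
q = {"e2e4": 0.62, "d2d4": 0.60, "a2a3": 0.35}

print(sparse_reward("d2d4", best_move="e2e4"))  # 0.0: a near-best move gets no credit
print(dense_reward("d2d4", q))                  # ~0.93: the dense reward keeps the signal
```

This illustrates why the dense scheme can outperform the sparse one during RL: a nearly optimal move still produces a useful gradient signal instead of the flat zero a binary match would give.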
Related papers
- SAIL-RL: Guiding MLLMs in When and How to Think via Dual-Reward RL Tuning [48.43989881030515]
We introduce SAIL-RL, a reinforcement learning framework that enhances the reasoning capabilities of multimodal large language models (MLLMs) by teaching them when and how to think. SAIL-RL addresses these challenges with a dual reward system: the Thinking Reward, which evaluates reasoning quality through factual grounding, logical coherence, and answer consistency, and the Judging Reward, which adaptively determines whether deep reasoning or direct answering is appropriate.
arXiv Detail & Related papers (2025-11-04T05:34:06Z) - From $f(x)$ and $g(x)$ to $f(g(x))$: LLMs Learn New Skills in RL by Composing Old Ones [68.68686526804909]
We show that LLMs can acquire genuinely new skills during RL by composing existing ones. Our experiments show that compositional skill acquired on a source task transfers to a different target task. This transfer happens even without compositional training on the target, requiring only prior knowledge of the target's atomic skills.
arXiv Detail & Related papers (2025-09-29T17:44:27Z) - ChessArena: A Chess Testbed for Evaluating Strategic Reasoning Capabilities of Large Language Models [11.234477661864736]
This paper presents ChessArena, a chess testbed for evaluating the strategic reasoning capabilities of large language models (LLMs). Chess requires complex strategic reasoning capabilities including long-term planning, strict rule comprehension, and multi-turn conversation memorization. We show that no model can beat Maia-1100 (a chess engine at human amateur level), while some even failed to defeat a random player that selects moves arbitrarily. We also present a strong baseline for the testbed: our fine-tuned Qwen3-8B substantially improves performance, approaching much larger state-of-the-art reasoning models.
arXiv Detail & Related papers (2025-09-29T03:24:48Z) - A Survey of Reinforcement Learning for Large Reasoning Models [98.58081012669369]
A review of recent advances in reinforcement learning for reasoning with large language models. Further scaling of RL for large reasoning models (LRMs) now faces challenges not only in computational resources but also in algorithm design, training data, and infrastructure.
arXiv Detail & Related papers (2025-09-10T17:59:43Z) - Can Large Language Models Master Complex Card Games? [18.39826127562161]
Large language models (LLMs) have exhibited remarkable capabilities across various tasks. We show that LLMs can approach the performance of strong game AIs through supervised fine-tuning on high-quality data. LLMs experience a decline in general capabilities when mastering complex games, but this decline can be mitigated by integrating a certain amount of general instruction data.
arXiv Detail & Related papers (2025-09-01T10:11:56Z) - Game-RL: Synthesizing Multimodal Verifiable Game Data to Boost VLMs' General Reasoning [89.93384726755106]
Vision-language reinforcement learning (RL) has primarily focused on narrow domains. We find that video games inherently provide rich visual elements and mechanics that are easy to verify. To fully use the multimodal and verifiable rewards in video games, we propose Game-RL.
arXiv Detail & Related papers (2025-05-20T03:47:44Z) - LLMs are Greedy Agents: Effects of RL Fine-tuning on Decision-Making Abilities [21.42711537107199]
We study why Large Language Models (LLMs) perform sub-optimally in decision-making scenarios. We propose mitigating these shortcomings by fine-tuning via Reinforcement Learning (RL) on self-generated CoT rationales.
arXiv Detail & Related papers (2025-04-22T17:57:14Z) - Empowering LLMs in Decision Games through Algorithmic Data Synthesis [29.128280701799074]
Decision-making games serve as ideal sandboxes for evaluating and enhancing the reasoning abilities of Large Language Models. We design data synthesis strategies and curate extensive offline datasets from two classic games, Doudizhu and Go. We develop a suite of techniques to effectively incorporate this data into LLM training, resulting in two novel agents: Mastermind-Dou and Mastermind-Go.
arXiv Detail & Related papers (2025-03-18T07:30:29Z) - Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search [57.28671084993782]
Large language models (LLMs) have demonstrated remarkable reasoning capabilities across diverse domains. Recent studies have shown that increasing test-time computation enhances LLMs' reasoning capabilities. We propose a two-stage training paradigm: 1) a small-scale format tuning stage to internalize the Chain-of-Action-Thought (COAT) reasoning format and 2) a large-scale self-improvement stage leveraging reinforcement learning.
arXiv Detail & Related papers (2025-02-04T17:26:58Z) - SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training [127.47044960572659]
Supervised fine-tuning (SFT) and reinforcement learning (RL) are widely used post-training techniques for foundation models. This paper studies the difference between SFT and RL on generalization and memorization. We show that RL, especially when trained with an outcome-based reward, generalizes across both rule-based textual and visual variants.
arXiv Detail & Related papers (2025-01-28T18:59:44Z) - Large Language Models Playing Mixed Strategy Nash Equilibrium Games [1.060608983034705]
This paper focuses on the capabilities of Large Language Models to find the Nash equilibrium in games with a mixed strategy Nash equilibrium and no pure strategy Nash equilibrium.
The study reveals a significant enhancement in the performance of LLMs when they are equipped with the possibility to run code.
It is evident that while LLMs exhibit remarkable proficiency in well-known standard games, their performance dwindles when faced with slight modifications of the same games.
arXiv Detail & Related papers (2024-06-15T09:30:20Z) - Teaching Large Language Models to Reason with Reinforcement Learning [38.17625148525193]
Reinforcement Learning from Human Feedback (RLHF) has emerged as a dominant approach for aligning LLM outputs with human preferences.
Inspired by the success of RLHF, we study the performance of multiple algorithms that learn from feedback.
arXiv Detail & Related papers (2024-03-07T16:36:29Z) - GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations [87.99872683336395]
Large Language Models (LLMs) are integrated into critical real-world applications.
This paper evaluates LLMs' reasoning abilities in competitive environments.
We first propose GTBench, a language-driven environment composing 10 widely recognized tasks.
arXiv Detail & Related papers (2024-02-19T18:23:36Z) - ALYMPICS: LLM Agents Meet Game Theory -- Exploring Strategic Decision-Making with AI Agents [77.34720446306419]
Alympics is a systematic simulation framework utilizing Large Language Model (LLM) agents for game theory research.
Alympics creates a versatile platform for studying complex game theory problems.
arXiv Detail & Related papers (2023-11-06T16:03:46Z) - LaGR-SEQ: Language-Guided Reinforcement Learning with Sample-Efficient Querying [71.86163159193327]
Large language models (LLMs) have recently demonstrated their impressive ability to provide context-aware responses via text.
This ability could potentially be used to predict plausible solutions in sequential decision making tasks pertaining to pattern completion.
We introduce LaGR, which uses this predictive ability of LLMs to propose solutions to tasks that have been partially completed by a primary reinforcement learning (RL) agent.
arXiv Detail & Related papers (2023-08-21T02:07:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.