AI sustains higher strategic tension than humans in chess
- URL: http://arxiv.org/abs/2508.13213v1
- Date: Sat, 16 Aug 2025 22:53:34 GMT
- Title: AI sustains higher strategic tension than humans in chess
- Authors: Adamo Cerioli, Edward D. Lee, Vito D. P. Servedio
- Abstract summary: Strategic decision-making involves managing the tension between immediate opportunities and long-term objectives. We study this trade-off in chess by characterizing and comparing dynamics between human vs human and AI vs AI games. We propose a network-based metric to quantify the ongoing strategic tension on the board.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Strategic decision-making involves managing the tension between immediate opportunities and long-term objectives. We study this trade-off in chess by characterizing and comparing dynamics between human vs human and AI vs AI games. We propose a network-based metric of piece-to-piece interaction to quantify the ongoing strategic tension on the board. Its evolution in games reveals that the most competitive AI players sustain higher levels of strategic tension for longer durations than elite human players. Cumulative tension varies with algorithmic complexity for AI and correspondingly in human-played games increases abruptly with expertise at about 1600 Elo and again at 2300 Elo. The profiles reveal different approaches. Highly competitive AI tolerates interconnected positions balanced between offensive and defensive tactics over long periods. Human play, in contrast, limits tension and game complexity, which may reflect cognitive limitations and adaptive strategies. The difference may have implications for AI usage in complex, strategic environments.
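The abstract describes the tension metric only at a high level: a network of piece-to-piece interactions whose edges capture the attacks and defenses currently on the board. The paper's exact definition is not given here, so the following is a minimal Python sketch of the general idea; the `Piece` type, the `tension` function, and the toy position are illustrative assumptions, not the authors' implementation.

```python
from collections import namedtuple

# A piece is a node in the interaction network, keyed by its square.
Piece = namedtuple("Piece", ["name", "color", "square"])

def tension(pieces, attacks):
    """Count directed attack edges between opposite-colored pieces.

    pieces:  iterable of Piece
    attacks: iterable of (attacker_square, target_square) pairs,
             e.g. produced by a move generator.
    """
    by_square = {p.square: p for p in pieces}
    edges = 0
    for src, dst in attacks:
        a, b = by_square.get(src), by_square.get(dst)
        # Same-color pairs are defenses, not tension-carrying attacks.
        if a and b and a.color != b.color:
            edges += 1
    return edges

# Toy position: a white knight and a black bishop attack each other,
# while a white pawn defends the knight.
pieces = [Piece("N", "white", "e4"), Piece("B", "black", "d6"),
          Piece("P", "white", "d2")]
attacks = [("e4", "d6"), ("d6", "e4"), ("d2", "e4")]
print(tension(pieces, attacks))  # 2 inter-color attack edges
```

Tracking this count move by move gives a per-game tension profile, which is the kind of trajectory the paper compares between human and AI games.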
Related papers
- Enhancing Language Agent Strategic Reasoning through Self-Play in Adversarial Games [60.213483076150844]
We propose a Step-level poliCy Optimization method through Play-And-Learn, SCO-PAL.
We conduct a detailed analysis of opponent selection by setting opponents at different levels and find that self-play is the most effective way to improve strategic reasoning.
We achieve a 54.76% win rate against GPT-4 in six adversarial games.
arXiv Detail & Related papers (2025-10-19T09:03:28Z) - A Behavior-Based Knowledge Representation Improves Prediction of Players' Moves in Chess by 25% [2.232417329532027]
This paper proposes a novel approach combining expert knowledge with machine learning techniques to predict human players' next moves.
By applying feature engineering grounded in domain expertise, we seek to uncover the patterns in the moves of intermediate-level chess players.
Our methodology offers a promising framework for anticipating human behavior, advancing both the fields of AI and human-computer interaction.
arXiv Detail & Related papers (2025-04-07T18:49:00Z) - Superintelligence Strategy: Expert Version [64.7113737051525]
Destabilizing AI developments could raise the odds of great-power conflict.
Superintelligence -- AI vastly better than humans at nearly all cognitive tasks -- is now anticipated by AI researchers.
We introduce the concept of Mutual Assured AI Malfunction.
arXiv Detail & Related papers (2025-03-07T17:53:24Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart.
We argue that AI systems particularly struggle with metacognition.
We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Human-aligned Chess with a Bit of Search [35.16633353273246]
Chess has long been a testbed for AI's quest to match human intelligence.
In this paper, we introduce Allie, a chess-playing AI designed to bridge the gap between artificial and human intelligence in this classic game.
arXiv Detail & Related papers (2024-10-04T19:51:03Z) - Human vs. Machine: Behavioral Differences Between Expert Humans and Language Models in Wargame Simulations [1.6108153271585284]
We show that large language models (LLMs) behave differently compared to humans in high-stakes military decision-making scenarios.
Our results motivate policymakers to be cautious before granting autonomy or following AI-based strategy recommendations.
arXiv Detail & Related papers (2024-03-06T02:23:32Z) - Toward Human-AI Alignment in Large-Scale Multi-Player Games [24.784173202415687]
We analyze extensive human gameplay data from Xbox's Bleeding Edge (100K+ games).
We find that while human players exhibit variability in fight-flight and explore-exploit behavior, AI players tend towards uniformity.
These stark differences underscore the need for interpretable evaluation, design, and integration of AI in human-aligned applications.
arXiv Detail & Related papers (2024-02-05T22:55:33Z) - DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
In order to further enhance the AI's capabilities, we apply policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z) - Diversity-based Deep Reinforcement Learning Towards Multidimensional Difficulty for Fighting Game AI [0.9645196221785693]
We introduce a diversity-based deep reinforcement learning approach for generating a set of agents of similar difficulty.
We find this approach outperforms a baseline trained with specialized, human-authored reward functions in both diversity and performance.
arXiv Detail & Related papers (2022-11-04T21:49:52Z) - Mastering the Game of Stratego with Model-Free Multiagent Reinforcement Learning [86.37438204416435]
Stratego is one of the few iconic board games that Artificial Intelligence (AI) has not yet mastered.
Decisions in Stratego are made over a large number of discrete actions with no obvious link between action and outcome.
DeepNash beats existing state-of-the-art AI methods in Stratego and achieved a yearly (2022) and all-time top-3 rank on the Gravon games platform.
arXiv Detail & Related papers (2022-06-30T15:53:19Z) - Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z) - Multi-AI competing and winning against humans in iterated Rock-Paper-Scissors game [4.2124879433151605]
We use an AI algorithm based on Markov Models of one fixed memory length to compete against humans in an iterated Rock Paper Scissors game.
We develop an architecture of multi-AI with changeable parameters to adapt to different competition strategies.
Our strategy could win against more than 95% of human opponents.
arXiv Detail & Related papers (2020-03-15T06:39:59Z)
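The last entry above describes an AI built on Markov models of one fixed memory length: the opponent's last few moves form a context, and the model counts which move tends to follow that context, then plays the counter. The class name `MarkovRPS` and the fallback move below are illustrative choices, not details from the paper.

```python
from collections import defaultdict, Counter

BEATS = {"R": "P", "P": "S", "S": "R"}  # the move that beats each move

class MarkovRPS:
    """Predict an opponent's next move from their last `memory` moves."""

    def __init__(self, memory=1):
        self.memory = memory
        self.history = []                    # opponent's moves so far
        self.counts = defaultdict(Counter)   # context -> next-move counts

    def observe(self, move):
        """Record one opponent move and update the transition counts."""
        if len(self.history) >= self.memory:
            ctx = tuple(self.history[-self.memory:])
            self.counts[ctx][move] += 1
        self.history.append(move)

    def play(self):
        """Counter the most likely next move given the current context."""
        ctx = tuple(self.history[-self.memory:])
        if ctx in self.counts:
            predicted = self.counts[ctx].most_common(1)[0][0]
            return BEATS[predicted]
        return "R"  # no data for this context yet: fixed fallback

ai = MarkovRPS(memory=1)
for m in "RPRPRP":        # an opponent who alternates Rock and Paper
    ai.observe(m)
print(ai.play())  # last move was P; P is usually followed by R, so play P
```

The paper's multi-AI architecture would run several such predictors with different parameters and switch between them based on recent performance; the sketch above covers only a single fixed-memory model.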
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.