Do Persona-Infused LLMs Affect Performance in a Strategic Reasoning Game?
- URL: http://arxiv.org/abs/2512.06867v1
- Date: Sun, 07 Dec 2025 14:42:29 GMT
- Title: Do Persona-Infused LLMs Affect Performance in a Strategic Reasoning Game?
- Authors: John Licato, Stephen Steinle, Brayden Hollis
- Abstract summary: We investigate the impact of persona prompting on strategic performance in PERIL, a world-domination board game. Our findings reveal that certain personas associated with strategic thinking improve game performance, but only when a mediator is used to translate personas into heuristic values.
- Score: 2.599882743586164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although persona prompting in large language models appears to trigger different styles of generated text, it is unclear whether these styles translate into measurable behavioral differences, much less whether they affect decision-making in an adversarial strategic environment, which we provide as open source. We investigate the impact of persona prompting on strategic performance in PERIL, a world-domination board game. Specifically, we compare the effectiveness of persona-derived heuristic strategies to those chosen manually. Our findings reveal that certain personas associated with strategic thinking improve game performance, but only when a mediator is used to translate personas into heuristic values. We introduce this mediator as a structured translation process, inspired by exploratory factor analysis, that maps LLM-generated inventory responses into heuristics. Results indicate our method enhances heuristic reliability and face validity compared to directly inferred heuristics, allowing us to better study the effect of persona types on decision making. These insights advance our understanding of how persona prompting influences LLM-based decision-making and propose a heuristic generation method that applies psychometric principles to LLMs.
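The abstract's "mediator" maps LLM inventory responses into heuristic values via a process inspired by exploratory factor analysis. A minimal sketch of that idea, assuming Likert-scale inventory answers and using an eigendecomposition of the item correlation matrix as a stand-in for full EFA (all names, dimensions, and data here are illustrative assumptions, not the authors' actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows: persona prompts answered by the LLM; columns: inventory items
# (e.g., "I prefer aggressive expansion", rated 1-5). Synthetic data.
responses = rng.integers(1, 6, size=(8, 12)).astype(float)

# Standardize items, then extract latent factors from the correlation
# structure (a minimal stand-in for exploratory factor analysis).
z = (responses - responses.mean(0)) / (responses.std(0) + 1e-9)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)

# Keep the top-k factors as candidate heuristics (hypothetically,
# "aggression", "risk tolerance", "alliance-seeking").
k = 3
loadings = eigvecs[:, ::-1][:, :k]

# Each persona's heuristic values are its factor scores, rescaled to
# [0, 1] so they could parameterize a game-playing agent.
scores = z @ loadings
heuristics = (scores - scores.min(0)) / (np.ptp(scores, axis=0) + 1e-9)
print(heuristics.shape)  # (8 personas, 3 heuristic values)
```

The point of the intermediate factor step, as the abstract argues, is reliability: heuristic values derived from correlated item clusters should be more stable than values the LLM is asked to emit directly.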
Related papers
- PATS: Personality-Aware Teaching Strategies with Large Language Model Tutors [66.56586559631516]
Large language models (LLMs) have potential as educational tutors. But different tutoring strategies benefit different student personalities. Despite this, current LLM tutoring systems do not take into account student personality traits.
arXiv Detail & Related papers (2026-01-13T10:17:26Z) - Beyond Survival: Evaluating LLMs in Social Deduction Games with Human-Aligned Strategies [54.08697738311866]
Social deduction games like Werewolf combine language, reasoning, and strategy. We curate a high-quality, human-verified multimodal Werewolf dataset containing over 100 hours of video, 32.4M utterance tokens, and 15 rule variants. We propose a novel strategy-alignment evaluation that leverages the winning faction's strategies as ground truth in two stages.
arXiv Detail & Related papers (2025-10-13T13:33:30Z) - InMind: Evaluating LLMs in Capturing and Applying Individual Human Reasoning Styles [39.025684190110276]
Social deduction games provide a natural testbed for evaluating individualized reasoning styles. We introduce InMind, a cognitively grounded evaluation framework designed to assess whether LLMs can capture and apply personalized reasoning styles. As a case study, we apply InMind to the game Avalon, evaluating 11 state-of-the-art LLMs.
arXiv Detail & Related papers (2025-08-22T04:04:00Z) - Who is a Better Player: LLM against LLM [53.46608216197315]
We propose an adversarial benchmarking framework to assess the comprehensive performance of Large Language Models (LLMs) through board games competition. We introduce Qi Town, a specialized evaluation platform that supports 5 widely played games and involves 20 LLM-driven players.
arXiv Detail & Related papers (2025-08-05T06:41:47Z) - Beyond Nash Equilibrium: Bounded Rationality of LLMs and humans in Strategic Decision-making [33.2843381902912]
Large language models are increasingly used in strategic decision-making settings. We compare LLMs and humans using experimental paradigms adapted from behavioral game-theory research.
arXiv Detail & Related papers (2025-06-11T04:43:54Z) - On the Adaptive Psychological Persuasion of Large Language Models [37.18479986426215]
We show that Large Language Models (LLMs) can autonomously persuade and resist persuasion. We introduce eleven comprehensive psychological persuasion strategies. We propose an adaptive framework that trains LLMs to autonomously select optimal strategies.
arXiv Detail & Related papers (2025-06-07T13:52:50Z) - Investigating Context Effects in Similarity Judgements in Large Language Models [6.421776078858197]
Large Language Models (LLMs) have revolutionised the capability of AI models in comprehending and generating natural language text.
We report an ongoing investigation into the alignment of LLMs with human judgements, as affected by order bias.
arXiv Detail & Related papers (2024-08-20T10:26:02Z) - Are Large Language Models Strategic Decision Makers? A Study of Performance and Bias in Two-Player Non-Zero-Sum Games [56.70628673595041]
Large Language Models (LLMs) have been increasingly used in real-world settings, yet their strategic decision-making abilities remain largely unexplored.
This work investigates the performance and merits of LLMs in canonical game-theoretic two-player non-zero-sum games: Stag Hunt and the Prisoner's Dilemma.
Our structured evaluation of GPT-3.5, GPT-4-Turbo, GPT-4o, and Llama-3-8B shows that these models, when making decisions in these games, are affected by at least one of the following systematic biases.
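The games named above are standard two-player non-zero-sum benchmarks. As a concrete reference point (textbook payoff values, not taken from the paper), a brute-force best-response check recovers the Stag Hunt's two pure-strategy Nash equilibria, which is the baseline against which LLM biases are typically measured:

```python
# Standard textbook Stag Hunt payoffs: mutual stag hunting pays best,
# hunting hare is a safe fallback. Entries are (row_payoff, col_payoff).
STAG_HUNT = {
    ("stag", "stag"): (4, 4), ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0), ("hare", "hare"): (3, 3),
}

def pure_nash(game, actions=("stag", "hare")):
    """Return all pure-strategy Nash equilibria by checking that neither
    player can gain from a unilateral deviation."""
    equilibria = []
    for r in actions:
        for c in actions:
            row_ok = all(game[(r, c)][0] >= game[(a, c)][0] for a in actions)
            col_ok = all(game[(r, c)][1] >= game[(r, a)][1] for a in actions)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

print(pure_nash(STAG_HUNT))  # [('stag', 'stag'), ('hare', 'hare')]
```

The coexistence of a payoff-dominant equilibrium (stag, stag) and a risk-dominant one (hare, hare) is what makes the Stag Hunt a useful probe for systematic biases in model play.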
arXiv Detail & Related papers (2024-07-05T12:30:02Z) - Character is Destiny: Can Role-Playing Language Agents Make Persona-Driven Decisions? [59.0123596591807]
We benchmark the ability of Large Language Models (LLMs) in persona-driven decision-making.
We investigate whether LLMs can predict characters' decisions provided by the preceding stories in high-quality novels.
The results demonstrate that state-of-the-art LLMs exhibit promising capabilities in this task, yet substantial room for improvement remains.
arXiv Detail & Related papers (2024-04-18T12:40:59Z) - From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
arXiv Detail & Related papers (2023-10-24T19:46:04Z) - Introspective Tips: Large Language Model for In-Context Decision Making [48.96711664648164]
We employ "Introspective Tips" to facilitate large language models (LLMs) in self-optimizing their decision-making.
Our method enhances the agent's performance in both few-shot and zero-shot learning situations.
Experiments involving over 100 games in TextWorld illustrate the superior performance of our approach.
arXiv Detail & Related papers (2023-05-19T11:20:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.