Knowledge and Common Knowledge of Strategies
- URL: http://arxiv.org/abs/2510.19298v1
- Date: Wed, 22 Oct 2025 07:00:33 GMT
- Title: Knowledge and Common Knowledge of Strategies
- Authors: Borja Sierra Miranda, Thomas Studer
- Abstract summary: We propose a model where knowledge of strategies can be specified on a fine-grained level. It is possible to distinguish first-order, higher-order, and common knowledge of strategies.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing work on strategic reasoning simply adopts either an informed or an uninformed semantics. We propose a model where knowledge of strategies can be specified on a fine-grained level. In particular, it is possible to distinguish first-order, higher-order, and common knowledge of strategies. We illustrate the effect of higher-order knowledge of strategies by studying the game Hanabi. Further, we show that common knowledge of strategies is necessary to solve the consensus problem. Finally, we study the decidability of the model checking problem.
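The abstract's distinction between first-order, higher-order, and common knowledge can be made concrete with a small Kripke-model sketch. The following is an illustration under standard epistemic-logic semantics, not code from the paper; the model, agents, and relation names are invented for the example. An agent knows p at a world iff p holds at every world the agent considers possible there; "everyone knows p" intersects the agents' knowledge; p is common knowledge at a world iff p holds at every world reachable in one or more steps along the union of the agents' accessibility relations.

```python
def knows(rel, p_worlds, worlds):
    """Worlds where an agent knows p: every accessible world satisfies p."""
    return {w for w in worlds if rel.get(w, set()) <= p_worlds}

def everyone_knows(relations, p_worlds, worlds):
    """Worlds where every agent knows p (first-order group knowledge E p)."""
    ew = set(worlds)
    for rel in relations:
        ew &= knows(rel, p_worlds, worlds)
    return ew

def common_knowledge(relations, p_worlds, worlds):
    """C p holds at w iff p holds at every world reachable in one or more
    steps along the union of all agents' accessibility relations."""
    union = {w: set().union(*(rel.get(w, set()) for rel in relations))
             for w in worlds}
    ck = set()
    for w in worlds:
        seen, frontier = set(), set(union[w])
        while frontier:          # BFS over the union relation from w
            v = frontier.pop()
            if v in seen:
                continue
            seen.add(v)
            frontier |= union[v] - seen
        if seen <= p_worlds:
            ck.add(w)
    return ck

# Example: three worlds; p holds at worlds 1 and 2. At world 1 everyone
# knows p, yet p is not common knowledge there: agent b considers world 2
# possible, and at world 2 agent a considers world 3 possible, where p fails.
worlds = {1, 2, 3}
p_worlds = {1, 2}
ra = {1: {1}, 2: {2, 3}, 3: {2, 3}}   # agent a's accessibility relation
rb = {1: {1, 2}, 2: {1, 2}, 3: {3}}   # agent b's accessibility relation
```

Here `everyone_knows([ra, rb], p_worlds, worlds)` contains world 1 while `common_knowledge(...)` is empty, which is exactly the gap between higher-order and common knowledge the abstract refers to.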
Related papers
- Experience-driven discovery of planning strategies [0.9821874476902969]
We show that new planning strategies are discovered through metacognitive reinforcement learning. When fitted to human data, these models exhibit a slower discovery rate than humans, leaving room for improvement.
arXiv Detail & Related papers (2024-12-04T08:20:03Z)
- Learnability Gaps of Strategic Classification [68.726857356532]
We address a fundamental question: the learnability gaps between strategic classification and standard learning.
We provide nearly tight sample complexity and regret bounds, offering significant improvements over prior results.
Notably, our algorithm in this setting is of independent interest and can be applied to other problems such as multi-label learning.
arXiv Detail & Related papers (2024-02-29T16:09:19Z)
- Program-Based Strategy Induction for Reinforcement Learning [5.657991642023959]
We use Bayesian program induction to discover strategies implemented by programs, letting the simplicity of strategies trade off against their effectiveness.
We find strategies that are difficult or unexpected with classical incremental learning, like asymmetric learning from rewarded and unrewarded trials, adaptive horizon-dependent random exploration, and discrete state switching.
arXiv Detail & Related papers (2024-02-26T15:40:46Z)
- K-Level Reasoning: Establishing Higher Order Beliefs in Large Language Models for Strategic Reasoning [76.3114831562989]
Strategic reasoning requires Large Language Model (LLM) agents to adapt their strategies dynamically in multi-agent environments.
We propose a novel framework: "K-Level Reasoning with Large Language Models (K-R)".
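The idea behind k-level reasoning can be sketched with the classic "guess 2/3 of the average" game. This is a hypothetical illustration of iterated best response, not the paper's K-R framework: a level-0 player anchors at 50, and a level-k player best-responds assuming everyone else reasons at level k-1, so each level multiplies the guess by 2/3.

```python
def level_k_guess(k, anchor=50.0, factor=2.0 / 3.0):
    """Guess of a level-k player in the 2/3-of-the-average game."""
    guess = anchor
    for _ in range(k):
        guess = factor * guess  # best response to a level-(k-1) population
    return guess
```

As k grows the guesses shrink toward 0, the unique Nash equilibrium; the depth k one should actually play depends on the reasoning level of the opponents, which is the adaptation problem the framework targets.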
arXiv Detail & Related papers (2024-02-02T16:07:05Z)
- On strategies for risk management and decision making under uncertainty shared across multiple fields [55.2480439325792]
The paper finds more than 110 examples of such strategies and this approach to risk is termed RDOT: Risk-reducing Design and Operations Toolkit. RDOT strategies fall into six broad categories: structural, reactive, formal, adversarial, multi-stage and positive. Overall, RDOT represents an overlooked class of versatile responses to uncertainty.
arXiv Detail & Related papers (2023-09-06T16:14:32Z)
- Causal Reinforcement Learning: A Survey [57.368108154871]
Reinforcement learning is an essential paradigm for solving sequential decision problems under uncertainty.
One of the main obstacles is that reinforcement learning agents lack a fundamental understanding of the world.
Causality offers a notable advantage as it can formalize knowledge in a systematic manner.
arXiv Detail & Related papers (2023-07-04T03:00:43Z)
- Strategic Reasoning with Language Models [35.63300060111918]
Strategic reasoning enables agents to cooperate, communicate, and compete with other agents in diverse situations.
Existing approaches to solving strategic games rely on extensive training, yielding strategies that do not generalize to new scenarios or games without retraining.
This paper introduces an approach that uses pretrained Large Language Models with few-shot chain-of-thought examples to enable strategic reasoning for AI agents.
arXiv Detail & Related papers (2023-05-30T16:09:19Z)
- Strategy Extraction in Single-Agent Games [0.19336815376402716]
We propose an approach to knowledge transfer using behavioural strategies as a form of transferable knowledge influenced by the human cognitive ability to develop strategies.
We show that our method can identify plausible strategies in three environments: Pacman, Bank Heist and a dungeon-crawling video game.
arXiv Detail & Related papers (2023-05-22T01:28:59Z)
- A Closer Look at Knowledge Distillation with Features, Logits, and Gradients [81.39206923719455]
Knowledge distillation (KD) is a substantial strategy for transferring learned knowledge from one neural network model to another.
This work provides a new perspective to motivate a set of knowledge distillation strategies by approximating the classical KL-divergence criteria with different knowledge sources.
Our analysis indicates that logits are generally a more efficient knowledge source and suggests that having sufficient feature dimensions is crucial for the model design.
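Logit-based distillation of the kind this entry discusses can be sketched as a temperature-softened KL objective. This is an illustrative sketch under the standard convention (Hinton et al.), not code from the paper; the function names are mine. The student mimics the teacher's softmax at temperature T by minimizing KL(teacher || student), and the T² factor keeps gradient magnitudes comparable across temperatures.

```python
import math

def softmax(logits, T=1.0):
    """Softmax of logits softened by temperature T."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """T^2 * KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, T)   # soft teacher targets
    q = softmax(student_logits, T)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

The loss is zero exactly when the student reproduces the teacher's softened distribution and positive otherwise, which is why logits make such a direct knowledge source.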
arXiv Detail & Related papers (2022-03-18T21:26:55Z)
- Strategic Classification Made Practical [8.778578967271866]
We present a practical learning framework for strategic classification.
Our approach directly minimizes the "strategic" empirical risk, achieved by differentiating through the strategic response of users.
A series of experiments demonstrates the effectiveness of our approach on various learning settings.
arXiv Detail & Related papers (2021-03-02T16:03:26Z)
- Incremental Knowledge Based Question Answering [52.041815783025186]
We propose a new incremental KBQA learning framework that can progressively expand learning capacity as humans do.
Specifically, it comprises a margin-distilled loss and a collaborative selection method, to overcome the catastrophic forgetting problem.
The comprehensive experiments demonstrate its effectiveness and efficiency when working with the evolving knowledge base.
arXiv Detail & Related papers (2021-01-18T09:03:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.