Strategy Logic, Imperfect Information, and Hyperproperties
- URL: http://arxiv.org/abs/2510.03952v1
- Date: Sat, 04 Oct 2025 21:37:14 GMT
- Title: Strategy Logic, Imperfect Information, and Hyperproperties
- Authors: Raven Beutner, Bernd Finkbeiner
- Abstract summary: We study the relation between SL$_\mathit{ii}$ and HyperSL. For the former direction, we build on the well-known observation that imperfect information is a hyperproperty. For the latter direction, we show how we can simulate hyperproperties using imperfect information.
- Score: 7.09016563801433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Strategy logic (SL) is a powerful temporal logic that enables first-class reasoning over strategic behavior in multi-agent systems (MAS). In many MASs, the agents (and their strategies) cannot observe the global state of the system, leading to many extensions of SL centered around imperfect information, such as strategy logic with imperfect information (SL$_\mathit{ii}$). Along orthogonal lines, researchers have studied the combination of strategic behavior and hyperproperties. Hyperproperties are system properties that relate multiple executions in a system and commonly arise when specifying security policies. Hyper Strategy Logic (HyperSL) is a temporal logic that combines quantification over strategies with the ability to express hyperproperties on the executions of different strategy profiles. In this paper, we study the relation between SL$_\mathit{ii}$ and HyperSL. Our main result is that both logics (restricted to formulas where no state formulas are nested within path formulas) are equivalent in the sense that we can encode SL$_\mathit{ii}$ instances into HyperSL instances and vice versa. For the former direction, we build on the well-known observation that imperfect information is a hyperproperty. For the latter direction, we construct a self-composition of MASs and show how we can simulate hyperproperties using imperfect information.
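As an illustrative sketch (not taken from the paper's formal syntax), the observation that imperfect information is a hyperproperty can be written as a relation between two plays: an observation-based strategy must choose the same action on any two plays as long as the agent's observations have agreed so far. In HyperLTL-style notation, where $\pi, \pi'$ range over plays and $\mathit{obs}_a$, $\mathit{act}_a$ (names chosen here for illustration) denote agent $a$'s observation and action:

```latex
% Hedged sketch: observation-determinism of agent a's strategy,
% expressed as a hyperproperty over pairs of plays.
% W is the weak-until operator: actions agree unless/until
% the observations of the two plays diverge.
\forall \pi.\; \forall \pi'.\;
  \big( \mathit{act}_a(\pi) = \mathit{act}_a(\pi') \big)
  \;\mathcal{W}\;
  \big( \mathit{obs}_a(\pi) \neq \mathit{obs}_a(\pi') \big)
```

This is the shape of encoding the abstract alludes to when it says "imperfect information is a hyperproperty": a per-strategy condition becomes a universally quantified constraint over pairs of executions.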
Related papers
- Step-Level Sparse Autoencoder for Reasoning Process Interpretation [48.99201531966593]
Large Language Models (LLMs) have achieved strong complex reasoning capabilities through Chain-of-Thought (CoT) reasoning. We propose the step-level sparse autoencoder (SSAE), which serves as an analytical tool to disentangle different aspects of LLMs' reasoning steps into sparse features. Experiments on multiple base models and reasoning tasks show the effectiveness of the extracted features.
arXiv Detail & Related papers (2026-03-03T14:25:02Z) - SHINE: A Scalable In-Context Hypernetwork for Mapping Context to LoRA in a Single Pass [55.28352410490407]
SHINE is a scalable hypernetwork that can map diverse meaningful contexts into high-quality LoRA adapters for large language models (LLMs). We introduce a pretraining and instruction fine-tuning pipeline, and train our hypernetwork to generate high-quality LoRA adapters in a single forward pass. Our work achieves outstanding results on various tasks, greatly saves time, computation, and memory costs compared to SFT-based LLM adaptation, and shows great potential for scaling.
arXiv Detail & Related papers (2026-02-06T03:40:31Z) - Zero-Shot Instruction Following in RL via Structured LTL Representations [54.08661695738909]
Linear temporal logic (LTL) is a compelling framework for specifying complex, structured tasks for reinforcement learning (RL) agents. Recent work has shown that interpreting instructions as finite automata, which can be seen as high-level programs monitoring task progress, enables learning a single generalist policy capable of executing arbitrary instructions at test time. We propose a novel approach to learning a multi-task policy for following arbitrary instructions that addresses this shortcoming.
arXiv Detail & Related papers (2025-12-02T10:44:51Z) - Plan before Solving: Problem-Aware Strategy Routing for Mathematical Reasoning with LLMs [49.995906301946]
Existing methods usually leverage a fixed strategy to guide Large Language Models (LLMs) to perform mathematical reasoning. Our analysis reveals that a single strategy cannot adapt to problem-specific requirements and thus overlooks the trade-off between effectiveness and efficiency. We propose Planning and Routing through Instance-Specific Modeling (PRISM), a novel framework that decouples mathematical reasoning into two stages: strategy planning and targeted execution.
arXiv Detail & Related papers (2025-09-29T07:22:41Z) - WebSailor-V2: Bridging the Chasm to Proprietary Agents via Synthetic Data and Scalable Reinforcement Learning [73.91893534088798]
WebSailor is a complete post-training methodology designed to instill this crucial capability. Our approach involves generating novel, high-uncertainty tasks through structured sampling and information obfuscation. WebSailor significantly outperforms all open-source agents in complex information-seeking tasks.
arXiv Detail & Related papers (2025-09-16T17:57:03Z) - Emergent Hierarchical Reasoning in LLMs through Reinforcement Learning [56.496001894673235]
Reinforcement Learning (RL) has proven highly effective at enhancing the complex reasoning abilities of Large Language Models (LLMs). Our analysis reveals that puzzling phenomena like "aha moments", "length-scaling", and entropy dynamics are not disparate occurrences but hallmarks of an emergent reasoning hierarchy.
arXiv Detail & Related papers (2025-09-03T18:52:49Z) - Hyperproperty-Constrained Secure Reinforcement Learning [0.16385815610837165]
This paper focuses on HyperTWTL-constrained secure reinforcement learning (SecRL). We propose an approach for learning security-aware optimal policies using dynamic Boltzmann softmax RL.
arXiv Detail & Related papers (2025-07-31T18:57:18Z) - Enhancing the Geometric Problem-Solving Ability of Multimodal LLMs via Symbolic-Neural Integration [57.95306827012784]
We propose GeoGen, a pipeline that can automatically generate step-wise reasoning paths for geometry diagrams. By leveraging precise symbolic reasoning, GeoGen produces large-scale, high-quality question-answer pairs. We train GeoLogic, a Large Language Model (LLM), using synthetic data generated by GeoGen.
arXiv Detail & Related papers (2025-04-17T09:13:46Z) - Non-Deterministic Planning for Hyperproperty Verification [4.726777092009553]
We show that planning offers a powerful intermediate language for the automated verification of hyperproperties.
We present an algorithm that, given a HyperLTL verification problem, constructs a non-deterministic multi-agent planning instance.
We show that for large fragments of HyperLTL, the resulting planning instance corresponds to a classical, FOND, or POND planning problem.
arXiv Detail & Related papers (2024-05-22T09:57:49Z) - Monitoring Second-Order Hyperproperties [4.099848175176399]
We study the monitoring of complex hyperproperties at runtime.
We present the first monitoring algorithm for the much more expressive class of second-order hyperproperties.
We show that the monitoring of the second-order hyperproperties of Hyper$^2$LTL$_f$ can be reduced to monitoring first-order hyperproperties.
arXiv Detail & Related papers (2024-04-15T10:33:39Z) - Hyper Strategy Logic [4.726777092009553]
Strategy logic (SL) is a powerful temporal logic that enables strategic reasoning in multi-agent systems.
We present Hyper Strategy Logic (HyperSL), a strategy logic where the outcome of multiple strategy profiles can be compared w.r.t. a hyperproperty.
We show that HyperSL can capture important properties that cannot be expressed in SL, including non-interference, quantitative Nash equilibria, optimal adversarial planning, and reasoning under imperfect information.
arXiv Detail & Related papers (2024-03-20T16:47:53Z) - On Alternating-Time Temporal Logic, Hyperproperties, and Strategy
Sharing [5.584060970507506]
We show that HyperATL$*_S$ is a rich specification language that captures important AI-related properties.
We implement our model-checking algorithm in a tool we call HyMASMC and evaluate it on a range of benchmarks.
arXiv Detail & Related papers (2023-12-19T18:37:39Z) - SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning [58.26384597768118]
SemiNLL is a versatile framework that combines SS strategies and SSL models in an end-to-end manner.
Our framework can absorb various SS strategies and SSL backbones, utilizing their power to achieve promising performance.
arXiv Detail & Related papers (2020-12-02T01:49:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.