Multi-Player Games with LDL Goals over Finite Traces
- URL: http://arxiv.org/abs/2008.05647v1
- Date: Thu, 13 Aug 2020 02:11:06 GMT
- Authors: Julian Gutierrez and Giuseppe Perelli and Michael Wooldridge
- Abstract summary: Linear Dynamic Logic on finite traces (LDLf) is a powerful logic for reasoning about concurrent and multi-agent systems.
We investigate techniques for both the characterisation and verification of equilibria in multi-player games with goals/objectives expressed using LDLf.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear Dynamic Logic on finite traces (LDLf) is a powerful logic for reasoning about the behaviour of concurrent and multi-agent systems.
In this paper, we investigate techniques for both the characterisation and
verification of equilibria in multi-player games with goals/objectives
expressed using logics based on LDLf. This study builds upon a generalisation
of Boolean games, a logic-based game model of multi-agent systems where players
have goals succinctly represented in a logical way.
Because LDLf goals are considered, players' goals in the settings we study (Reactive Modules games and iterated Boolean games with goals over finite traces) can be defined as regular properties that are achieved in a finite, but arbitrarily large, trace.
In particular, using alternating automata, the paper investigates
automata-theoretic approaches to the characterisation and verification of (pure
strategy Nash) equilibria, shows that the set of Nash equilibria in
multi-player games with LDLf objectives is regular, and provides complexity
results for the associated automata constructions.
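To make the setting concrete, the sketch below is a minimal Python illustration of the kind of equilibrium-verification question described above. It assumes a toy two-player game played for a fixed number of rounds, uses regular expressions in place of LDLf formulas, and restricts players to history-independent strategies; the horizon, the goals, and the helper names (play, satisfied, nash_equilibria) are illustrative assumptions, not constructions from the paper.

```python
import re
from itertools import product

HORIZON = 3  # length of the finite trace (assumed for illustration)

def play(strategy_a, strategy_b):
    """Build the finite trace: each round is encoded as two bits "ab"."""
    return "".join(f"{strategy_a[t]}{strategy_b[t]}" for t in range(HORIZON))

# Regular goals over finite traces (regexes stand in for LDLf formulas):
# player A wants some round in which both variables are 1; player B wants
# its variable b to be 1 in every round.  Both goals are made up for this example.
GOALS = {
    "A": re.compile(r"(..)*11(..)*"),
    "B": re.compile(r"(.1){%d}" % HORIZON),
}

def satisfied(player, trace):
    return GOALS[player].fullmatch(trace) is not None

def nash_equilibria():
    """Brute force: a profile is stable if no player can unilaterally deviate
    and newly satisfy its goal (goals induce binary payoffs)."""
    strategies = list(product("01", repeat=HORIZON))  # one bit per round
    outcomes = set()
    for sa, sb in product(strategies, strategies):
        trace = play(sa, sb)
        a_can_improve = (not satisfied("A", trace) and
                         any(satisfied("A", play(dev, sb)) for dev in strategies))
        b_can_improve = (not satisfied("B", trace) and
                         any(satisfied("B", play(sa, dev)) for dev in strategies))
        if not a_can_improve and not b_can_improve:
            outcomes.add(trace)
    return sorted(outcomes)

if __name__ == "__main__":
    print(nash_equilibria())  # the set of Nash equilibrium traces
```

The function returns equilibrium traces rather than strategy profiles, echoing the paper's result that the set of Nash equilibrium outcomes is regular; the paper establishes this with alternating automata rather than enumeration, and handles history-dependent strategies.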
Related papers
- LogicGame: Benchmarking Rule-Based Reasoning Abilities of Large Language Models (arXiv 2024-08-28)
Large Language Models (LLMs) have demonstrated notable capabilities across various tasks, showcasing complex problem-solving abilities.
We introduce LogicGame, a novel benchmark designed to evaluate the comprehensive rule understanding, execution, and planning capabilities of LLMs.
- Large Language Models Playing Mixed Strategy Nash Equilibrium Games (arXiv 2024-06-15)
This paper focuses on the capabilities of Large Language Models to find the Nash equilibrium in games with a mixed strategy Nash equilibrium and no pure strategy Nash equilibrium.
The study reveals a significant enhancement in the performance of LLMs when they are equipped with the possibility to run code.
It is evident that while LLMs exhibit remarkable proficiency in well-known standard games, their performance dwindles when faced with slight modifications of the same games.
- GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations (arXiv 2024-02-19)
Large Language Models (LLMs) are integrated into critical real-world applications.
This paper evaluates LLMs' reasoning abilities in competitive environments.
We first propose GTBench, a language-driven environment comprising 10 widely recognized tasks.
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs (arXiv 2024-02-18)
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework to construct an inferential rule base, ULogic.
- Exploring Self-supervised Logic-enhanced Training for Large Language Models (arXiv 2023-05-23)
In this paper, we make the first attempt to investigate the feasibility of incorporating logical knowledge through self-supervised post-training.
We devise an auto-regressive objective variant of MERIt and integrate it with two LLM series, i.e., FLAN-T5 and LLaMA, with parameter sizes ranging from 3 billion to 13 billion.
The results on two challenging logical reasoning benchmarks demonstrate the effectiveness of LogicLLM.
- A unified stochastic approximation framework for learning in games (arXiv 2022-06-08)
We develop a flexible stochastic approximation framework for analyzing the long-run behavior of learning in games (both continuous and finite).
The proposed analysis template incorporates a wide array of popular learning algorithms, including gradient-based methods, exponential/multiplicative weights for learning in finite games, and optimistic and bandit variants of the above; a minimal multiplicative-weights sketch appears after this list.
- Equilibria for Games with Combined Qualitative and Quantitative Objectives (arXiv 2020-08-13)
We study concurrent games in which each player is a process that is assumed to act independently and strategically.
Our main result is that deciding the existence of a strict epsilon Nash equilibrium in such games is 2ExpTime-complete.
- Automated Temporal Equilibrium Analysis: Verification and Synthesis of Multi-Player Games (arXiv 2020-08-13)
In multi-agent systems, the rational verification problem is concerned with checking which temporal logic properties will hold in a system.
We present a technique to reduce the rational verification problem to the solution of a collection of parity games.
- Certified Reinforcement Learning with Logic Guidance (arXiv 2019-02-02)
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability.
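The stochastic approximation entry above mentions exponential/multiplicative weights for learning in finite games. The sketch below is a minimal, self-contained illustration of that update rule on matching pennies; it is not code from any of the papers listed, and the payoff matrix, initial weights, step size, and iteration count are assumptions chosen for the example.

```python
import numpy as np

# Row player's payoffs in matching pennies (zero-sum): the column player
# receives the negation.  The unique Nash equilibrium is (1/2, 1/2) for both.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

def multiplicative_weights(payoff, steps=5000, eta=0.05):
    """Both players run multiplicative/exponential weights against each other.

    In two-player zero-sum games the time-averaged strategies approach a
    Nash equilibrium (up to an O(eta) error for a fixed step size).
    """
    row_w = np.array([3.0, 1.0])   # asymmetric start so the dynamics are non-trivial
    col_w = np.array([1.0, 1.0])
    row_avg = np.zeros(payoff.shape[0])
    col_avg = np.zeros(payoff.shape[1])
    for _ in range(steps):
        row_p = row_w / row_w.sum()
        col_p = col_w / col_w.sum()
        row_avg += row_p
        col_avg += col_p
        # Expected payoff of each pure action against the opponent's current mix.
        row_gain = payoff @ col_p        # row player maximises x^T A y
        col_gain = -(row_p @ payoff)     # column player maximises -x^T A y
        row_w = row_w * np.exp(eta * row_gain)
        col_w = col_w * np.exp(eta * col_gain)
    return row_avg / steps, col_avg / steps

if __name__ == "__main__":
    row_strategy, col_strategy = multiplicative_weights(A)
    print("row:", row_strategy)   # both averages approach [0.5, 0.5]
    print("col:", col_strategy)
```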