LTLf Synthesis Under Unreliable Input
- URL: http://arxiv.org/abs/2412.14728v1
- Date: Thu, 19 Dec 2024 10:54:17 GMT
- Title: LTLf Synthesis Under Unreliable Input
- Authors: Christian Hagemeier, Giuseppe de Giacomo, Moshe Y. Vardi
- Abstract summary: We study the problem of realizing strategies for an LTLf goal specification while ensuring that at least an LTLf backup specification is satisfied in case of unreliable input variables.
We devise three different solution techniques: one based on direct automata manipulation, which is 2EXPTIME, one disregarding unreliable input variables by adopting a belief construction, which is 3EXPTIME, and one leveraging second-order quantified LTLf (QLTLf), which is 2EXPTIME.
We prove their correctness and evaluate them against each other empirically. Interestingly, theoretical worst-case bounds do not translate into observed performance; the MSO technique performs best, followed by belief construction and direct automata manipulation.
- Score: 36.04060143603635
- License:
- Abstract: We study the problem of realizing strategies for an LTLf goal specification while ensuring that at least an LTLf backup specification is satisfied in case of unreliability of certain input variables. We formally define the problem and characterize its worst-case complexity as 2EXPTIME-complete, like standard LTLf synthesis. Then we devise three different solution techniques: one based on direct automata manipulation, which is 2EXPTIME, one disregarding unreliable input variables by adopting a belief construction, which is 3EXPTIME, and one leveraging second-order quantified LTLf (QLTLf), which is 2EXPTIME and allows for a direct encoding into monadic second-order logic, which in turn is worst-case nonelementary. We prove their correctness and evaluate them against each other empirically. Interestingly, theoretical worst-case bounds do not translate into observed performance; the MSO technique performs best, followed by belief construction and direct automata manipulation. As a byproduct of our study, we provide a general synthesis procedure for arbitrary QLTLf specifications.
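To make the baseline concrete, below is a minimal sketch (not the paper's construction) of the classical DFA reachability game that standard LTLf synthesis reduces to: the LTLf goal is compiled to a DFA, and the agent wins if it can force the play into an accepting state. The two-state DFA, the single input bit x (environment), and the single output bit y (agent) are hypothetical illustrations; the paper's three techniques additionally address how to guarantee the backup specification when some inputs are unreliable.

```python
# A minimal sketch of the DFA reachability game underlying classical LTLf
# synthesis (the 2EXPTIME baseline the paper builds on); the concrete DFA and
# the single input/output bits are hypothetical.

DFA_STATES = {0, 1}      # hypothetical DFA: 0 = initial, 1 = accepting
ACCEPTING = {1}

def delta(state, x, y):
    """Transition function of the example DFA: once the agent's output y
    matches the environment's input x, the goal is considered achieved."""
    if state in ACCEPTING:
        return 1
    return 1 if x == y else 0

def winning_states():
    """Backward fixpoint: a state is winning if, for every environment input
    x, the agent can pick some output y whose successor is already winning.
    Accepting states are winning by definition (the finite trace can stop)."""
    win = set(ACCEPTING)
    changed = True
    while changed:
        changed = False
        for s in DFA_STATES - win:
            if all(any(delta(s, x, y) in win for y in (False, True))
                   for x in (False, True)):
                win.add(s)
                changed = True
    return win

if __name__ == "__main__":
    win = winning_states()
    # The goal is realizable iff the DFA's initial state is winning.
    print("winning states:", win, "| realizable:", 0 in win)
```

Running the sketch reports the initial state as winning, i.e., the toy goal is realizable; a winning strategy can then be read off the fixpoint by choosing, in each winning state, an output that keeps the play inside the winning set.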
Related papers
- LTLf+ and PPLTL+: Extending LTLf and PPLTL to Infinite Traces [44.35335751462176]
DFA-based synthesis techniques for LTLf+/PPLTL+ are presented.
We show that synthesis for PPLTL+, which retains full expressive power, is EXPTIME-complete instead of 2EXPTIME-complete.
arXiv Detail & Related papers (2024-11-14T11:17:06Z) - On-the-fly Synthesis for LTL over Finite Traces: An Efficient Approach that Counts [20.14001970300658]
We present an on-the-fly synthesis framework for Linear Temporal Logic over finite traces (LTLf) based on top-down deterministic automata construction.
arXiv Detail & Related papers (2024-08-14T06:52:58Z) - Predictable and Performant Reactive Synthesis Modulo Theories via Functional Synthesis [1.1797787239802762]
We show how to generate correct controllers from temporal logic specifications using classical reactive synthesis algorithms, which handle (propositional) LTL as a specification language.
We also show that our method allows predictable responses, in the sense that the controller can optimize its outputs to always provide the smallest safe values.
arXiv Detail & Related papers (2024-07-12T15:23:27Z) - LINC: A Neurosymbolic Approach for Logical Reasoning by Combining
Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
arXiv Detail & Related papers (2023-10-23T17:58:40Z) - Model Checking Strategies from Synthesis Over Finite Traces [25.871354900295056]
For model checking, the two types of transducers (terminating and non-terminating) are fundamentally different.
We show that model checking of non-terminating transducers is exponentially harder than that of terminating transducers.
arXiv Detail & Related papers (2023-05-15T03:09:20Z) - Confident Adaptive Language Modeling [95.45272377648773]
CALM is a framework for dynamically allocating different amounts of compute per input and generation timestep.
We demonstrate the efficacy of our framework in reducing compute -- potential speedup of up to $\times 3$ -- while provably maintaining high performance.
arXiv Detail & Related papers (2022-07-14T17:00:19Z) - Linear Temporal Logic Modulo Theories over Finite Traces (Extended
Version) [72.38188258853155]
This paper studies Linear Temporal Logic over Finite Traces (LTLf) where proposition letters are replaced with first-order formulas interpreted over arbitrary theories.
The resulting logic, called LTLf Modulo Theories (LTLfMT), is semi-decidable.
arXiv Detail & Related papers (2022-04-28T17:57:33Z) - Learning Bounded Context-Free-Grammar via LSTM and the
Transformer:Difference and Explanations [51.77000472945441]
Long Short-Term Memory (LSTM) and Transformers are two popular neural architectures used for natural language processing tasks.
In practice, it is often observed that Transformer models have better representation power than LSTM.
We study such practical differences between LSTM and Transformer and propose an explanation based on their latent space decomposition patterns.
arXiv Detail & Related papers (2021-12-16T19:56:44Z) - LTLf Synthesis on Probabilistic Systems [0.0]
Synthesis is used to find a policy that maximizes the probability of achieving the desired behavior.
No tools exist to solve policy synthesis for behaviors given as finite-trace (LTLf) properties.
We present two algorithms for solving this problem: the first via a reduction involving Markov processes and the second using native tools for LTLf.
arXiv Detail & Related papers (2020-09-23T01:26:47Z) - FastLR: Non-Autoregressive Lipreading Model with Integrate-and-Fire [74.04394069262108]
We propose FastLR, a non-autoregressive (NAR) lipreading model which generates all target tokens simultaneously.
FastLR achieves a speedup of up to 10.97$\times$ compared with the state-of-the-art lipreading model.
arXiv Detail & Related papers (2020-08-06T08:28:56Z)