Incremental LTLf Synthesis
- URL: http://arxiv.org/abs/2603.01201v1
- Date: Sun, 01 Mar 2026 17:42:03 GMT
- Title: Incremental LTLf Synthesis
- Authors: Giuseppe De Giacomo, Yves Lespérance, Gianmarco Parretti, Fabio Patrizi, Moshe Y. Vardi
- Abstract summary: We study incremental LTLf synthesis -- a form of reactive synthesis where the goals are given incrementally during execution. We propose a solution technique that efficiently performs incremental synthesis for multiple LTLf goals. We also consider an alternative solution technique based on LTLf formula progression.
- Score: 33.806490809038415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study incremental LTLf synthesis -- a form of reactive synthesis where the goals are given incrementally while in execution. In other words, the protagonist agent is already executing a strategy for a certain goal when it receives a new goal: at this point, the agent has to abandon the current strategy and synthesize a new strategy still fulfilling the original goal, which was given at the beginning, as well as the new goal, starting from the current instant. In this paper, we formally define the problem of incremental synthesis and study its solution. We propose a solution technique that efficiently performs incremental synthesis for multiple LTLf goals by leveraging auxiliary data structures constructed during automata-based synthesis. We also consider an alternative solution technique based on LTLf formula progression. We show that, in spite of the fact that formula progression can generate formulas that are exponentially larger than the original ones, their minimal automata remain bounded in size by that of the original formula. On the other hand, we show experimentally that, if implemented naively, i.e., by actually computing the automaton of the progressed LTLf formulas from scratch every time a new goal arrives, the solution based on formula progression is not competitive.
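The abstract contrasts automata-based synthesis with LTLf formula progression. As a point of reference, classical formula progression (in the style of Bacchus and Kabanza) can be sketched as follows; the tuple-based AST, helper names, and boolean simplifications are illustrative assumptions, not the paper's implementation, and end-of-trace handling for the Next operator is elided:

```python
# A minimal sketch of LTLf formula progression over one trace step.
# Formulas are toy tuple ASTs, e.g. ("until", ("atom", "a"), ("atom", "b")).
# A state is the set of atomic propositions true at the current instant.

def mk_and(l, r):
    # Conjunction with on-the-fly boolean simplification.
    if l is False or r is False: return False
    if l is True: return r
    if r is True: return l
    return ("and", l, r)

def mk_or(l, r):
    # Disjunction with on-the-fly boolean simplification.
    if l is True or r is True: return True
    if l is False: return r
    if r is False: return l
    return ("or", l, r)

def prog(f, state):
    """Progress formula f through one step; returns True, False, or a residual formula."""
    if f is True or f is False:
        return f
    op = f[0]
    if op == "atom":                 # p holds now iff p is in the current state
        return f[1] in state
    if op == "not":
        r = prog(f[1], state)
        return (not r) if isinstance(r, bool) else ("not", r)
    if op == "and":
        return mk_and(prog(f[1], state), prog(f[2], state))
    if op == "or":
        return mk_or(prog(f[1], state), prog(f[2], state))
    if op == "next":                 # X f  ->  f
        return f[1]
    if op == "until":                # f U g  ->  prog(g) or (prog(f) and f U g)
        return mk_or(prog(f[2], state), mk_and(prog(f[1], state), f))
    if op == "eventually":           # F f  ->  prog(f) or F f
        return mk_or(prog(f[1], state), f)
    if op == "always":               # G f  ->  prog(f) and G f
        return mk_and(prog(f[1], state), f)
    raise ValueError(f"unknown operator {op!r}")
```

The residual returned by `prog` is exactly what a naive progression-based solver would re-translate into an automaton at every step; the repeated disjunctions in the Until and Eventually cases are where the exponential blow-up mentioned in the abstract can arise.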
Related papers
- Semantically Labelled Automata for Multi-Task Reinforcement Learning with LTL Instructions [61.479946958462754]
We study multi-task reinforcement learning (RL), a setting in which an agent learns a single, universal policy. We present a novel task embedding technique leveraging a new generation of semantic translations to automata.
arXiv Detail & Related papers (2026-02-06T14:46:27Z) - Unlocking Symbol-Level Precoding Efficiency Through Tensor Equivariant Neural Network [84.22115118596741]
We propose an end-to-end deep learning (DL) framework with low inference complexity for symbol-level precoding. We show that the proposed framework captures substantial performance gains of optimal SLP, while achieving an approximately 80-times speedup over conventional methods.
arXiv Detail & Related papers (2025-10-02T15:15:50Z) - Directed Exploration in Reinforcement Learning from Linear Temporal Logic [59.707408697394534]
Linear temporal logic (LTL) is a powerful language for task specification in reinforcement learning. We show that the synthesized reward signal remains fundamentally sparse, making exploration challenging. We show how better exploration can be achieved by further leveraging the specification and casting its corresponding Limit Deterministic Büchi Automaton (LDBA) as a Markov reward process.
arXiv Detail & Related papers (2024-08-18T14:25:44Z) - On-the-fly Synthesis for LTL over Finite Traces: An Efficient Approach that Counts [20.14001970300658]
We present an on-the-fly framework for synthesis of Linear Temporal Logic over finite traces (LTLf) based on top-down deterministic automata construction.
arXiv Detail & Related papers (2024-08-14T06:52:58Z) - Double-Ended Synthesis Planning with Goal-Constrained Bidirectional Search [27.09693306892583]
We present a formulation of synthesis planning with starting material constraints.
We propose Double-Ended Synthesis Planning (DESP), a novel CASP algorithm under a bidirectional graph search scheme.
DESP can make use of existing one-step retrosynthesis models, and we anticipate its performance to scale as these one-step model capabilities improve.
arXiv Detail & Related papers (2024-07-08T18:56:00Z) - SynthesizRR: Generating Diverse Datasets with Retrieval Augmentation [55.2480439325792]
We study the synthesis of six datasets, covering topic classification, sentiment analysis, tone detection, and humor.
We find that SynthesizRR greatly improves lexical and semantic diversity, similarity to human-written text, and distillation performance.
arXiv Detail & Related papers (2024-05-16T12:22:41Z) - ExeDec: Execution Decomposition for Compositional Generalization in Neural Program Synthesis [54.18659323181771]
We characterize several different forms of compositional generalization that are desirable in program synthesis.
We propose ExeDec, a novel decomposition-based strategy that predicts execution subgoals to solve problems step-by-step informed by program execution at each step.
arXiv Detail & Related papers (2023-07-26T01:07:52Z) - Forward LTLf Synthesis: DPLL At Work [1.370633147306388]
We propose a new AND-OR graph search framework for synthesis of Linear Temporal Logic on finite traces (LTLf).
Within such framework, we devise a procedure inspired by the Davis-Putnam-Logemann-Loveland (DPLL) algorithm to generate the next available agent-environment moves.
We also propose a novel equivalence check for search nodes based on syntactic equivalence of state formulas.
arXiv Detail & Related papers (2023-02-27T14:33:50Z) - Synthesis from Satisficing and Temporal Goals [21.14507575500482]
An existing approach combines synthesis techniques from discounted synthesis with optimization for the DS rewards but has failed to yield a sound algorithm.
An alternative approach combining synthesis with satisficing DS rewards (rewards that achieve a threshold) is sound and complete for integer discount factors, but, in practice, a fractional discount factor is desired.
This work extends the existing satisficing approach, presenting the first sound algorithm for synthesis from DS rewards with fractional discount factors.
arXiv Detail & Related papers (2022-05-20T23:46:31Z) - Distributed Control using Reinforcement Learning with Temporal-Logic-Based Reward Shaping [0.2320417845168326]
We present a framework for synthesis of distributed control strategies for a heterogeneous team of robots in a partially observable environment.
Our approach formulates the synthesis problem as a game and employs a policy graph method to find a control strategy with memory for each agent.
We use the quantitative semantics of TLTL as the reward of the game, and further reshape it using the finite state automaton to guide and accelerate the learning process.
arXiv Detail & Related papers (2022-03-08T16:03:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.