Towards Neural-Guided Program Synthesis for Linear Temporal Logic
Specifications
- URL: http://arxiv.org/abs/1912.13430v1
- Date: Tue, 31 Dec 2019 17:09:49 GMT
- Title: Towards Neural-Guided Program Synthesis for Linear Temporal Logic
Specifications
- Authors: Alberto Camacho, Sheila A. McIlraith
- Abstract summary: We use a neural network to learn a Q-function that is then used to guide search, and to construct programs that are subsequently verified for correctness.
Our method is unique in combining search with deep learning to realize synthesis.
- Score: 26.547133495699093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesizing a program that realizes a logical specification is a classical
problem in computer science. We examine a particular type of program synthesis,
where the objective is to synthesize a strategy that reacts to a potentially
adversarial environment while ensuring that all executions satisfy a Linear
Temporal Logic (LTL) specification. Unfortunately, exact methods to solve
so-called LTL synthesis via logical inference do not scale. In this work, we
cast LTL synthesis as an optimization problem. We employ a neural network to
learn a Q-function that is then used to guide search, and to construct programs
that are subsequently verified for correctness. Our method is unique in
combining search with deep learning to realize LTL synthesis. In our
experiments the learned Q-function provides effective guidance for synthesis
problems with relatively small specifications.
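The core loop described in the abstract — a learned Q-function scoring candidate program extensions, with a verifier confirming correctness — can be sketched as a best-first search. This is a minimal illustrative sketch, not the paper's implementation: the action set, the `q_value` heuristic, and the `verify` check below are toy stand-ins for the neural Q-function and the LTL model checker.

```python
import heapq

def synthesize(actions, q_value, verify, max_len=5):
    """Best-first search over action sequences, guided by a learned
    Q-function (here a stand-in heuristic). Candidates popped from the
    frontier are verified; the first verified program is returned."""
    # Frontier entries: (negated score, program-so-far); heapq is a min-heap.
    frontier = [(0.0, [])]
    while frontier:
        neg_score, program = heapq.heappop(frontier)
        if verify(program):
            return program          # verified correct: stop
        if len(program) == max_len:
            continue                # prune programs that hit the length bound
        for a in actions:
            # The Q-function scores how promising each extension is.
            heapq.heappush(frontier, (-q_value(program, a), program + [a]))
    return None                     # no verified program within the bound

# Toy instance: find an action sequence whose sum is 6 using steps {1, 2, 3}.
actions = [1, 2, 3]
q_value = lambda prog, a: a            # toy heuristic: prefer larger steps
verify = lambda prog: sum(prog) == 6
print(synthesize(actions, q_value, verify))  # → [3, 3]
```

The key design point mirrored from the abstract is the separation of concerns: the (possibly unreliable) learned guidance only orders the search, while a sound verifier has the final word on correctness.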
Related papers
- Predictable and Performant Reactive Synthesis Modulo Theories via Functional Synthesis [1.1797787239802762]
We show how to generate correct controllers from temporal logic specifications using classical reactive synthesis, which handles propositional LTL as a specification language.
We also show that our method allows optimized responses, in the sense that the controller can adjust its outputs in order to always provide the smallest safe values.
arXiv Detail & Related papers (2024-07-12T15:23:27Z)
- Guiding Enumerative Program Synthesis with Large Language Models [15.500250058226474]
In this paper, we evaluate the abilities of Large Language Models to solve formal synthesis benchmarks.
For the cases where one-shot synthesis fails, we propose a novel enumerative synthesis algorithm.
We find that GPT-3.5 as a stand-alone tool for formal synthesis is easily outperformed by state-of-the-art formal synthesis algorithms.
arXiv Detail & Related papers (2024-03-06T19:13:53Z)
- Synthesizing Efficiently Monitorable Formulas in Metric Temporal Logic [4.60607942851373]
We consider the problem of automatically synthesizing formal specifications from system executions.
Most of the classical approaches for synthesizing temporal logic formulas aim at minimizing the size of the formula.
We formalize this notion and devise a learning algorithm that synthesizes concise formulas having bounded lookahead.
arXiv Detail & Related papers (2023-10-26T14:13:15Z)
- When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose the complexity-impacted reasoning score (CIRS) to measure the correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity.
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
arXiv Detail & Related papers (2023-08-29T17:22:39Z)
- ExeDec: Execution Decomposition for Compositional Generalization in Neural Program Synthesis [54.18659323181771]
We characterize several different forms of compositional generalization that are desirable in program synthesis.
We propose ExeDec, a novel decomposition-based strategy that predicts execution subgoals to solve problems step by step, informed by program execution at each step.
arXiv Detail & Related papers (2023-07-26T01:07:52Z)
- Linear Temporal Logic Modulo Theories over Finite Traces (Extended Version) [72.38188258853155]
This paper studies an extension of Linear Temporal Logic over Finite Traces (LTLf) in which
proposition letters are replaced with first-order formulas interpreted over arbitrary theories.
The resulting logic, called LTL Modulo Theories over Finite Traces (LTLfMT), has a semi-decidable satisfiability problem.
arXiv Detail & Related papers (2022-04-28T17:57:33Z)
- Neural Circuit Synthesis from Specification Patterns [5.7923858184309385]
We train hierarchical Transformers on the task of synthesizing hardware circuits directly from high-level logical specifications.
New machine learning approaches might open many possibilities in this area, but they suffer from a lack of sufficient training data; we address this by generating synthetic training data from specification patterns.
We show that hierarchical Transformers trained on this synthetic data solve a significant portion of problems from the synthesis competitions.
arXiv Detail & Related papers (2021-07-25T18:17:33Z)
- Synthesize, Execute and Debug: Learning to Repair for Neural Program Synthesis [81.54148730967394]
We propose SED, a neural program generation framework that incorporates synthesis, execution, and debug stages.
SED first produces initial programs using the neural program synthesizer component, then utilizes a neural program debugger to iteratively repair the generated programs.
On Karel, a challenging input-output program synthesis benchmark, SED reduces the error rate of the neural program synthesizer itself by a considerable margin, and outperforms the standard beam search for decoding.
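The synthesize-execute-debug pipeline described above can be sketched as a simple repair loop. This is an illustrative toy, not SED's neural implementation: the `synthesize`, `execute`, and `debug` stand-ins below replace the neural synthesizer and debugger with trivial functions over linear programs.

```python
def sed_loop(synthesize, execute, debug, examples, max_repairs=3):
    """Synthesize-Execute-Debug loop: propose a program, run it on the
    I/O examples, and hand any failures to a debugger for repair."""
    program = synthesize(examples)
    for _ in range(max_repairs + 1):
        failures = [(x, y) for x, y in examples if execute(program, x) != y]
        if not failures:
            return program                  # all examples pass
        program = debug(program, failures)  # repair using the failing cases
    return None                             # repair budget exhausted

# Toy stand-ins: a "program" is a (scale, offset) pair computing scale*x + offset.
examples = [(0, 1), (1, 3), (2, 5)]            # target behavior: 2*x + 1
synthesize = lambda ex: (2, 0)                 # flawed initial guess: 2*x + 0
execute = lambda prog, x: prog[0] * x + prog[1]
# "Debugger": correct the offset using the first failing example.
debug = lambda prog, fails: (prog[0], fails[0][1] - prog[0] * fails[0][0])
print(sed_loop(synthesize, execute, debug, examples))  # → (2, 1)
```

The point the sketch captures is architectural: execution feedback on failing examples drives the repair stage, rather than restarting synthesis from scratch.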
arXiv Detail & Related papers (2020-07-16T04:15:47Z)
- Physarum Powered Differentiable Linear Programming Layers and Applications [48.77235931652611]
We propose an efficient and differentiable solver for general linear programming problems.
We show the use of our solver in a video segmentation task and meta-learning for few-shot learning.
arXiv Detail & Related papers (2020-04-30T01:50:37Z)
- CounterExample Guided Neural Synthesis [12.742347465894586]
Program synthesis is difficult and methods that provide formal guarantees suffer from scalability issues.
We combine neural networks with formal reasoning to convert a logical specification into a sequence of examples that guides the neural network towards a correct solution.
Our results show that the formal reasoning based guidance improves the performance of the neural network substantially.
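The counterexample-guided interplay between a candidate generator and formal reasoning can be sketched as a CEGIS-style loop. This is a toy sketch under stated assumptions, not the paper's method: the neural network is replaced by plain enumeration over `candidates`, and `find_counterexample` stands in for a formal checker.

```python
def cegis(candidates, spec_holds, find_counterexample):
    """Counterexample-guided loop: propose a candidate, ask a checker for
    a counterexample, and use accumulated examples to filter proposals."""
    examples = []
    for cand in candidates:
        # Skip candidates already refuted by previously collected examples.
        if not all(spec_holds(cand, x) for x in examples):
            continue
        cex = find_counterexample(cand)   # formal-checker stand-in
        if cex is None:
            return cand                   # no counterexample: verified
        examples.append(cex)              # learn from the failure
    return None

# Toy spec: find a coefficient c such that c * x >= x for all x in 0..9.
spec_holds = lambda c, x: c * x >= x
def find_counterexample(c):
    return next((x for x in range(10) if not spec_holds(c, x)), None)

print(cegis([-1, 0, 2], spec_holds, find_counterexample))  # → 2
```

As in the abstract, the formal reasoner does double duty: it certifies correct candidates and converts incorrect ones into concrete examples that guide the generator.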
arXiv Detail & Related papers (2020-01-25T01:11:53Z)
- Certified Reinforcement Learning with Logic Guidance [78.2286146954051]
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability.
arXiv Detail & Related papers (2019-02-02T20:09:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.