CounterExample Guided Neural Synthesis
- URL: http://arxiv.org/abs/2001.09245v1
- Date: Sat, 25 Jan 2020 01:11:53 GMT
- Title: CounterExample Guided Neural Synthesis
- Authors: Elizabeth Polgreen, Ralph Abboud, Daniel Kroening
- Abstract summary: Program synthesis is difficult, and methods that provide formal guarantees suffer from scalability issues.
We combine neural networks with formal reasoning to convert a logical specification into a sequence of examples that guides the neural network towards a correct solution.
Our results show that the formal reasoning based guidance improves the performance of the neural network substantially.
- Score: 12.742347465894586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Program synthesis is the generation of a program from a specification.
Correct synthesis is difficult, and methods that provide formal guarantees
suffer from scalability issues. On the other hand, neural networks are able to
generate programs from examples quickly but are unable to guarantee that the
program they output actually meets the logical specification. In this work we
combine neural networks with formal reasoning: using the latter to convert a
logical specification into a sequence of examples that guides the neural
network towards a correct solution, and to guarantee that any solution returned
satisfies the formal specification. We apply our technique to synthesising loop
invariants and compare the performance to existing solvers that use SMT and
existing techniques that use neural networks. Our results show that the formal
reasoning based guidance improves the performance of the neural network
substantially, nearly doubling the number of benchmarks it can solve.
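To make the loop concrete, below is a minimal sketch of a counterexample-guided synthesis loop of the kind described above, specialised to loop invariants. A template enumerator stands in for the paper's neural network and a brute-force check over a finite domain stands in for the SMT solver, so every name and constant is illustrative rather than the authors' implementation.

```python
# Counterexample-guided synthesis sketch (illustrative, not the paper's code).
# Target: an invariant I(x) for  { x = 0; while x < 5: x += 1 }  that proves
# x == 5 on exit. I must satisfy three verification conditions:
#   init:      x == 0          =>  I(x)
#   preserve:  I(x) and x < 5  =>  I(x + 1)
#   suffice:   I(x) and x >= 5 =>  x == 5

DOMAIN = range(-10, 20)  # finite domain standing in for full SMT reasoning

def check_point(inv, x):
    init = (x != 0) or inv(x)
    preserve = not (inv(x) and x < 5) or inv(x + 1)
    suffice = not (inv(x) and x >= 5) or x == 5
    return init and preserve and suffice

def verify(inv):
    """The verifier: return a counterexample state, or None if correct."""
    return next((x for x in DOMAIN if not check_point(inv, x)), None)

def propose(examples):
    """The candidate generator (a neural network in the paper): return the
    first threshold template consistent with the collected examples."""
    for c in range(10):
        inv = lambda x, c=c: 0 <= x <= c
        if all(check_point(inv, x) for x in examples):
            return inv, c
    raise RuntimeError("template space exhausted")

examples = []  # examples distilled from counterexamples guide the generator
while True:
    inv, c = propose(examples)
    cex = verify(inv)
    if cex is None:
        print(f"verified invariant: 0 <= x <= {c}")
        break
    examples.append(cex)
```

Each counterexample shrinks the space of candidates the generator may propose, which is the guidance mechanism the abstract refers to.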
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
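As a rough illustration of that representation, the sketch below encodes a plain MLP as nodes (neurons) and weighted edges; the encoding details are an assumption for illustration, not the paper's exact scheme.

```python
import numpy as np

def mlp_to_graph(weights):
    """weights: list of (out_dim, in_dim) matrices of consecutive layers."""
    layer_sizes = [weights[0].shape[1]] + [w.shape[0] for w in weights]
    starts = np.cumsum([0] + layer_sizes[:-1])  # first node id of each layer
    nodes = [(starts[l] + i, l) for l, n in enumerate(layer_sizes) for i in range(n)]
    edges = [(starts[l] + j, starts[l + 1] + i, w[i, j])
             for l, w in enumerate(weights)
             for i in range(w.shape[0]) for j in range(w.shape[1])]
    return nodes, edges  # nodes: (id, layer); edges: (src, dst, weight)

rng = np.random.default_rng(0)
nodes, edges = mlp_to_graph([rng.normal(size=(4, 3)), rng.normal(size=(2, 4))])
print(len(nodes), "neurons,", len(edges), "weighted edges")  # 9 neurons, 20 edges
```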
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- Return of the RNN: Residual Recurrent Networks for Invertible Sentence Embeddings [0.0]
This study presents a novel model for invertible sentence embeddings using a residual recurrent network trained on an unsupervised encoding task.
Rather than the probabilistic outputs common to neural machine translation models, our approach employs a regression-based output layer to reconstruct the input sequence's word vectors.
The model achieves high accuracy and fast training with the Adam optimizer, a significant finding given that RNNs typically require memory units, such as LSTMs, or second-order optimization methods.
arXiv Detail & Related papers (2023-03-23T15:59:06Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
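The sketch below shows the standard building block behind such layers, a DeepSets-style permutation-equivariant linear map; it is illustrative and not the paper's exact architecture.

```python
import numpy as np

def equivariant_linear(x, w_self, w_agg, b):
    """x: (n, d) set of n feature rows; output: (n, k). Permuting the n rows
    of x permutes the n output rows identically, because each output mixes
    only its own row and a row-independent mean."""
    return x @ w_self + x.mean(axis=0, keepdims=True) @ w_agg + b

rng = np.random.default_rng(0)
n, d, k = 5, 3, 2
x = rng.normal(size=(n, d))
w_self, w_agg, b = rng.normal(size=(d, k)), rng.normal(size=(d, k)), rng.normal(size=k)

perm = rng.permutation(n)
out_then_perm = equivariant_linear(x, w_self, w_agg, b)[perm]
perm_then_out = equivariant_linear(x[perm], w_self, w_agg, b)
print(np.allclose(out_then_perm, perm_then_out))  # True: equivariance holds
```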
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
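A hedged sketch of such a layer, in which the frequency scale w0 plays the role of the adjustable bandwidth; the initialization constants are illustrative.

```python
import numpy as np

def sinusoidal_layer(x, w, b, w0):
    # SIREN-style activation: sin(w0 * (Wx + b)); larger w0 admits
    # higher-frequency content, i.e. a wider kernel bandwidth
    return np.sin(w0 * (x @ w + b))

def init_layer(in_dim, out_dim, rng):
    # uniform init in [-1/in_dim, 1/in_dim], a common choice for first layers
    bound = 1.0 / in_dim
    return rng.uniform(-bound, bound, size=(in_dim, out_dim)), np.zeros(out_dim)

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200).reshape(-1, 1)
w, b = init_layer(1, 16, rng)
low_pass = sinusoidal_layer(x, w, b, w0=1.0)    # smooth, low-frequency features
high_freq = sinusoidal_layer(x, w, b, w0=30.0)  # rapidly oscillating features
print(low_pass.shape, high_freq.shape)
```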
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
- Neural Combinatorial Logic Circuit Synthesis from Input-Output Examples [10.482805367361818]
We propose a novel, fully explainable neural approach to the synthesis of logic circuits from input-output examples.
Our method can be employed for virtually any choice of atoms.
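For intuition about the underlying task, the brute-force baseline below searches a tiny gate library for a circuit consistent with input-output examples; it is a stand-in for the paper's neural method, not a rendering of it.

```python
GATES = {"AND": lambda a, b: bool(a and b), "OR": lambda a, b: bool(a or b),
         "XOR": lambda a, b: a != b, "NAND": lambda a, b: not (a and b)}

# examples: ((x0, x1), y) pairs for the target function x0 XOR x1
examples = [((a, b), a != b) for a in (0, 1) for b in (0, 1)]

# enumerate the one-gate circuits and keep any consistent with all examples
for name, gate in GATES.items():
    if all(gate(x[0], x[1]) == bool(y) for x, y in examples):
        print("synthesised one-gate circuit:", name)  # XOR
```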
arXiv Detail & Related papers (2022-10-29T14:06:42Z)
- Lifted Bregman Training of Neural Networks [28.03724379169264]
We introduce a novel mathematical formulation for the training of feed-forward neural networks with (potentially non-smooth) proximal maps as activation functions.
This formulation is based on Bregman distances, and a key advantage is that its partial derivatives with respect to the network's parameters do not require the computation of derivatives of the network's activation functions.
We present several numerical results that demonstrate that these training approaches can be equally well or even better suited for the training of neural network-based classifiers and (denoising) autoencoders with sparse coding.
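For reference, the Bregman distance generated by a convex function \phi is the standard quantity below (notation ours, not the paper's):

```latex
\[
  D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y \rangle .
\]
```

Per the summary above, the formulation is arranged so that activation functions enter the parameter gradients only through proximal maps, never through their own derivatives.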
arXiv Detail & Related papers (2022-08-18T11:12:52Z)
- Toward Neural-Network-Guided Program Synthesis and Verification [26.706421573322952]
We propose a novel framework for program and invariant synthesis called neural-network-guided synthesis.
We first show that, by designing and training neural networks, we can extract logical formulas over integers from the weights and biases of the trained networks.
Based on this idea, we have implemented a tool to synthesize formulas from positive/negative examples and implication constraints.
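A toy version of the extraction step: fit a linear unit to positive/negative examples, then read its weights and bias off as a candidate formula over integers. The perceptron and the rounding heuristic are illustrative stand-ins for the paper's method.

```python
import numpy as np

pos = [(1, 2), (2, 3), (3, 5)]        # examples where the predicate holds
neg = [(2, 1), (4, 2), (5, 4)]        # examples where it does not

# train a perceptron: find w, b with w.x + b > 0 on pos and < 0 on neg
w, b = np.zeros(2), 0.0
for _ in range(100):
    for x, label in [(p, 1) for p in pos] + [(n, -1) for n in neg]:
        if label * (np.dot(w, x) + b) <= 0:
            w += label * np.array(x, dtype=float)
            b += label

# round to small integer coefficients and print the extracted formula
wi, bi = np.round(w).astype(int), int(round(b))
print(f"extracted candidate: {wi[0]}*x + {wi[1]}*y + {bi} > 0")
```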
arXiv Detail & Related papers (2021-03-17T03:09:05Z)
- Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing equal or stronger guarantees.
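The sketch below illustrates the verifier's half of such a loop, with random sampling standing in for the symbolic verifier and a fixed quadratic standing in for the learner's network; the dynamics and all names are illustrative.

```python
import numpy as np

A = np.array([[-1.0, 0.5], [0.0, -0.5]])  # stable linear dynamics dx/dt = A x
P = np.eye(2)                              # candidate V(x) = x^T P x

def V(x):    return x @ P @ x
def Vdot(x): return 2 * x @ P @ (A @ x)    # d/dt V(x) along the flow

def find_counterexample(n_samples=10000, seed=0):
    """Search for a state violating positivity of V or negativity of Vdot."""
    rng = np.random.default_rng(seed)
    for x in rng.uniform(-1, 1, size=(n_samples, 2)):
        if np.linalg.norm(x) > 1e-6 and (V(x) <= 0 or Vdot(x) >= 0):
            return x                       # hand back to the learner
    return None

print("counterexample:", find_counterexample())  # None: candidate passes sampling
```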
arXiv Detail & Related papers (2020-03-19T17:21:02Z)
- Towards Neural-Guided Program Synthesis for Linear Temporal Logic Specifications [26.547133495699093]
We use a neural network to learn a Q-function that is then used to guide search, and to construct programs that are subsequently verified for correctness.
Our method is unique in combining search with deep learning to realize synthesis.
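A toy rendering of that idea: best-first search expands the partial program a scoring function ranks highest, and complete candidates are verified. The score and the string-equality check below merely stand in for the learned Q-function and the LTL verifier.

```python
import heapq

TOKENS = "GFXa&"                    # toy alphabet of "program" tokens
TARGET = "GFa"                      # stands in for a spec-satisfying program

def q_score(partial):
    """Stand-in Q-function: positionally matching tokens, negated for the min-heap."""
    return -sum(1 for a, b in zip(partial, TARGET) if a == b)

def verify(program):
    return program == TARGET        # stands in for an LTL model check

def synthesize(max_len=3):
    frontier = [(0, "")]
    while frontier:
        _, partial = heapq.heappop(frontier)   # expand the best-scored candidate
        if verify(partial):
            return partial
        if len(partial) < max_len:
            for t in TOKENS:
                cand = partial + t
                heapq.heappush(frontier, (q_score(cand), cand))
    return None

print("synthesised:", synthesize())  # GFa
```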
arXiv Detail & Related papers (2019-12-31T17:09:49Z)
- Synthetic Datasets for Neural Program Synthesis [66.20924952964117]
We propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications.
We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance.
arXiv Detail & Related papers (2019-12-27T21:28:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.