PLANS: Robust Program Learning from Neurally Inferred Specifications
- URL: http://arxiv.org/abs/2006.03312v1
- Date: Fri, 5 Jun 2020 08:51:34 GMT
- Title: PLANS: Robust Program Learning from Neurally Inferred Specifications
- Authors: Raphaël Dang-Nhu
- Abstract summary: Rule-based approaches offer correctness guarantees in an unsupervised way, while neural models scale more readily to raw, high-dimensional input.
We introduce PLANS, a hybrid model for program synthesis from visual observations.
We obtain state-of-the-art performance at program synthesis from diverse demonstration videos in the Karel and ViZDoom environments.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have seen the rise of statistical program learning based on neural models as an alternative to traditional rule-based systems for programming by example. Rule-based approaches offer correctness guarantees in an unsupervised way as they inherently capture logical rules, while neural models scale more readily to raw, high-dimensional input and provide resistance to noisy I/O specifications. We introduce PLANS (Program LeArning from Neurally inferred Specifications), a hybrid model for program synthesis from visual observations that gets the best of both worlds, relying on (i) a neural architecture trained to extract abstract, high-level information from each raw individual input, and (ii) a rule-based system that uses the extracted information as I/O specifications to synthesize a program capturing the different observations. To address the key challenge of making PLANS resistant to noise in the network's output, we introduce a filtering heuristic for I/O specifications based on selective classification techniques. We obtain state-of-the-art performance at program synthesis from diverse demonstration videos in the Karel and ViZDoom environments, while requiring no ground-truth program for training. We make our implementation available at github.com/rdang-nhu/PLANS.
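To make the filtering heuristic concrete, here is a minimal, hypothetical Python sketch of selective classification applied to neurally inferred specifications, in the spirit of the abstract: the neural stage scores candidate abstract I/O specifications for each demonstration, and only specifications whose softmax confidence clears a threshold are passed on to the rule-based synthesizer. The names (extract_logits, synthesize) and the threshold tau are illustrative assumptions, not the API of the released implementation.

    import torch

    def filter_specs(extract_logits, demos, tau=0.9):
        # extract_logits: neural model mapping one raw demonstration to
        # logits over a discrete set of abstract I/O specifications.
        # Selective classification: keep a specification only when the
        # softmax response (max class probability) exceeds tau.
        kept = []
        for demo in demos:
            with torch.no_grad():
                probs = torch.softmax(extract_logits(demo), dim=-1)
                conf, spec = probs.max(dim=-1)
            if conf.item() >= tau:  # abstain on low-confidence outputs
                kept.append(int(spec))
        return kept

    # Hypothetical two-stage usage:
    #   specs = filter_specs(model, videos)   # neural stage + filtering
    #   program = synthesize(specs)           # rule-based synthesis stage

The design intent, as the abstract describes it, is that abstaining on low-confidence network outputs is what keeps the downstream rule-based synthesizer robust to noise in the inferred specifications.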
Related papers
- The Graph's Apprentice: Teaching an LLM Low Level Knowledge for Circuit Quality Estimation [34.37154877681809]
We introduce VeriDistill, the first end-to-end machine learning model that directly processes raw Verilog code to predict circuit quality-of-result metrics.
Our model employs a novel knowledge distillation method, transferring low-level circuit insights, encoded as graphs, into the LLM-based predictor.
Experiments show VeriDistill outperforms state-of-the-art baselines on large-scale Verilog datasets.
arXiv Detail & Related papers (2024-10-30T04:20:10Z)
- AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
arXiv Detail & Related papers (2024-10-16T06:47:53Z)
- CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning [92.36705236706678]
"CodeRL" is a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning.
During inference, we introduce a new generation procedure with a critical sampling strategy.
For the model backbones, we extend the encoder-decoder architecture of CodeT5 with enhanced learning objectives.
arXiv Detail & Related papers (2022-07-05T02:42:15Z)
- BF++: a language for general-purpose program synthesis [0.483420384410068]
Most state-of-the-art decision systems based on Reinforcement Learning (RL) are data-driven black-box neural models.
We propose a new programming language, BF++, designed specifically for automatic programming of agents in a Partially Observable Markov Decision Process setting.
arXiv Detail & Related papers (2021-01-23T19:44:44Z)
- NSL: Hybrid Interpretable Learning From Noisy Raw Data [66.15862011405882]
This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data.
NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics.
We demonstrate that NSL is able to learn robust rules from MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines.
arXiv Detail & Related papers (2020-12-09T13:02:44Z)
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Networks (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
- Optimal Neural Program Synthesis from Multimodal Specifications [45.35689345004124]
Multimodal program synthesis is an attractive way to scale program synthesis to challenging settings.
This paper proposes an optimal neural synthesis approach where the goal is to find a program that satisfies user-provided constraints.
arXiv Detail & Related papers (2020-10-04T20:51:21Z)
- Type-driven Neural Programming by Example [0.0]
We study programming by example (PBE), the task of finding a program that maps given inputs to given outputs.
We propose a way to incorporate programming types into a neural program synthesis approach for PBE.
arXiv Detail & Related papers (2020-08-28T12:30:05Z)
- Learning to learn generative programs with Memoised Wake-Sleep [52.439550543743536]
We study a class of neuro-symbolic generative models in which neural networks are used both for inference and as priors over symbolic, data-generating programs.
We propose the Memoised Wake-Sleep (MWS) algorithm, which extends Wake-Sleep by explicitly storing and reusing the best programs discovered by the inference network throughout training.
arXiv Detail & Related papers (2020-07-06T23:51:03Z)
- Synthetic Datasets for Neural Program Synthesis [66.20924952964117]
We propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications.
We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance.
arXiv Detail & Related papers (2019-12-27T21:28:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.