BF++: a language for general-purpose program synthesis
- URL: http://arxiv.org/abs/2101.09571v3
- Date: Thu, 18 Feb 2021 20:24:02 GMT
- Title: BF++: a language for general-purpose program synthesis
- Authors: Vadim Liventsev, Aki Härmä and Milan Petković
- Abstract summary: Most state-of-the-art decision systems based on Reinforcement Learning (RL) are data-driven black-box neural models.
We propose a new programming language, BF++, designed specifically for automatic programming of agents in a Partially Observable Markov Decision Process setting.
- Score: 0.483420384410068
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most state-of-the-art decision systems based on Reinforcement Learning (RL)
are data-driven black-box neural models, where it is often difficult to
incorporate expert knowledge into the models or let experts review and validate
the learned decision mechanisms. Knowledge-insertion and model review are
important requirements in many applications involving human health and safety.
One way to bridge the gap between data and knowledge driven systems is program
synthesis: replacing a neural network that outputs decisions with a symbolic
program generated by a neural network or by means of genetic programming. We
propose a new programming language, BF++, designed specifically for automatic
programming of agents in a Partially Observable Markov Decision Process (POMDP)
setting and apply neural program synthesis to solve standard OpenAI Gym
benchmarks.
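The abstract does not spell out BF++'s command set, so the sketch below only illustrates the general idea: a Brainfuck-style interpreter wired into a Gym-style observation/action loop, where `,` reads the current observation into the tape and `.` emits the current cell as an action. These command semantics and the toy environment are assumptions for illustration, not the paper's specification.

```python
# Minimal sketch of a Brainfuck-style agent loop for a POMDP. The real
# BF++ syntax is defined in the paper, not in this abstract; here `,`
# (read observation) and `.` (emit action) are illustrative stand-ins.

class ToyEnv:
    """Trivial 1-D environment: the observation is the distance to a goal."""
    def __init__(self):
        self.pos, self.goal = 0, 5

    def reset(self):
        self.pos = 0
        return self.goal - self.pos          # observation

    def step(self, action):
        self.pos += 1 if action > 0 else -1  # action: move right or left
        obs = self.goal - self.pos
        done = obs == 0
        return obs, (1.0 if done else 0.0), done


def match_brackets(code):
    """Precompute matching [ ] jump targets."""
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    return jumps


def run_program(code, env, max_steps=100):
    """Interpret a Brainfuck-like program as a reactive POMDP policy."""
    tape, ptr, pc = [0] * 32, 0, 0
    obs, total_reward, steps = env.reset(), 0.0, 0
    jumps = match_brackets(code)
    while pc < len(code) and steps < max_steps:
        op = code[pc]
        if op == '>':   ptr = (ptr + 1) % len(tape)
        elif op == '<': ptr = (ptr - 1) % len(tape)
        elif op == '+': tape[ptr] += 1
        elif op == '-': tape[ptr] -= 1
        elif op == ',': tape[ptr] = obs      # read observation into tape
        elif op == '.':                      # emit tape cell as an action
            obs, reward, done = env.step(tape[ptr])
            total_reward += reward
            steps += 1
            if done:
                break
        elif op == '[' and tape[ptr] == 0: pc = jumps[pc]
        elif op == ']' and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return total_reward


# A candidate program a synthesizer might score: read the observation,
# then loop, acting and re-reading until the episode ends.
print(run_program(",[+.,]", env=ToyEnv()))
```

A synthesizer, neural or genetic, would search over such token strings and score each candidate by the reward the rollout returns.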
Related papers
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
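The summary does not give NeuRLP's exact formulation, but the underlying reduction can be illustrated with a toy: discretize a linear ODE with forward Euler and hand the resulting equality constraints to an off-the-shelf LP solver. The formulation below is an assumption for illustration, not the paper's construction.

```python
# Toy illustration of casting a linear ODE into a linear program:
# forward-Euler residuals of du/dt = a*u become equality constraints,
# and scipy's LP solver recovers the trajectory.
import numpy as np
from scipy.optimize import linprog

a, h, T = -0.5, 0.1, 20          # ODE coefficient, step size, number of steps
n = T + 1                        # unknowns: u_0 .. u_T

# Equality constraints: u_{t+1} - (1 + h*a) * u_t = 0, plus u_0 = 1.
A_eq = np.zeros((T + 1, n))
b_eq = np.zeros(T + 1)
for t in range(T):
    A_eq[t, t] = -(1 + h * a)
    A_eq[t, t + 1] = 1.0
A_eq[T, 0] = 1.0                 # initial-condition row
b_eq[T] = 1.0                    # u(0) = 1

# Zero objective: any feasible point is the (unique) Euler trajectory.
res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n)
print(res.x[:5])                 # ~ exp(a*t) sampled at t = 0, 0.1, ...
```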
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- COOL: A Constraint Object-Oriented Logic Programming Language and its Neural-Symbolic Compilation System [0.0]
We introduce the COOL programming language, which seamlessly combines logical reasoning with neural network technologies.
COOL is engineered to autonomously handle data collection, mitigating the need for user-supplied initial data.
It incorporates user prompts into the coding process to reduce the risk of undertraining and to enhance interaction among models throughout their lifecycle.
arXiv Detail & Related papers (2023-11-07T06:29:59Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine the interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning [50.40151403246205]
Huge language models (LMs) have ushered in a new era for AI, serving as a gateway to natural-language-based knowledge tasks.
We define a flexible architecture with multiple neural models, complemented by discrete knowledge and reasoning modules.
We describe this neuro-symbolic architecture, dubbed the Modular Reasoning, Knowledge and Language (MRKL) system.
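As a rough illustration of the routing idea, the sketch below dispatches arithmetic queries to an exact symbolic module and everything else to a stand-in for a neural LM. The router heuristic and module set are assumptions, not the MRKL implementation.

```python
# Minimal sketch of neuro-symbolic routing: arithmetic goes to a
# discrete calculator module, everything else falls back to an LM.
import re

def calculator(expression: str) -> str:
    """Discrete, exact arithmetic module."""
    # Restrict to digits/operators before eval to keep this sketch safe.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
        raise ValueError("not an arithmetic expression")
    return str(eval(expression))

def neural_lm(query: str) -> str:
    """Stand-in for a large language model call."""
    return f"<LM answer for: {query!r}>"

def mrkl_route(query: str) -> str:
    """Route arithmetic spans to the symbolic module, the rest to the LM."""
    match = re.search(r"[\d\s+\-*/().]{3,}", query)
    if match and any(op in match.group() for op in "+-*/"):
        return calculator(match.group().strip())
    return neural_lm(query)

print(mrkl_route("What is 123 * 4 + 7?"))   # -> 499 via the calculator
print(mrkl_route("Who wrote Hamlet?"))      # -> routed to the LM
```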
arXiv Detail & Related papers (2022-05-01T11:01:28Z)
- A Conversational Paradigm for Program Synthesis [110.94409515865867]
We propose a conversational program synthesis approach via large language models.
We train a family of large language models, called CodeGen, on natural language and programming language data.
Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm.
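The CodeGen checkpoints are publicly released; a minimal single-turn invocation via the Hugging Face transformers library might look like the sketch below (assuming the Salesforce/codegen-350M-mono checkpoint). The conversational paradigm corresponds to iteratively appending user feedback to the prompt across turns.

```python
# Single-turn completion with a released CodeGen checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-mono"   # smallest Python-tuned model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "# a function that returns the n-th Fibonacci number\ndef fib(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```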
arXiv Detail & Related papers (2022-03-25T06:55:15Z)
- Understanding Neural Code Intelligence Through Program Simplification [3.9704927572880253]
We propose a model-agnostic approach to identify critical input features for models in code intelligence systems.
Our approach, SIVAND, uses simplification techniques that reduce the size of the input programs given to a CI model.
We believe that SIVAND's extracted features may help understand neural CI systems' predictions and learned behavior.
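The general prediction-preserving reduction idea can be sketched with a simpler greedy line remover than the paper's technique: drop lines of the input program as long as a frozen model keeps its original prediction. Here `predict` is a stand-in for any code-intelligence model.

```python
# Generic sketch of prediction-preserving program reduction.

def reduce_program(source: str, predict) -> str:
    """Return a smaller program on which `predict` agrees with the original."""
    target = predict(source)
    lines = source.splitlines()
    changed = True
    while changed:
        changed = False
        for i in range(len(lines)):
            candidate = lines[:i] + lines[i + 1:]
            try:
                if predict("\n".join(candidate)) == target:
                    lines = candidate       # this line was irrelevant
                    changed = True
                    break
            except Exception:
                pass                        # model rejected malformed input
    return "\n".join(lines)

# Toy model: "predicts" whether a program performs file I/O.
toy_predict = lambda src: "io" if "open(" in src else "pure"
program = "x = 1\nf = open('log.txt')\ny = x + 2\nprint(y)"
print(reduce_program(program, toy_predict))   # -> "f = open('log.txt')"
```

The surviving lines are exactly the input features the model's prediction depends on, which is the kind of evidence the authors use to interpret neural CI systems.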
arXiv Detail & Related papers (2021-06-07T05:44:29Z)
- Neurogenetic Programming Framework for Explainable Reinforcement Learning [0.483420384410068]
We propose a novel method that combines both approaches using the concept of a virtual neuro-genetic programmer.
We demonstrate its ability to provide performant and explainable solutions for various OpenAI Gym tasks, as well as inject expert knowledge into the otherwise data-driven search for solutions.
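A minimal sketch of such a neuro-genetic loop is given below: genetic-programming mutation and selection over token strings, with a stubbed neural prior biasing which tokens mutations insert. All components are illustrative assumptions, not the paper's method.

```python
# Genetic-programming loop with a (stubbed) neural mutation prior.
import random

TOKENS = list("+-<>[],.")

def neural_prior(_program: str) -> list:
    """Stub for a learned model scoring which token to insert next."""
    return [1.0] * len(TOKENS)              # uniform = uninformed prior

def mutate(program: str) -> str:
    weights = neural_prior(program)
    token = random.choices(TOKENS, weights=weights)[0]
    pos = random.randrange(len(program) + 1)
    return program[:pos] + token + program[pos:]

def evolve(fitness, generations=50, pop_size=20):
    population = ["," for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]    # truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

# Toy fitness: prefer short programs that read input and act at least once.
toy_fitness = lambda p: (',' in p) + ('.' in p) - 0.01 * len(p)
print(evolve(toy_fitness))
```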
arXiv Detail & Related papers (2021-02-08T14:26:02Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- On the Generalizability of Neural Program Models with respect to Semantic-Preserving Program Transformations [25.96895574298886]
We evaluate the generalizability of neural program models with respect to semantic-preserving transformations.
We use three Java datasets of different sizes and three state-of-the-art neural network models for code.
Our results suggest that neural program models based on data and control dependencies in programs generalize better than neural program models based only on abstract syntax trees.
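One member of the family of semantic-preserving transformations, consistent variable renaming, can be shown concretely. The paper evaluates Java models; the sketch below uses Python's ast module purely to keep the illustration short.

```python
# Semantic-preserving transformation: consistently rename user variables
# to uninformative fresh names, leaving program behavior unchanged.
import ast, builtins

class RenameVariables(ast.NodeTransformer):
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if hasattr(builtins, node.id):      # leave built-ins like print alone
            return node
        if node.id not in self.mapping:
            self.mapping[node.id] = f"var{len(self.mapping)}"
        node.id = self.mapping[node.id]
        return node

source = "total = price * quantity\nprint(total)"
tree = ast.fix_missing_locations(RenameVariables().visit(ast.parse(source)))
print(ast.unparse(tree))   # -> var0 = var1 * var2 ; print(var0)
```

A robust neural program model should predict the same label for the original and transformed programs; the gap between the two is what the study measures.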
arXiv Detail & Related papers (2020-07-31T20:39:20Z)
- PLANS: Robust Program Learning from Neurally Inferred Specifications [0.0]
Rule-based approaches offer correctness guarantees in an unsupervised way, while neural models are more realistically scalable to raw, high-dimensional input.
We introduce PLANS, a hybrid model for program synthesis from visual observations.
We obtain state-of-the-art performance at program synthesis from diverse demonstration videos in the Karel and ViZDoom environments.
arXiv Detail & Related papers (2020-06-05T08:51:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.