NP4G : Network Programming for Generalization
- URL: http://arxiv.org/abs/2212.11118v1
- Date: Thu, 8 Dec 2022 06:18:44 GMT
- Title: NP4G : Network Programming for Generalization
- Authors: Shoichiro Hara, Yuji Watanabe
- Abstract summary: We propose NP4G: Network Programming for Generalization, which can automatically generate programs by inductive inference.
As an example, we show that bitwise NOT operation programs are acquired in a comparatively short time and in about 7 out of 10 runs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic programming has long been studied through various approaches, including genetic programming. In recent years, automatic programming using neural networks such as GPT-3 has been actively studied and is attracting much attention. However, these methods infer from experience accumulated through large-scale learning rather than from logic, and their reasoning process is unclear. Even among methods based on logical inference, whose reasoning process is transparent, no system that can automatically generate arbitrary programs has yet been realized. In particular, inductive inference that generalizes from one example by logical inference is an important capability if artificial intelligence is to acquire knowledge by itself. In this study, we propose NP4G: Network Programming for Generalization, which can automatically generate programs by inductive inference. Because the proposed method can realize "sequence", "selection", and "iteration" and thus satisfies the conditions of the structured program theorem, NP4G is expected to be able to acquire arbitrary programs automatically by inductive inference. As an example, we automatically construct a bitwise NOT operation program from several training data by generalization using NP4G. Although NP4G only randomly selects and connects nodes, we show that, by adjusting the number of nodes and the number of phases of "Phased Learning", bitwise NOT operation programs are acquired in a comparatively short time and in about 7 out of 10 runs. The source code of NP4G is available on GitHub as a public repository.
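The abstract does not spell out the search procedure, but its core loop (randomly composing primitive nodes and keeping a candidate only if it reproduces every training pair) can be illustrated with a minimal sketch. The primitive set, the linear program encoding, and the `search` routine below are illustrative assumptions rather than the actual NP4G implementation, and the sketch covers only the "sequence" construct, not "selection" or "iteration":

```python
import random

# Toy stand-in for NP4G-style inductive inference: randomly compose primitive
# nodes and keep a candidate program only if it reproduces every training pair.
# The primitive set and the linear program encoding are illustrative assumptions.
PRIMITIVES = {
    "xor_ff": lambda x: x ^ 0xFF,          # flip all 8 bits
    "add_1":  lambda x: (x + 1) & 0xFF,
    "sub_1":  lambda x: (x - 1) & 0xFF,
    "shl_1":  lambda x: (x << 1) & 0xFF,
    "shr_1":  lambda x: x >> 1,
}

# Training pairs for 8-bit bitwise NOT, the paper's example task.
TRAIN = [(x, ~x & 0xFF) for x in (0b00000000, 0b10101010, 0b11110000, 0b00111100)]

def run(program, x):
    for name in program:                    # "sequence": apply nodes in order
        x = PRIMITIVES[name](x)
    return x

def search(max_len=3, tries=100_000, seed=0):
    rng = random.Random(seed)
    names = list(PRIMITIVES)
    for _ in range(tries):
        program = [rng.choice(names) for _ in range(rng.randint(1, max_len))]
        if all(run(program, x) == y for x, y in TRAIN):
            return program                  # reproduces all training examples
    return None

print(search())  # e.g. ['xor_ff']
```

On this task such a search usually succeeds quickly because a single `xor_ff` node already reproduces every training pair; NP4G itself searches over networks of connected nodes rather than linear sequences, which is what makes selection and iteration expressible.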
Related papers
- Genetic Auto-prompt Learning for Pre-trained Code Intelligence Language Models [54.58108387797138]
We investigate the effectiveness of prompt learning in code intelligence tasks.
Existing automatic prompt design methods have limited applicability to code intelligence tasks.
We propose Genetic Auto Prompt (GenAP) which utilizes an elaborate genetic algorithm to automatically design prompts.
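The summary leaves GenAP's operators unspecified, but the overall shape (a genetic algorithm evolving discrete prompts under a task-performance fitness) can be sketched. The token pool, the `score` placeholder, and the mutation and crossover operators below are illustrative assumptions, not GenAP's actual design; in GenAP the fitness would be the prompted code model's performance on a validation set:

```python
import random

# Minimal genetic-algorithm loop over discrete prompt token sequences.
# Token pool, fitness, and operators are illustrative assumptions.
TOKEN_POOL = ["summarize", "the", "code", "bug", "fix", "explain", "function", "defect"]

def score(prompt):
    # Placeholder fitness: a real system would evaluate a pre-trained code
    # model prompted with `prompt` on a validation set.
    return sum(t in {"code", "fix"} for t in prompt)

def mutate(prompt, rng):
    p = list(prompt)
    p[rng.randrange(len(p))] = rng.choice(TOKEN_POOL)
    return p

def crossover(a, b, rng):
    cut = rng.randrange(1, min(len(a), len(b)))  # single-point crossover
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(TOKEN_POOL) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=score)

print(evolve())
```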
arXiv Detail & Related papers (2024-03-20T13:37:00Z)
- Opening the AI black box: program synthesis via mechanistic interpretability [12.849101734204456]
We present a novel method for program synthesis based on automated mechanistic interpretability of neural networks trained to perform the desired task, auto-distilling the learned algorithm into Python code.
We test MIPS on a benchmark of 62 algorithmic tasks that can be learned by an RNN and find it highly complementary to GPT-4.
As opposed to large language models, this program synthesis technique makes no use of (and is therefore not limited by) human training data such as algorithms and code from GitHub.
arXiv Detail & Related papers (2024-02-07T18:59:12Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex than recovering a single, clean algorithm.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- A Neural Lambda Calculus: Neurosymbolic AI meets the foundations of computing and functional programming [0.0]
We analyze the ability of neural networks to learn how to execute programs as a whole.
We introduce the use of integrated neural learning and calculi formalization.
arXiv Detail & Related papers (2023-04-18T20:30:16Z)
- Graph Neural Networks are Dynamic Programmers [0.0]
Graph neural networks (GNNs) are claimed to align with dynamic programming (DP).
Here we show, using methods from category theory and abstract algebra, that there exists an intricate connection between GNNs and DP.
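The textbook illustration of this alignment is Bellman-Ford: its relaxation step has the same message/aggregate/update shape as a GNN layer, with messages sent along edges and a min aggregation at each node. The sketch below is that well-known analogy in plain Python, not code from the paper:

```python
# Bellman-Ford relaxation written as rounds of "message passing": each edge
# (u, v, w) sends the message dist[u] + w to node v, and each node aggregates
# incoming messages with min -- the compute pattern a GNN layer implements
# with learned message, aggregation, and update functions.
INF = float("inf")

def bellman_ford(n, edges, source):
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):                       # n-1 rounds of message passing
        messages = {v: [] for v in range(n)}
        for u, v, w in edges:                    # message function: dist[u] + w
            if dist[u] < INF:
                messages[v].append(dist[u] + w)
        for v in range(n):                       # aggregation: min; update: keep the best
            if messages[v]:
                dist[v] = min(dist[v], min(messages[v]))
    return dist

edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0)]
print(bellman_ford(4, edges, source=0))  # [0.0, 3.0, 1.0, 4.0]
```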
arXiv Detail & Related papers (2022-03-29T13:27:28Z)
- Waypoint Planning Networks [66.72790309889432]
We propose a hybrid algorithm based on LSTMs that combines a local kernel (a classic algorithm such as A*) with a global kernel using a learned algorithm.
We compare WPN against A*, as well as related works including motion planning networks (MPNet) and value iteration networks (VIN).
It is shown that WPN's search space is considerably smaller than that of A*, while it is still able to generate near-optimal results.
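The summary describes a hybrid in which a classic planner supplies the local component and a learned model the global one. As a sketch of where a learned part could plug in, here is a standard grid A* with an injectable heuristic; the Manhattan-distance stand-in below occupies the slot that WPN fills with learned LSTM kernels (an illustration of the interface, not WPN's actual architecture):

```python
import heapq

def astar(grid, start, goal, heuristic):
    # Grid A* over 0 (free) / 1 (blocked) cells; `heuristic` is an injectable
    # callable, so a learned estimator could replace the Manhattan stand-in.
    rows, cols = len(grid), len(grid[0])
    frontier = [(heuristic(start, goal), 0, start, None)]
    came_from, cost = {}, {start: 0}
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:                      # already expanded
            continue
        came_from[node] = parent
        if node == goal:                           # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < cost.get(nxt, float("inf")):
                    cost[nxt] = ng
                    heapq.heappush(frontier, (ng + heuristic(nxt, goal), ng, nxt, node))
    return None                                    # no path exists

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0), manhattan))
```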
arXiv Detail & Related papers (2021-05-01T18:02:01Z)
- Inductive logic programming at 30 [22.482292439881192]
Inductive logic programming (ILP) is a form of logic-based machine learning.
We focus on (i) new meta-level search methods, (ii) new approaches for predicate invention, and (iii) the use of different technologies.
We conclude by discussing some of the current limitations of ILP and directions for future research.
arXiv Detail & Related papers (2021-02-21T08:37:17Z)
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Networks (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
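The idea named in the title, an instruction pointer that is soft rather than discrete, can be sketched apart from the GNN machinery: keep a probability distribution over program lines and route its mass along control-flow edges at each step. The toy program and the fixed branch probabilities below are illustrative assumptions; in IPA-GNN those probabilities come from learned per-line states:

```python
import numpy as np

# Soft instruction pointer: a distribution over program lines, advanced by
# routing probability mass along control-flow edges each step. Control flow
# for a 4-line toy program:
#   0: x += 1            -> 1
#   1: if cond: goto 3   -> 3 (taken, p=0.7) or 2 (fall through, p=0.3)
#   2: x *= 2            -> 3
#   3: halt              -> 3
succ = {0: [(1, 1.0)], 1: [(3, 0.7), (2, 0.3)], 2: [(3, 1.0)], 3: [(3, 1.0)]}

p = np.array([1.0, 0.0, 0.0, 0.0])         # all mass starts on line 0
for step in range(4):
    p_next = np.zeros_like(p)
    for line, mass in enumerate(p):         # route mass along control-flow edges
        for target, prob in succ[line]:
            p_next[target] += mass * prob
    p = p_next
    print(step, p)                          # mass accumulates on the halt line
```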
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
- Strong Generalization and Efficiency in Neural Programs [69.18742158883869]
We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction.
By carefully designing the input/output interfaces of the neural model and through imitation, we are able to learn models that produce correct results for arbitrary input sizes.
arXiv Detail & Related papers (2020-07-07T17:03:02Z)
- Evaluating Logical Generalization in Graph Neural Networks [59.70452462833374]
We study the task of logical generalization using graph neural networks (GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability for models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
arXiv Detail & Related papers (2020-03-14T05:45:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.