Neuro-Symbolic Execution of Generic Source Code
- URL: http://arxiv.org/abs/2304.00989v2
- Date: Fri, 4 Aug 2023 18:15:05 GMT
- Title: Neuro-Symbolic Execution of Generic Source Code
- Authors: Yaojie Hu, Jin Tian
- Abstract summary: We introduce Neural Interpretation (NI), the first neural model for the execution of generic source code that allows missing definitions.
NI is a novel neural model of computers with a compiler architecture that can assemble neural layers "programmed" by source code.
- Score: 6.47243430672461
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Can a Python program be executed statement-by-statement by neural networks
composed according to the source code? We formulate the Neuro-Symbolic
Execution Problem and introduce Neural Interpretation (NI), the first neural
model for the execution of generic source code that allows missing definitions.
NI preserves source code structure, where every variable has a vector encoding,
and every function executes a neural network. NI is a novel neural model of
computers with a compiler architecture that can assemble neural layers
"programmed" by source code. NI is the first neural model capable of executing
Py150 dataset programs, including library functions without concrete inputs,
and it can be trained with flexible code understanding objectives. We
demonstrate white-box execution without concrete inputs for variable misuse
localization and repair.
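The abstract describes the architecture only at a high level. Below is a minimal, hypothetical sketch of the stated idea, that every variable has a vector encoding and every function, even one with a missing definition, executes a neural network; the class name, dimensions, and toy statement format are our illustrative assumptions, not the paper's actual design.
```python
import torch
import torch.nn as nn

D = 64  # illustrative vector width for variable encodings

class NeuralInterpreter(nn.Module):
    """Toy sketch: each variable holds a vector; each function name maps to a
    small neural module that transforms argument vectors into a result vector."""
    def __init__(self, function_names):
        super().__init__()
        # One neural "function body" per name; a missing definition still gets a
        # module, so execution never blocks on an unavailable implementation.
        self.functions = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(2 * D, D), nn.ReLU(), nn.Linear(D, D))
            for name in function_names
        })

    def call(self, name, args):
        # Pad/concatenate argument vectors to a fixed arity of two for simplicity.
        args = list(args) + [torch.zeros(D)] * (2 - len(args))
        return self.functions[name](torch.cat(args[:2]))

    def run(self, statements, env):
        # statements: list of (target_var, function_name, [arg_vars]),
        # a crude stand-in for statement-by-statement execution.
        for target, fn, arg_names in statements:
            env[target] = self.call(fn, [env[a] for a in arg_names])
        return env

interp = NeuralInterpreter(["len", "sorted", "my_missing_helper"])
env = {"xs": torch.randn(D)}
env = interp.run([("n", "len", ["xs"]), ("ys", "my_missing_helper", ["xs", "n"])], env)
```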
Related papers
- Idioms: Neural Decompilation With Joint Code and Type Prediction [7.421408987075001]
We introduce a new training process to finetune any LLM into a neural decompiler capable of generating the appropriate user-defined types alongside the decompilation.
Motivated by the intuition that different parts of data structures can be operated upon by different parts of the program, we show that interprocedural context can help improve neural decompilers' ability to handle user-defined types.
arXiv Detail & Related papers (2025-02-06T22:13:40Z)
- A Library for Learning Neural Operators [77.16483961863808]
We present NeuralOperator, an open-source Python library for operator learning.
Neural operators generalize neural networks to maps between function spaces instead of finite-dimensional Euclidean spaces.
Built on top of PyTorch, NeuralOperator provides all the tools for training and deploying neural operator models.
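As a rough illustration of what "maps between function spaces" means in practice, here is a generic spectral-convolution layer in plain PyTorch; it is not the NeuralOperator library's API, only a hedged sketch of the resolution-agnostic Fourier-layer idea behind many neural operators.
```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Illustrative Fourier layer: mixes channels on a truncated set of Fourier
    modes, so the same weights apply to inputs sampled at any resolution."""
    def __init__(self, channels, n_modes):
        super().__init__()
        self.n_modes = n_modes
        scale = 1.0 / channels
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, n_modes, dtype=torch.cfloat)
        )

    def forward(self, u):                      # u: (batch, channels, n_points)
        u_hat = torch.fft.rfft(u)              # -> (batch, channels, n_points//2 + 1)
        out_hat = torch.zeros_like(u_hat)
        k = min(self.n_modes, u_hat.shape[-1])
        # Per-mode channel mixing on the lowest k frequencies only.
        out_hat[..., :k] = torch.einsum("bim,iom->bom", u_hat[..., :k], self.weights[..., :k])
        return torch.fft.irfft(out_hat, n=u.shape[-1])

layer = SpectralConv1d(channels=8, n_modes=16)
coarse = layer(torch.randn(4, 8, 64))          # same weights work at 64 points...
fine = layer(torch.randn(4, 8, 256))           # ...and at 256 points
```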
arXiv Detail & Related papers (2024-12-13T18:49:37Z)
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
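A toy sketch of the underlying task (not the paper's verified network or its verification procedure): train a small decoder to recover a k-sparse vector x from linear measurements y = Ax; the sizes and architecture below are illustrative.
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, m, k = 50, 25, 3              # signal dim, measurements, sparsity (toy sizes)
A = torch.randn(m, n) / m**0.5   # fixed random measurement matrix

def sample_batch(batch=256):
    # Random k-sparse signals and their linear measurements y = A @ x.
    x = torch.zeros(batch, n)
    idx = torch.stack([torch.randperm(n)[:k] for _ in range(batch)])
    x.scatter_(1, idx, torch.randn(batch, k))
    return x @ A.T, x

decoder = nn.Sequential(
    nn.Linear(m, 256), nn.ReLU(), nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, n)
)
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

for step in range(2000):
    y, x = sample_batch()
    loss = nn.functional.mse_loss(decoder(y), x)
    opt.zero_grad()
    loss.backward()
    opt.step()
```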
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Codebook Features: Sparse and Discrete Interpretability for Neural Networks [43.06828312515959]
We explore whether we can train neural networks to have hidden states that are sparse, discrete, and more interpretable.
Codebook features are produced by finetuning neural networks with vector quantization bottlenecks at each layer.
We find that neural networks can operate under this extreme bottleneck with only modest degradation in performance.
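A minimal sketch of a vector-quantization bottleneck with a straight-through estimator, assuming a single nearest code per hidden state; the paper's codebook features may use a different quantization and finetuning recipe.
```python
import torch
import torch.nn as nn

class CodebookBottleneck(nn.Module):
    """Toy sketch: snap each hidden state to its nearest codebook vector, passing
    gradients through with the straight-through estimator."""
    def __init__(self, num_codes, dim):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim))

    def forward(self, h):                          # h: (..., dim)
        flat = h.reshape(-1, h.shape[-1])
        dists = torch.cdist(flat, self.codebook)   # (N, num_codes)
        codes = dists.argmin(dim=-1)
        quantized = self.codebook[codes].reshape(h.shape)
        # Straight-through: forward uses the discrete code, backward copies gradients to h.
        return h + (quantized - h).detach(), codes.reshape(h.shape[:-1])

bottleneck = CodebookBottleneck(num_codes=512, dim=64)
h = torch.randn(2, 10, 64)          # e.g. hidden states of some layer
h_q, codes = bottleneck(h)          # discrete, sparse view of the hidden state
```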
arXiv Detail & Related papers (2023-10-26T08:28:48Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
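A minimal sketch of row-permutation equivariance, assuming the input is a single weight matrix with one row per hidden neuron; this DeepSets-style layer is only an illustration, not the paper's neural functional architecture.
```python
import torch
import torch.nn as nn

class RowEquivariantLinear(nn.Module):
    """Toy sketch: a linear map over a weight matrix W (one row per hidden neuron)
    that commutes with any permutation of the rows."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.per_row = nn.Linear(in_features, out_features)
        self.pooled = nn.Linear(in_features, out_features, bias=False)

    def forward(self, W):                    # W: (num_neurons, in_features)
        return self.per_row(W) + self.pooled(W.mean(dim=0, keepdim=True))

layer = RowEquivariantLinear(16, 32)
W = torch.randn(8, 16)                      # e.g. one row per hidden unit
perm = torch.randperm(8)
# Permuting the hidden units before or after the layer gives the same result.
assert torch.allclose(layer(W[perm]), layer(W)[perm], atol=1e-6)
```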
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Polynomial Neural Fields for Subband Decomposition and Manipulation [78.2401411189246]
We propose a new class of neural fields called polynomial neural fields (PNFs).
The key advantage of a PNF is that it can represent a signal as a composition of manipulable and interpretable components without losing the merits of neural fields.
We empirically demonstrate that Fourier PNFs enable signal manipulation applications such as texture transfer and scale-space.
arXiv Detail & Related papers (2023-02-09T18:59:04Z)
- Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs [64.56890245622822]
Neurocoder is an entirely new class of general-purpose conditional computational machines.
It "codes" itself in a data-responsive way by composing relevant programs from a set of shareable, modular programs.
We show new capacity to learn modular programs, handle severe pattern shifts and remember old programs as new ones are learnt.
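A toy sketch of data-responsive program composition, assuming a bank of stored "program" vectors mixed by input-dependent attention into the weights of a linear layer; the actual Neurocoder machinery is considerably richer.
```python
import torch
import torch.nn as nn

class ProgramComposer(nn.Module):
    """Toy sketch: compose the weights of a linear 'program' from a shared bank of
    stored program vectors, with the mixture chosen per input."""
    def __init__(self, in_dim, out_dim, num_programs):
        super().__init__()
        self.bank = nn.Parameter(torch.randn(num_programs, out_dim * in_dim) * 0.02)
        self.keys = nn.Parameter(torch.randn(num_programs, in_dim))
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x):                              # x: (batch, in_dim)
        attn = torch.softmax(x @ self.keys.T, dim=-1)  # (batch, num_programs)
        weights = attn @ self.bank                     # data-responsive composition
        W = weights.view(-1, self.out_dim, self.in_dim)
        return torch.bmm(W, x.unsqueeze(-1)).squeeze(-1)

layer = ProgramComposer(in_dim=32, out_dim=16, num_programs=8)
y = layer(torch.randn(4, 32))                          # (4, 16)
```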
arXiv Detail & Related papers (2020-09-24T01:39:16Z)
- Extending Answer Set Programs with Neural Networks [2.512827436728378]
We propose NeurASP -- a simple extension of answer set programs by embracing neural networks.
We show that NeurASP can not only improve the perception accuracy of a pre-trained neural network, but also help to train a neural network better by giving restrictions through logic rules.
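A small numeric illustration (not NeurASP syntax or semantics) of how a logic rule over neural outputs can supervise a network: two digit classifiers trained only through the constraint that their digits sum to a given value, with the rule's probability obtained by marginalizing over the neural atoms.
```python
import torch

def prob_of_sum(p1, p2, target_sum):
    """P(d1 + d2 == target_sum) under independent digit distributions p1, p2;
    a crude stand-in for how a logic rule becomes a differentiable objective."""
    total = torch.zeros(())
    for d1 in range(10):
        for d2 in range(10):
            if d1 + d2 == target_sum:
                total = total + p1[d1] * p2[d2]
    return total

logits1 = torch.randn(10, requires_grad=True)
logits2 = torch.randn(10, requires_grad=True)
# Supervision only says "the two digits sum to 7"; gradients flow to both classifiers.
loss = -torch.log(prob_of_sum(torch.softmax(logits1, -1), torch.softmax(logits2, -1), 7))
loss.backward()
```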
arXiv Detail & Related papers (2020-09-22T00:52:30Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
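An illustrative check of why a single neuron with a non-monotonic activation can represent XOR; the bump activation below is our stand-in, not the paper's ADA formula.
```python
import torch

def bump(z):
    # Illustrative non-monotonic activation (NOT the paper's ADA formula):
    # peaks at z == 1 and falls off on both sides.
    return torch.exp(-(z - 1.0) ** 2 / 0.1)

# A single "neuron": pre-activation is just x1 + x2, followed by the bump.
inputs = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
pre = inputs.sum(dim=1)             # 0, 1, 1, 2
out = bump(pre)                     # ~0, 1, 1, ~0 -> matches XOR with a 0.5 threshold
print((out > 0.5).int().tolist())   # [0, 1, 1, 0]
```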
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.