On the Generalizability of Neural Program Models with respect to Semantic-Preserving Program Transformations
- URL: http://arxiv.org/abs/2008.01566v3
- Date: Thu, 18 Mar 2021 07:35:13 GMT
- Title: On the Generalizability of Neural Program Models with respect to Semantic-Preserving Program Transformations
- Authors: Md Rafiqul Islam Rabin, Nghi D. Q. Bui, Ke Wang, Yijun Yu, Lingxiao Jiang, Mohammad Amin Alipour
- Abstract summary: We evaluate the generalizability of neural program models with respect to semantic-preserving transformations.
We use three Java datasets of different sizes and three state-of-the-art neural network models for code.
Our results suggest that neural program models based on data and control dependencies in programs generalize better than neural program models based only on abstract syntax trees.
- Score: 25.96895574298886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the prevalence of publicly available source code repositories to train
deep neural network models, neural program models can perform well in source code
analysis tasks, such as predicting method names in given programs, that cannot be
easily done by traditional program analysis techniques. Although such neural
program models have been tested on various existing datasets, the extent to
which they generalize to unforeseen source code is largely unknown. Since it is
very challenging to test neural program models on all unforeseen programs, in
this paper, we propose to evaluate the generalizability of neural program
models with respect to semantic-preserving transformations: a generalizable
neural program model should perform equally well on programs that are of the
same semantics but of different lexical appearances and syntactical structures.
We compare the results of various neural program models for the method name
prediction task on programs before and after automated semantic-preserving
transformations. We use three Java datasets of different sizes and three
state-of-the-art neural network models for code, namely code2vec, code2seq, and
GGNN, to build nine such neural program models for evaluation. Our results show
that even with small semantic-preserving changes to the programs, these
neural program models often fail to generalize their performance. Our results
also suggest that neural program models based on data and control dependencies
in programs generalize better than neural program models based only on abstract
syntax trees. On the positive side, we observe that as the training dataset
grows in size and diversity, the generalizability of the correct predictions
produced by the neural program models also improves. Our results on the
generalizability of neural program models provide insights to measure their
limitations and provide a stepping stone for their improvement.
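To make the evaluation setup concrete, the following sketch shows the kind of change a semantic-preserving transformation introduces, using variable renaming on a small Java method. The class and identifier names are hypothetical and for illustration only; they are not drawn from the paper's datasets or tooling. In the method name prediction task the method body is the model's input and the name is the prediction target, so a generalizable neural program model should predict the same name for both variants below, which differ lexically but not semantically.

    // Illustrative sketch only: a toy Java method and a variable-renamed variant.
    // Class and identifier names are hypothetical, not taken from the paper's datasets.
    class OriginalVersion {
        // Input to the neural program model; the name "sumOfSquares" is the prediction target.
        static int sumOfSquares(int[] values) {
            int total = 0;
            for (int v : values) {
                total += v * v;
            }
            return total;
        }
    }

    class TransformedVersion {
        // Variable renaming: only local identifiers change; the data and control
        // dependencies, and hence the semantics, are identical to the original.
        static int sumOfSquares(int[] arr) {
            int acc = 0;
            for (int x : arr) {
                acc += x * x;
            }
            return acc;
        }
    }

    class TransformationDemo {
        public static void main(String[] args) {
            int[] data = {1, 2, 3};
            // Both variants compute the same result on every input; a generalizable
            // model should therefore predict the same method name for both.
            System.out.println(OriginalVersion.sumOfSquares(data)
                    == TransformedVersion.sumOfSquares(data)); // prints: true
        }
    }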
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - A Spectral Theory of Neural Prediction and Alignment [8.65717258105897]
We use a recent theoretical framework that relates the generalization error from regression to the spectral properties of the model and the target.
We test a large number of deep neural networks that predict visual cortical activity and show that there are multiple types of geometries that result in low neural prediction error as measured via regression.
arXiv Detail & Related papers (2023-09-22T12:24:06Z) - Dependency-based Mixture Language Models [53.152011258252315]
We introduce the Dependency-based Mixture Language Models.
In detail, we first train neural language models with a novel dependency modeling objective.
We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention.
arXiv Detail & Related papers (2022-03-19T06:28:30Z) - Deep Reinforcement Learning Models Predict Visual Responses in the
Brain: A Preliminary Result [1.0323063834827415]
We use reinforcement learning to train neural network models to play a 3D computer game.
We find that these reinforcement learning models achieve neural response prediction accuracy scores in the early visual areas.
In contrast, the supervised neural network models yield better neural response predictions in the higher visual areas.
arXiv Detail & Related papers (2021-06-18T13:10:06Z) - BF++: a language for general-purpose program synthesis [0.483420384410068]
Most state of the art decision systems based on Reinforcement Learning (RL) are data-driven black-box neural models.
We propose a new programming language, BF++, designed specifically for automatic programming of agents in a Partially Observable Markov Decision Process setting.
arXiv Detail & Related papers (2021-01-23T19:44:44Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - Learning to learn generative programs with Memoised Wake-Sleep [52.439550543743536]
We study a class of neuro-symbolic generative models in which neural networks are used both for inference and as priors over symbolic, data-generating programs.
We propose the Memoised Wake-Sleep (MWS) algorithm, which extends Wake-Sleep by explicitly storing and reusing the best programs discovered by the inference network throughout training.
arXiv Detail & Related papers (2020-07-06T23:51:03Z) - Evaluation of Generalizability of Neural Program Analyzers under
Semantic-Preserving Transformations [1.3477892615179483]
We evaluate the generalizability of two popular neural program analyzers using seven semantically-equivalent transformations of programs.
Our results caution that in many cases the neural program analyzers fail to generalize well, sometimes to programs with negligible textual differences.
arXiv Detail & Related papers (2020-04-15T19:55:06Z)
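To give a sense of the variety of such transformations, the sketch below shows loop exchange, another commonly studied semantic-preserving rewrite in which a for loop is replaced by an equivalent while loop. The Java snippet is illustrative only and is not claimed to reproduce the exact seven transformations used in the study above; the class and identifier names are hypothetical.

    // Illustrative sketch only: loop exchange, rewriting a for loop as an
    // equivalent while loop. Names are hypothetical, not from the cited study.
    class ForVariant {
        static int countPositives(int[] values) {
            int count = 0;
            for (int i = 0; i < values.length; i++) {
                if (values[i] > 0) {
                    count++;
                }
            }
            return count;
        }
    }

    class WhileVariant {
        // Same data and control dependencies as ForVariant, but a different
        // syntax tree; the method name prediction target is unchanged.
        static int countPositives(int[] values) {
            int count = 0;
            int i = 0;
            while (i < values.length) {
                if (values[i] > 0) {
                    count++;
                }
                i++;
            }
            return count;
        }
    }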