Using Graph Neural Networks for Program Termination
- URL: http://arxiv.org/abs/2207.14648v1
- Date: Thu, 28 Jul 2022 12:16:37 GMT
- Title: Using Graph Neural Networks for Program Termination
- Authors: Yoav Alon and Cristina David
- Abstract summary: We move away from formal methods and embrace the stochastic nature of machine learning models.
We take advantage of the graph representation of programs by employing Graph Neural Networks.
We adapt the notions of attention and semantic segmentation, previously used for other application domains, to programs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Termination analyses investigate the termination behavior of programs,
intending to detect nontermination, which is known to cause a variety of
program bugs (e.g. hanging programs, denial-of-service vulnerabilities). Beyond
formal approaches, various attempts have been made to estimate the termination
behavior of programs using neural networks. However, the majority of these
approaches continue to rely on formal methods to provide strong soundness
guarantees and consequently suffer from similar limitations. In this paper, we
move away from formal methods and embrace the stochastic nature of machine
learning models. Instead of aiming for rigorous guarantees that can be
interpreted by solvers, our objective is to provide an estimation of a
program's termination behavior and of the likely reason for nontermination
(when applicable) that a programmer can use for debugging purposes. Compared to
previous approaches using neural networks for program termination, we also take
advantage of the graph representation of programs by employing Graph Neural
Networks. To further assist programmers in understanding and debugging
nontermination bugs, we adapt the notions of attention and semantic
segmentation, previously used for other application domains, to programs.
Overall, we designed and implemented classifiers for program termination based
on Graph Convolutional Networks and Graph Attention Networks, as well as a
semantic segmentation Graph Neural Network that localizes AST nodes likely to
cause nontermination. We also illustrated how the information provided by
semantic segmentation can be combined with program slicing to further aid
debugging.
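As a concrete illustration of the classifiers described above, the sketch below builds a graph-level termination classifier from graph convolutions over a program graph. It assumes PyTorch Geometric, and all names, feature encodings, and sizes are assumptions made here for illustration; the paper does not include this code, so read it as a sketch of the technique rather than the authors' implementation.

```python
# Minimal sketch of a graph-level termination classifier (assumes PyTorch Geometric).
# `x` holds per-node features (e.g. one-hot AST node types), `edge_index` the graph
# edges, and `batch` maps each node to its program within a mini-batch.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool


class TerminationClassifier(torch.nn.Module):
    def __init__(self, num_node_features: int, hidden: int = 64, num_classes: int = 2):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.readout = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        # Two rounds of message passing over the program graph.
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        # Pool node embeddings into one vector per program and classify it
        # as terminating vs. likely nonterminating.
        return self.readout(global_mean_pool(h, batch))
```

The semantic-segmentation component can be pictured in the same way, except that the prediction stays at node level: each AST node receives a score for how likely it is to be involved in nontermination. The sketch below uses graph attention layers for this; again, the architecture details are assumptions rather than the paper's published design.

```python
# Minimal sketch of node-level "semantic segmentation" over a program graph:
# score each AST node for likely involvement in nontermination (assumes PyTorch Geometric).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv


class NonterminationLocalizer(torch.nn.Module):
    def __init__(self, num_node_features: int, hidden: int = 64, heads: int = 4):
        super().__init__()
        self.conv1 = GATConv(num_node_features, hidden, heads=heads)
        self.conv2 = GATConv(hidden * heads, hidden, heads=1)
        self.node_head = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index):
        h = F.elu(self.conv1(x, edge_index))
        h = F.elu(self.conv2(h, edge_index))
        # One score per node; higher means more likely to cause nontermination.
        return torch.sigmoid(self.node_head(h)).squeeze(-1)
```

In the workflow the abstract describes, the highest-scoring nodes could then serve as slicing criteria, so that a program slice narrows the code a programmer has to inspect; how the scores are actually combined with slicing is detailed in the paper itself.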
Related papers
- Automated Static Warning Identification via Path-based Semantic
Representation [37.70518599085676]
This paper employs deep neural networks' powerful feature extraction and representation abilities to generate code semantics from control flow graph paths for warning identification.
We fine-tune the pre-trained language model to encode the path sequences and capture the semantic representations for model building.
arXiv Detail & Related papers (2023-06-27T15:46:45Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques which embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Software Vulnerability Detection via Deep Learning over Disaggregated
Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because code naturally admits graph structures with parsing, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
arXiv Detail & Related papers (2021-09-07T21:24:36Z) - Neural Termination Analysis [9.973499664140158]
We train neural networks to act as ranking functions, which map program states to values that are bounded from below and decrease as the program runs.
The existence of a valid ranking function proves that the program terminates; a toy sketch of these conditions appears after this list.
arXiv Detail & Related papers (2021-02-07T15:45:30Z) - Learning to Execute Programs with Instruction Pointer Attention Graph
Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Networks (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
arXiv Detail & Related papers (2020-10-23T19:12:30Z) - funcGNN: A Graph Neural Network Approach to Program Similarity [0.90238471756546]
funcGNN is a graph neural network trained on labeled control flow graph (CFG) pairs to predict the graph edit distance (GED) between unseen program pairs by utilizing an effective embedding vector.
This is the first time graph neural networks have been applied to labeled CFGs for estimating the similarity between high-level language programs.
arXiv Detail & Related papers (2020-07-26T23:16:24Z) - Learning Differentiable Programs with Admissible Neural Heuristics [43.54820901841979]
We study the problem of learning differentiable functions expressed as programs in a domain-specific language.
We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax.
Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs.
arXiv Detail & Related papers (2020-07-23T16:07:39Z) - Can We Learn Heuristics For Graphical Model Inference Using
Reinforcement Learning? [114.24881214319048]
We show that we can learn programs, i.e., policies, for solving inference in higher order Conditional Random Fields (CRFs) using reinforcement learning.
Our method solves inference tasks efficiently without imposing any constraints on the form of the potentials.
arXiv Detail & Related papers (2020-04-27T19:24:04Z) - Towards an Efficient and General Framework of Robust Training for Graph
Neural Networks [96.93500886136532]
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures lead them to make wrong predictions.
We propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs.
arXiv Detail & Related papers (2020-02-25T15:17:58Z)
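To make the ranking-function idea from the Neural Termination Analysis entry above concrete, the toy sketch below learns a function f over program states that is penalized whenever it drops below zero or fails to decrease along an observed transition. The loop, the sampled states, and the training setup are all invented here for illustration and do not reproduce that paper's method; satisfying both conditions on sampled executions is evidence of termination, not a proof.

```python
# Toy sketch of the ranking-function conditions: learn f(state) >= 0 that
# strictly decreases along observed transitions (illustration only).
import torch

# Hypothetical trace of the loop `while x > 0: x -= 1` started at x = 5;
# a state here is just the current value of x.
pairs = [(float(x), float(x - 1)) for x in range(5, 0, -1)]
s = torch.tensor([[a] for a, _ in pairs])
s_next = torch.tensor([[b] for _, b in pairs])

f = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
opt = torch.optim.Adam(f.parameters(), lr=1e-2)

for _ in range(500):
    opt.zero_grad()
    bounded = torch.relu(-f(s)).mean()                    # penalize f(s) < 0
    decrease = torch.relu(f(s_next) - f(s) + 1.0).mean()  # penalize f(s') > f(s) - 1
    loss = bounded + decrease
    loss.backward()
    opt.step()

# If both penalty terms reach zero on the sampled states, f behaves like a
# ranking function on the observed executions.
```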
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.