Neural Termination Analysis
- URL: http://arxiv.org/abs/2102.03824v1
- Date: Sun, 7 Feb 2021 15:45:30 GMT
- Title: Neural Termination Analysis
- Authors: Mirco Giacobbe, Daniel Kroening, Julian Parsert
- Abstract summary: We train neural networks to act as ranking functions.
We map program states to values that are bounded from below and decrease as the program runs.
The existence of a valid ranking function proves that the program terminates.
- Score: 9.973499664140158
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel approach to the automated termination analysis of
computer programs: we train neural networks to act as ranking functions.
Ranking functions map program states to values that are bounded from below and
decrease as the program runs. The existence of a valid ranking function proves
that the program terminates. While in the past ranking functions were usually
constructed using static analysis, our method learns them from sampled
executions. We train a neural network so that its output decreases along
execution traces as a ranking function would; then, we use formal reasoning to
verify whether it generalises to all possible executions. We present a custom
loss function for learning lexicographic ranking functions and use
satisfiability modulo theories for verification. Thanks to the ability of
neural networks to generalise well, our method succeeds over a wide variety of
programs. This includes programs that use data structures from standard
libraries. We built a prototype analyser for Java bytecode and show the
efficacy of our method over a standard dataset of benchmarks.
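The core idea above can be sketched in a few lines. The snippet below is a hypothetical illustration, not the authors' implementation: it learns a *linear* ranking function f(s) = w·s (the paper trains neural networks, with a custom lexicographic loss) from sampled executions of a toy loop, using a hinge loss that penalises consecutive state pairs where the candidate fails to decrease by at least 1. The loop, the function names, and the training hyperparameters are all made up for the example.

```python
import numpy as np

# Hypothetical sketch of the paper's approach (not the authors' code):
# learn a linear ranking function f(s) = w . s from sampled executions
# of a toy loop:
#     while x > 0: x -= 1; y += 1
# The loss mirrors the decrease condition: penalise consecutive state
# pairs (s, s') where f(s) - f(s') < 1, i.e. a hinge loss.

def sample_trace_pairs(n_traces=20, seed=0):
    """Run the toy loop and collect (state, successor-state) pairs."""
    rng = np.random.default_rng(seed)
    pairs = []
    for _ in range(n_traces):
        x, y = int(rng.integers(1, 50)), 0
        while x > 0:
            s = np.array([x, y], dtype=float)
            x, y = x - 1, y + 1
            pairs.append((s, np.array([x, y], dtype=float)))
    return pairs

def train_ranking(pairs, lr=0.01, epochs=100):
    """Subgradient descent on the hinge loss max(0, 1 - (w.s - w.s'))."""
    w = np.zeros(2)
    for _ in range(epochs):
        for s, s_next in pairs:
            if w @ s - w @ s_next < 1.0:   # decrease condition violated
                w += lr * (s - s_next)     # push f(s) - f(s') upward
    return w

w = train_ranking(sample_trace_pairs())

# Empirical check on fresh traces. In the paper this step is instead
# done soundly with an SMT solver, over *all* reachable states rather
# than only sampled ones.
violations = sum(
    1 for s, s_next in sample_trace_pairs(seed=1)
    if w @ s - w @ s_next < 1.0 - 1e-6
)
print("learned weights:", w, "violations:", violations)
```

The learned candidate only becomes a termination proof after the verification step: the sampled traces show it decreases empirically, and the SMT check (omitted here) establishes that it decreases and is bounded from below on every possible execution.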
Related papers
- ReasonAgain: Using Extractable Symbolic Programs to Evaluate Mathematical Reasoning [54.70811660561151]
Existing math datasets evaluate the reasoning abilities of large language models (LLMs) by either using the final answer or the intermediate reasoning steps derived from static examples.
We use symbolic programs as a means to automatically evaluate whether a model can consistently produce correct final answers across various inputs to the program.
We observe significant accuracy drops using our proposed evaluation compared with original static examples, suggesting the fragility of math reasoning in state-of-the-art LLMs.
arXiv Detail & Related papers (2024-10-24T18:02:37Z)
- Neural Algorithmic Reasoning with Causal Regularisation [18.299363749150093]
We make an important observation: there are many different inputs for which an algorithm will perform certain intermediate computations identically.
This insight allows us to develop data augmentation procedures that, given an algorithm's intermediate trajectory, produce inputs for which the target algorithm would have exactly the same next trajectory step.
We prove that the resulting method, which we call Hint-ReLIC, improves the OOD generalisation capabilities of the reasoner.
arXiv Detail & Related papers (2023-02-20T19:41:15Z)
- Using Graph Neural Networks for Program Termination [0.0]
We move away from formal methods to embrace the nature of machine learning models.
We take advantage of the graph representation of programs by employing Graph Neural Networks.
We adapt the notions of attention and semantic segmentation, previously used for other application domains, to programs.
arXiv Detail & Related papers (2022-07-28T12:16:37Z) - Benchmarking Node Outlier Detection on Graphs [90.29966986023403]
Graph outlier detection is an emerging but crucial machine learning task with numerous applications.
We present the first comprehensive unsupervised node outlier detection benchmark for graphs called UNOD.
arXiv Detail & Related papers (2022-06-21T01:46:38Z) - Refining neural network predictions using background knowledge [68.35246878394702]
We show we can use logical background knowledge in a learning system to compensate for a lack of labeled training data.
We introduce differentiable refinement functions that find a corrected prediction close to the original prediction.
This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot.
arXiv Detail & Related papers (2022-06-10T10:17:59Z) - Towards Neural Functional Program Evaluation [0.5586191108738562]
We introduce a new program generation mechanism that allows control over syntactic sugar for semantically equivalent programs.
Experiments reveal that neural functional program evaluation performs surprisingly well, with exact program match scores around 90%.
arXiv Detail & Related papers (2021-12-09T00:20:29Z) - Learnable Graph Matching: Incorporating Graph Partitioning with Deep
Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z) - Replacing Rewards with Examples: Example-Based Policy Search via
Recursive Classification [133.20816939521941]
In the standard Markov decision process formalism, users specify tasks by writing down a reward function.
In many scenarios, the user is unable to describe the task in words or numbers, but can readily provide examples of what the world would look like if the task were solved.
Motivated by this observation, we derive a control algorithm that aims to visit states that have a high probability of leading to successful outcomes, given only examples of successful outcome states.
arXiv Detail & Related papers (2021-03-23T16:19:55Z) - GRAPHSPY: Fused Program Semantic-Level Embedding via Graph Neural
Networks for Dead Store Detection [4.82596017481926]
We propose a learning-based approach to identify unnecessary memory operations intelligently with low overhead.
By applying several prevalent graph neural network models to extract program semantics, we present a novel, hybrid program embedding approach.
Results show that our model achieves 90% accuracy while incurring only around half the time overhead of the state-of-the-art tool.
arXiv Detail & Related papers (2020-11-18T19:17:15Z) - Learning to Execute Programs with Instruction Pointer Attention Graph
Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Networks (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.