Learning to Predict Program Execution by Modeling Dynamic Dependency on Code Graphs
- URL: http://arxiv.org/abs/2408.02816v2
- Date: Fri, 9 Aug 2024 14:48:04 GMT
- Title: Learning to Predict Program Execution by Modeling Dynamic Dependency on Code Graphs
- Authors: Cuong Chi Le, Hoang Nhat Phan, Huy Nhat Phan, Tien N. Nguyen, Nghi D. Q. Bui
- Abstract summary: This paper introduces a novel machine learning-based framework called CodeFlow to predict code coverage and detect runtime errors.
CodeFlow represents all possible execution paths and the relationships between different statements.
It learns dynamic dependencies through execution traces, which reflect the impacts among statements during execution.
- Score: 11.347234752942684
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Predicting program behavior without execution is a crucial and challenging task in software engineering. Traditional models often struggle to capture the dynamic dependencies and interactions within code. This paper introduces a novel machine learning-based framework called CodeFlow, designed to predict code coverage and detect runtime errors through Dynamic Dependencies Learning. By utilizing control flow graphs (CFGs), CodeFlow represents all possible execution paths and the relationships between different statements, providing a comprehensive understanding of program behavior. CodeFlow constructs CFGs to depict execution paths and learns vector representations for CFG nodes, capturing static control-flow dependencies. Additionally, it learns dynamic dependencies through execution traces, which reflect the impacts among statements during execution. This approach enables accurate prediction of code coverage and effective identification of runtime errors. Empirical evaluations demonstrate significant improvements in code coverage prediction accuracy and effective localization of runtime errors, outperforming existing models.
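To make the abstract concrete, the sketch below shows the general shape of a CFG-plus-message-passing coverage predictor: statements become CFG nodes, node vectors are refined along control-flow edges, and a small head scores how likely each statement is to execute. It is an illustrative sketch only, not the authors' CodeFlow implementation; the toy program, vector sizes, and the random (untrained) weights are all assumptions, and the paper additionally supervises the model with execution traces to learn dynamic dependencies.

```python
# Illustrative sketch only: NOT the authors' CodeFlow implementation.
# Statements become CFG nodes, node vectors are mixed along control-flow
# edges, and a per-node head scores how likely each statement is to run.
import numpy as np

rng = np.random.default_rng(0)

# Toy program and its control-flow successors (all assumed for the example).
statements = ["x = read()", "if x > 0:", "y = 1", "y = -1", "print(y)"]
cfg_edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]

dim = 8
node_vecs = rng.normal(size=(len(statements), dim))   # initial statement embeddings
W_msg = rng.normal(size=(dim, dim)) * 0.1             # untrained message weights
w_out = rng.normal(size=dim) * 0.1                    # coverage prediction head

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A few rounds of message passing along CFG edges (both directions), so each
# statement's vector mixes in information from its control-flow neighbours.
for _ in range(3):
    messages = np.zeros_like(node_vecs)
    for src, dst in cfg_edges:
        messages[dst] += node_vecs[src] @ W_msg
        messages[src] += node_vecs[dst] @ W_msg
    node_vecs = np.tanh(node_vecs + messages)

coverage = sigmoid(node_vecs @ w_out)  # per-statement "will this line run?" score
for stmt, p in zip(statements, coverage):
    print(f"{p:.2f}  {stmt}")
```

With random weights the scores are meaningless placeholders; in the paper the model is trained so that these per-node scores match observed coverage, which is what allows runtime errors to be localized as the point where predicted coverage breaks off.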
Related papers
- VISUALCODER: Guiding Large Language Models in Code Execution with Fine-grained Multimodal Chain-of-Thought Reasoning [10.70881967278009]
We introduce Visual Coder, a simple yet effective approach that enhances code reasoning by integrating multimodal Chain-of-Thought (CoT) reasoning with a visual Control Flow Graph (CFG)
By aligning code snippets with their corresponding CFGs, Visual Coder provides deeper insights into execution flow, enabling more accurate predictions of code behavior.
Our experiments demonstrate that augmenting LLMs with visual CFGs significantly outperforms text-based CFG descriptions in code reasoning tasks.
arXiv Detail & Related papers (2024-10-30T19:07:01Z)
- Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Graph Execution [4.786072763033669]
More natural, less error-prone imperative DL frameworks that encourage eager execution have emerged, but at the expense of run-time performance.
We present our ongoing work on an automated approach that assists developers in specifying whether and how their otherwise imperative DL code could be reliably and efficiently executed as graphs.
The approach is being implemented as a PyDev Eclipse plug-in and uses the WALA Ariadne analysis framework.
arXiv Detail & Related papers (2023-08-22T20:50:19Z)
- TRACED: Execution-aware Pre-training for Source Code [24.101763959136058]
We introduce TRACED, an execution-aware pre-training strategy for source code.
Our goal is to teach code models the complicated execution logic during the pre-training, enabling the model to statically estimate the dynamic code properties.
TRACED relatively improves the statically pre-trained code models by 12.4% for complete execution path prediction and by 25.2% for runtime variable value predictions.
arXiv Detail & Related papers (2023-06-13T01:30:14Z)
- CONCORD: Clone-aware Contrastive Learning for Source Code [64.51161487524436]
Self-supervised pre-training has gained traction for learning generic code representations valuable for many downstream SE tasks.
We argue that it is also essential to factor in how developers code day-to-day for general-purpose representation learning.
In particular, we propose CONCORD, a self-supervised, contrastive learning strategy to place benign clones closer in the representation space while moving deviants further apart; a minimal sketch of such a contrastive objective appears after this list.
arXiv Detail & Related papers (2023-06-05T20:39:08Z)
- A Static Evaluation of Code Completion by Large Language Models [65.18008807383816]
Execution-based benchmarks have been proposed to evaluate functional correctness of model-generated code on simple programming problems.
Static analysis tools such as linters, which can detect errors without running the program, have not been well explored for evaluating code generation models.
We propose a static evaluation framework to quantify static errors in Python code completions, by leveraging Abstract Syntax Trees.
arXiv Detail & Related papers (2023-06-05T19:23:34Z)
- A Unified Active Learning Framework for Annotating Graph Data with Application to Software Source Code Performance Prediction [4.572330678291241]
We develop a unified active learning framework specializing in software performance prediction.
We investigate the impact of using different levels of information for active and passive learning.
Our approach aims to improve the investment in AI models for different software performance predictions.
arXiv Detail & Related papers (2023-04-06T14:00:48Z)
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach in the code completion task in Python and Java programming languages, achieving a state-of-the-art performance on CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
- Software Vulnerability Detection via Deep Learning over Disaggregated Code Graph Representation [57.92972327649165]
This work explores a deep learning approach to automatically learn the insecure patterns from code corpora.
Because code naturally admits graph structures with parsing, we develop a novel graph neural network (GNN) to exploit both the semantic context and structural regularity of a program.
arXiv Detail & Related papers (2021-09-07T21:24:36Z)
- Learning to Extend Program Graphs to Work-in-Progress Code [31.235862838381966]
We extend the notion of program graphs to work-in-progress code by learning to predict edge relations between tokens.
We consider the tasks of code completion and localizing and repairing variable misuse in a work-in-progress scenario.
We demonstrate that training relation-aware models with fine-tuned edges consistently leads to improved performance on both tasks.
arXiv Detail & Related papers (2021-05-28T18:12:22Z)
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Networks (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
- Online Reinforcement Learning Control by Direct Heuristic Dynamic Programming: from Time-Driven to Event-Driven [80.94390916562179]
Time-driven learning refers to the machine learning method that updates parameters in a prediction model continuously as new data arrives.
It is desirable to prevent the time-driven dHDP from updating due to insignificant system events such as noise.
We show how the event-driven dHDP algorithm works in comparison to the original time-driven dHDP.
arXiv Detail & Related papers (2020-06-16T05:51:25Z)
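As referenced in the CONCORD entry above, the snippet below is a minimal sketch of a clone-aware contrastive objective: a triplet margin loss that keeps a benign clone's embedding closer to a program's embedding than a deviant (e.g. buggy) variant. The encoder producing the embeddings, the margin value, and the random stand-in vectors are assumptions for illustration; CONCORD's actual loss and training setup may differ.

```python
# Minimal sketch of a clone-aware contrastive objective (not CONCORD's actual code).
# `anchor`, `positive`, and `negative` stand for embeddings of a program, a benign
# clone of it, and a deviant variant, as produced by some (assumed) code encoder.
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.5):
    """Hinge loss: keep the clone at least `margin` closer than the deviant."""
    d_pos = np.linalg.norm(anchor - positive)   # distance to the benign clone
    d_neg = np.linalg.norm(anchor - negative)   # distance to the deviant variant
    return max(0.0, d_pos - d_neg + margin)

# Toy usage with random stand-in embeddings.
rng = np.random.default_rng(0)
a, p, n = rng.normal(size=(3, 16))
print(triplet_margin_loss(a, p, n))
```

Minimizing this loss over many (program, clone, deviant) triples is what pulls benign clones together and pushes deviants apart in the representation space.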