TEP-GNN: Accurate Execution Time Prediction of Functional Tests using Graph Neural Networks
- URL: http://arxiv.org/abs/2208.11947v1
- Date: Thu, 25 Aug 2022 09:08:32 GMT
- Title: TEP-GNN: Accurate Execution Time Prediction of Functional Tests using Graph Neural Networks
- Authors: Hazem Peter Samoaa, Antonio Longa, Mazen Mohamad, Morteza Haghir Chehreghani and Philipp Leitner
- Abstract summary: We propose a predictive model, dubbed TEP-GNN, which demonstrates that high-accuracy performance prediction is possible.
TEP-GNN uses FA-ASTs, or flow-augmented ASTs, as a graph-based code representation approach.
We evaluate TEP-GNN using four real-life Java open source programs, based on 922 test files mined from the projects' public repositories.
- Score: 5.899031548148629
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting the performance of production code prior to actually executing or
benchmarking it is known to be highly challenging. In this paper, we propose a
predictive model, dubbed TEP-GNN, which demonstrates that high-accuracy
performance prediction is possible for the special case of predicting unit test
execution times. TEP-GNN uses FA-ASTs, or flow-augmented ASTs, as a graph-based
code representation approach, and predicts test execution times using a
powerful graph neural network (GNN) deep learning model. We evaluate TEP-GNN
using four real-life Java open source programs, based on 922 test files mined
from the projects' public repositories. We find that our approach achieves a
high Pearson correlation of 0.789, considerably outperforming a baseline deep
learning model. However, we also find that more work is needed for trained
models to generalize to unseen projects. Our work demonstrates that FA-ASTs and
GNNs are a feasible approach for predicting absolute performance values, and
serves as an important intermediary step towards being able to predict the
performance of arbitrary code prior to execution.
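
To make the described pipeline concrete, here is a minimal sketch of a graph-level execution-time regressor, assuming PyTorch Geometric as the framework; the node encoding, layer types, and pooling are illustrative choices, not the authors' exact TEP-GNN architecture.

import torch
from torch_geometric.nn import SAGEConv, global_mean_pool

# Sketch only: nodes carry AST node-type ids, and edge_index is assumed to mix
# AST parent/child edges with added control/data-flow edges (the "flow
# augmentation"). Trained with, e.g., MSE against measured execution times.
class TestTimeRegressor(torch.nn.Module):
    def __init__(self, num_node_types, hidden=64):
        super().__init__()
        self.embed = torch.nn.Embedding(num_node_types, hidden)
        self.conv1 = SAGEConv(hidden, hidden)
        self.conv2 = SAGEConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, node_type, edge_index, batch):
        x = torch.relu(self.conv1(self.embed(node_type), edge_index))
        x = torch.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)   # one vector per test-file graph
        return self.head(x).squeeze(-1)  # predicted (log) execution time
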
Related papers
- FR-NAS: Forward-and-Reverse Graph Predictor for Efficient Neural Architecture Search [10.699485270006601]
We introduce a novel Graph Neural Network (GNN) predictor for Neural Architecture Search (NAS).
This predictor renders neural architectures into vector representations by combining both the conventional and inverse graph views.
The experimental results showcase a significant improvement in prediction accuracy, with a 3% to 16% increase in Kendall-tau correlation.
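
A minimal sketch of the forward-and-reverse idea, again assuming PyTorch Geometric; the fusion by concatenation and the specific layers are assumptions, not FR-NAS's exact design.

import torch
from torch_geometric.nn import GCNConv, global_mean_pool

class ForwardReversePredictor(torch.nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.fwd = GCNConv(in_dim, hidden)  # reads the architecture DAG as stored
        self.rev = GCNConv(in_dim, hidden)  # reads the same DAG with edges flipped
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(2 * hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1))     # predicted architecture score

    def forward(self, x, edge_index, batch):
        rev_index = edge_index.flip(0)      # inverse view: swap source/target rows
        hf = torch.relu(self.fwd(x, edge_index))
        hr = torch.relu(self.rev(x, rev_index))
        h = torch.cat([global_mean_pool(hf, batch),
                       global_mean_pool(hr, batch)], dim=-1)
        return self.mlp(h).squeeze(-1)
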
arXiv Detail & Related papers (2024-04-24T03:22:49Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
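
A small sketch of that embedding-extraction step, using Hugging Face Transformers; the model name and mean pooling are illustrative assumptions (the PEFT fine-tuning is assumed to have already happened).

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint
lm = AutoModel.from_pretrained("bert-base-uncased")       # assumed fine-tuned LM

@torch.no_grad()
def node_embedding(text):
    enc = tok(text, truncation=True, return_tensors="pt")
    out = lm(**enc)
    # mean-pool the last hidden states over non-padding tokens
    mask = enc["attention_mask"].unsqueeze(-1)
    return (out.last_hidden_state * mask).sum(1) / mask.sum(1)
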
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Uncertainty Quantification over Graph with Conformalized Graph Neural Networks [52.20904874696597]
Graph Neural Networks (GNNs) are powerful machine learning prediction models on graph-structured data.
However, GNNs lack rigorous uncertainty estimates, limiting their reliable deployment in settings where the cost of errors is significant.
We propose conformalized GNN (CF-GNN), extending conformal prediction (CP) to graph-based models for guaranteed uncertainty estimates.
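
The basic split-conformal step that CF-GNN builds on can be sketched as below; CF-GNN itself goes further, adjusting nonconformity scores with a topology-aware correction model, which is omitted here.

import torch

def conformal_interval(preds_cal, y_cal, preds_test, alpha=0.1):
    # nonconformity score: absolute residual on held-out calibration nodes
    scores = (preds_cal - y_cal).abs()
    n = scores.numel()
    q_level = min(1.0, (n + 1) * (1 - alpha) / n)  # finite-sample correction
    qhat = torch.quantile(scores, q_level)
    # intervals with (1 - alpha) marginal coverage under exchangeability
    return preds_test - qhat, preds_test + qhat
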
arXiv Detail & Related papers (2023-05-23T21:38:23Z)
- PerfSAGE: Generalized Inference Performance Predictor for Arbitrary Deep Learning Models on Edge Devices [8.272409756443539]
This paper describes PerfSAGE, a novel graph neural network (GNN) that predicts inference latency, energy, and memory footprint on an arbitrary DNN TFLite graph.
Trained on a dataset of edge-device measurements, PerfSAGE achieves state-of-the-art prediction accuracy, with a Mean Absolute Percentage Error of 5% across all targets and model search spaces.
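
A sketch of the multi-target pattern such a predictor implies: one pooled graph embedding feeding separate regression heads. Layer choices and names are assumptions, not PerfSAGE's actual architecture.

import torch
from torch_geometric.nn import SAGEConv, global_mean_pool

class MultiTargetPerf(torch.nn.Module):
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.conv = SAGEConv(in_dim, hidden)
        self.heads = torch.nn.ModuleDict(
            {t: torch.nn.Linear(hidden, 1)
             for t in ("latency", "energy", "memory")})

    def forward(self, x, edge_index, batch):
        h = global_mean_pool(torch.relu(self.conv(x, edge_index)), batch)
        return {t: head(h).squeeze(-1) for t, head in self.heads.items()}
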
arXiv Detail & Related papers (2023-01-26T08:59:15Z)
- Invertible Neural Networks for Graph Prediction [22.140275054568985]
In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
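
For intuition, a toy affine coupling block, the standard invertible building unit; the paper's graph-specific construction is more involved, and the feature dimension here is assumed even.

import torch

class AffineCoupling(torch.nn.Module):
    def __init__(self, dim):  # dim must be even in this toy version
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim // 2, dim), torch.nn.ReLU(),
            torch.nn.Linear(dim, dim))  # outputs scale and shift, dim//2 each

    def forward(self, x):  # exact, analytically invertible transform
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)

    def inverse(self, y):  # recovers x exactly from y = forward(x)
        y1, y2 = y.chunk(2, dim=-1)
        s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)
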
arXiv Detail & Related papers (2022-06-02T17:28:33Z)
- High-Level Synthesis Performance Prediction using GNNs: Benchmarking, Modeling, and Advancing [21.8349113634555]
Agile hardware development requires fast and accurate circuit quality evaluation from early design stages.
We propose a rapid and accurate performance modeling approach, exploiting the representation power of graph neural networks (GNNs) by representing C/C++ programs as graphs.
Our proposed predictor outperforms HLS by up to 40X and surpasses existing predictors by 2X to 5X in terms of resource usage and timing prediction.
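
As an illustration of the program-as-graph input such predictors consume, a toy dataflow graph packed into a PyTorch Geometric Data object; the operation types and edges are made up for the example.

import torch
from torch_geometric.data import Data

# toy operation types: 0=load, 1=mul, 2=add, 3=store
node_type = torch.tensor([0, 0, 1, 2, 3])    # two loads, a mul, an add, a store
edges = [(0, 2), (1, 2), (2, 3), (3, 4)]     # dataflow: producer -> consumer
edge_index = torch.tensor(edges, dtype=torch.long).t().contiguous()
graph = Data(x=node_type.unsqueeze(-1).float(), edge_index=edge_index)
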
arXiv Detail & Related papers (2022-01-18T09:53:48Z)
- An Adaptive Graph Pre-training Framework for Localized Collaborative Filtering [79.17319280791237]
We propose an adaptive graph pre-training framework for localized collaborative filtering (ADAPT).
ADAPT does not require transferring user/item embeddings, and is able to capture both the common knowledge across different graphs and the uniqueness of each graph.
arXiv Detail & Related papers (2021-12-14T06:53:13Z)
- Towards More Fine-grained and Reliable NLP Performance Prediction [85.78131503006193]
We make two contributions to improving performance prediction for NLP tasks.
First, we examine performance predictors for holistic measures of accuracy like F1 or BLEU.
Second, we propose methods to understand the reliability of a performance prediction model from two angles: confidence intervals and calibration.
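
One way to obtain such a confidence interval is a simple bootstrap over the predictor's errors; this is a generic sketch of the theme, not the paper's specific method.

import numpy as np

def bootstrap_ci(errors, n_boot=1000, alpha=0.05, seed=0):
    # resample prediction errors with replacement and collect each sample mean
    rng = np.random.default_rng(seed)
    means = [rng.choice(errors, size=len(errors), replace=True).mean()
             for _ in range(n_boot)]
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])
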
arXiv Detail & Related papers (2021-02-10T15:23:20Z)
- Combining Label Propagation and Simple Models Out-performs Graph Neural Networks [52.121819834353865]
We show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs by combining label propagation with simple models.
We call this overall procedure Correct and Smooth (C&S).
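
A minimal sketch of the "correct" step: residuals of a simple base predictor on the training nodes are diffused over a normalized adjacency (the paper adds a second "smooth" pass over the corrected predictions).

import torch

def correct(base_pred, y_onehot, train_mask, adj_norm, steps=10, alpha=0.8):
    # base_pred: [N, C] soft predictions from a simple model (e.g. an MLP)
    # adj_norm:  [N, N] symmetrically normalized adjacency (dense or sparse)
    resid = torch.zeros_like(base_pred)
    resid[train_mask] = y_onehot[train_mask] - base_pred[train_mask]
    e = resid.clone()
    for _ in range(steps):
        e = alpha * (adj_norm @ e) + (1 - alpha) * resid  # diffuse known errors
    return base_pred + e
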
arXiv Detail & Related papers (2020-10-27T02:10:52Z)
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Networks (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
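
A toy version of the soft instruction pointer at the heart of IPA-GNN: probability mass over statements is advanced along control-flow successors using learned branch scores (heavily simplified; shapes and names are illustrative).

import torch

def step_pointer(p, succ, branch_logits):
    # p: [n] soft instruction pointer (probability mass per statement)
    # succ: [n, n] 0/1 control-flow successor matrix; every row is assumed to
    #   have at least one successor (e.g. a self-loop on the exit statement)
    # branch_logits: [n, n] learned scores for choosing among successors
    masked = branch_logits.masked_fill(succ == 0, float("-inf"))
    trans = torch.softmax(masked, dim=-1)  # per-statement successor distribution
    return p @ trans                       # advance the pointer one step
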
arXiv Detail & Related papers (2020-10-23T19:12:30Z)