PACE: A Program Analysis Framework for Continuous Performance Prediction
- URL: http://arxiv.org/abs/2312.00918v1
- Date: Fri, 1 Dec 2023 20:43:34 GMT
- Title: PACE: A Program Analysis Framework for Continuous Performance Prediction
- Authors: Chidera Biringa and Gokhan Kul
- Abstract summary: PACE is a program analysis framework that provides continuous feedback on the performance impact of pending code updates.
We design performance microbenchmarks by mapping the execution times of functional test cases to a given code update.
Our experiments achieved high accuracy in predicting code performance, outperforming the current state-of-the-art by 75% on neural-represented code stylometry features.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Software development teams establish elaborate continuous integration
pipelines containing automated test cases to accelerate the development process
of software. Automated tests help to verify the correctness of code
modifications, decreasing the response time to changing requirements. However,
when the software teams do not track the performance impact of pending
modifications, they may need to spend considerable time refactoring existing
code. This paper presents PACE, a program analysis framework that provides
continuous feedback on the performance impact of pending code updates. We
design performance microbenchmarks by mapping the execution times of functional
test cases to a given code update. We map microbenchmarks to code stylometry
features and feed them to predictors for performance predictions. Our
experiments achieved high accuracy in predicting code performance,
outperforming the current state-of-the-art by 75% on neural-represented code
stylometry features.
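The pipeline described above can be sketched in miniature. The three stylometry features and the nearest-neighbor predictor below are illustrative assumptions, not the paper's actual feature set or model: the idea is only that features extracted from updated code are mapped to an execution-time prediction learned from past microbenchmark runs.

```python
# Hedged sketch of PACE's prediction step (feature names and the
# 1-nearest-neighbor predictor are assumptions for illustration).

def extract_features(source: str) -> tuple:
    # Toy stylometry features: line count, mean line length, branch count.
    lines = source.splitlines() or [""]
    mean_len = sum(len(line) for line in lines) / len(lines)
    branches = sum(source.count(kw) for kw in ("if ", "for ", "while "))
    return (len(lines), mean_len, branches)

def predict_runtime(source: str, training: list) -> float:
    # training: (feature_vector, observed_runtime_seconds) pairs gathered
    # from microbenchmarks of earlier code updates.
    feats = extract_features(source)
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, runtime = min(training, key=lambda pair: sq_dist(pair[0], feats))
    return runtime
```

A pending update is then assigned the runtime of its stylometrically closest historical neighbor, giving continuous feedback without rerunning the full test suite.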
Related papers
- Verificarlo CI: continuous integration for numerical optimization and debugging [0.0]
We introduce Verificarlo CI, a continuous integration workflow for the numerical optimization and debugging of a code over the course of its development.
We demonstrate the applicability of Verificarlo CI on two test-case applications.
arXiv Detail & Related papers (2024-07-11T08:01:08Z)
- NExT: Teaching Large Language Models to Reason about Code Execution [50.93581376646064]
Large language models (LLMs) of code are typically trained on the surface textual form of programs.
We propose NExT, a method to teach LLMs to inspect the execution traces of programs and reason about their run-time behavior.
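The run-time signal NExT reasons over can be illustrated with a small tracer. This is not NExT's pipeline; it is a minimal sketch of what an execution trace looks like: per-line events with the local variable values at each step, the kind of information the method exposes to the model.

```python
import sys

# Illustrative sketch: record a per-line execution trace of a function
# using Python's tracing hook (not NExT's actual instrumentation).

def trace_lines(fn, *args):
    events = []
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            # Snapshot the line number and current local variables.
            events.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer
    sys.settrace(tracer)
    try:
        result = fn(*args)
    finally:
        sys.settrace(None)
    return result, events

def square_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total
```

Running `trace_lines(square_sum, 3)` yields both the result and a step-by-step trace of `total` and `i`, which a model can inspect to reason about run-time behavior.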
arXiv Detail & Related papers (2024-04-23T01:46:32Z)
- Analyzing the Influence of Processor Speed and Clock Speed on Remaining Useful Life Estimation of Software Systems [1.104960878651584]
This research extends the analysis to assess how changes in environmental attributes, such as operating system and clock speed, affect Remaining Useful Life (RUL) estimation in software.
Findings are rigorously validated using real performance data from controlled test beds and compared with predictive model-generated data.
This exploration yields actionable knowledge for software maintenance and optimization strategies.
arXiv Detail & Related papers (2023-09-22T04:46:34Z)
- FuzzyFlow: Leveraging Dataflow To Find and Squash Program Optimization Bugs [92.47146416628965]
FuzzyFlow is a fault localization and test case extraction framework designed to test program optimizations.
We leverage dataflow program representations to capture a fully reproducible system state and area-of-effect for optimizations.
To reduce testing time, we design an algorithm for minimizing test inputs, trading off memory for recomputation.
arXiv Detail & Related papers (2023-06-28T13:00:17Z)
- Teaching Large Language Models to Self-Debug [62.424077000154945]
Large language models (LLMs) have achieved impressive performance on code generation.
We propose Self-Debugging, which teaches a large language model to debug its predicted program via few-shot demonstrations.
arXiv Detail & Related papers (2023-04-11T10:43:43Z)
- Performance Embeddings: A Similarity-based Approach to Automatic Performance Optimization [71.69092462147292]
Performance embeddings enable knowledge transfer of performance tuning between applications.
We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils.
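The transfer step can be illustrated with a toy nearest-neighbor lookup. The embeddings and the tuning database below are made-up stand-ins (the paper's embeddings are learned from performance behavior); the sketch shows only the mechanism: embed a new code region, find the most similar previously tuned region, and reuse its optimization.

```python
import math

# Hedged sketch of transfer tuning via performance embeddings
# (embedding vectors and schedules are illustrative placeholders).

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def transfer_tune(embedding, tuned_db):
    # tuned_db: (embedding, best_known_schedule) pairs for tuned regions.
    _, schedule = max(tuned_db, key=lambda es: cosine(es[0], embedding))
    return schedule
```

A region from a new application thus inherits the tuning decision of its nearest neighbor in embedding space, which is what makes cross-application knowledge transfer possible.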
arXiv Detail & Related papers (2023-03-14T15:51:35Z)
- Learning Performance-Improving Code Edits [107.21538852090208]
We introduce a framework for adapting large language models (LLMs) to high-level program optimization.
First, we curate a dataset of over 77,000 pairs of competitive C++ programming submissions, capturing performance-improving edits made by human programmers.
For prompting, we propose retrieval-based few-shot prompting and chain-of-thought; for finetuning, we use performance-conditioned generation and synthetic data augmentation based on self-play.
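Retrieval-based few-shot prompting of this kind can be sketched as follows. The prompt template and the token-overlap similarity are illustrative assumptions, not the paper's setup: the idea is to retrieve the training pair whose slow program most resembles the query and place that slow/fast pair in the prompt as a demonstration.

```python
# Hedged sketch of retrieval-based few-shot prompt construction for
# program optimization (template and similarity metric are assumptions).

def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def build_prompt(query_src: str, edit_pairs: list) -> str:
    # edit_pairs: (slow_src, fast_src) pairs from human programmers.
    slow, fast = max(edit_pairs, key=lambda p: jaccard(p[0], query_src))
    return (
        "# Slower version:\n" + slow +
        "\n# Optimized version:\n" + fast +
        "\n# Slower version:\n" + query_src +
        "\n# Optimized version:\n"
    )
```

The model is then asked to continue the prompt, optimizing the query program by analogy with the retrieved demonstration.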
arXiv Detail & Related papers (2023-02-15T18:59:21Z)
- FixEval: Execution-based Evaluation of Program Fixes for Programming Problems [23.987104440395576]
We introduce FixEval, a benchmark comprising buggy code submissions to competitive programming problems and their corresponding fixes.
FixEval offers an extensive collection of unit tests to evaluate the correctness of model-generated program fixes.
Our experiments show that match-based metrics do not reflect model-generated program fixes accurately.
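The gap between match-based and execution-based metrics is easy to demonstrate. The two candidate fixes below are illustrative: one is textually identical to the reference, the other only renames variables, yet both are functionally correct, so exact match undercounts valid fixes while unit tests accept them.

```python
# Sketch contrasting a match-based metric with execution-based
# evaluation (the programs and tests here are toy examples).

reference_fix = "def add(a, b):\n    return a + b\n"
candidate     = "def add(x, y):\n    return x + y\n"  # renamed variables

def exact_match(cand: str, ref: str) -> bool:
    # Match-based metric: string equality with the reference fix.
    return cand == ref

def passes_unit_tests(cand: str, tests: list) -> bool:
    # Execution-based metric: run the fix against unit tests.
    env = {}
    exec(cand, env)
    return all(env["add"](a, b) == out for (a, b), out in tests)

unit_tests = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
```

Here `exact_match` rejects the renamed candidate even though it passes every unit test, which is the inaccuracy of match-based metrics the experiments point to.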
arXiv Detail & Related papers (2022-06-15T20:18:43Z)
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach in the code completion task in Python and Java programming languages, achieving a state-of-the-art performance on CodeXGLUE benchmark.
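The retrieval half of such a framework can be sketched in a few lines. Everything here is a simplification: lexical token overlap stands in for the retriever, and returning the retrieved continuation directly stands in for the completion model, which would normally consume the retrieved code as extra context.

```python
# Hedged sketch of retrieval-augmented completion: fetch the database
# snippet most lexically similar to the unfinished code and reuse its
# continuation as completion context (names are illustrative).

def overlap(a: str, b: str) -> int:
    return len(set(a.split()) & set(b.split()))

def retrieve_context(prefix: str, database: list) -> str:
    # database: (code_prefix, continuation) pairs from a code corpus.
    best_prefix, best_continuation = max(
        database, key=lambda pc: overlap(pc[0], prefix))
    return best_continuation

def complete(prefix: str, database: list) -> str:
    # A real system would feed prefix + retrieved context to a language
    # model; this sketch returns the retrieved continuation directly.
    return retrieve_context(prefix, database)
```

In the full framework, the model also copies lexically from the retrieved snippet, combining verbatim reuse with semantic similarity.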
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.