MAGPIE: Machine Automated General Performance Improvement via Evolution
of Software
- URL: http://arxiv.org/abs/2208.02811v1
- Date: Thu, 4 Aug 2022 17:58:43 GMT
- Title: MAGPIE: Machine Automated General Performance Improvement via Evolution
of Software
- Authors: Aymeric Blot and Justyna Petke
- Abstract summary: MAGPIE is a unified software improvement framework.
It provides a common edit-sequence-based representation that isolates the search process from the specific improvement technique.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Performance is one of the most important qualities of software. Several
techniques have thus been proposed to improve it, such as program
transformations, optimisation of software parameters, or compiler flags. Many
automated software improvement approaches use similar search strategies to
explore the space of possible improvements, yet available tooling only focuses
on one approach at a time. This makes comparisons and exploration of
interactions of the various types of improvement impractical.
We propose MAGPIE, a unified software improvement framework. It provides a
common edit-sequence-based representation that isolates the search process from
the specific improvement technique, enabling a much simplified synergistic
workflow. We provide a case study using a basic local search to compare
compiler optimisation, algorithm configuration, and genetic improvement. We
chose running time as our efficiency measure and evaluated our approach on four
real-world software systems, written in C, C++, and Java.
Our results show that, used independently, all techniques find significant
running time improvements: up to 25% for compiler optimisation, 97% for
algorithm configuration, and 61% for evolving source code using genetic
improvement. We also show that up to 10% further increase in performance can be
obtained with partial combinations of the variants found by the different
techniques. Furthermore, the common representation also enables simultaneous
exploration of all techniques, providing a competitive alternative to using
each technique individually.
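The core idea above, a single edit-sequence representation driving a generic search loop, can be sketched as follows. This is a minimal illustrative sketch, not MAGPIE's actual API: the edit pool, the configuration dictionary, and the fitness function are all invented for the example.

```python
import random

# A patch is a sequence of abstract edits. The search loop never needs to
# know whether an edit flips a compiler flag, sets a parameter, or (in the
# full approach) mutates source code.
EDIT_POOL = [
    ("compiler_flag", "-O3"),
    ("compiler_flag", "-funroll-loops"),
    ("param", ("cache_size", 2048)),
    ("param", ("threads", 4)),
]

def apply_patch(base_config, patch):
    """Apply an edit sequence to a baseline configuration (illustrative)."""
    config = dict(base_config)
    config["flags"] = list(config.get("flags", []))
    for kind, payload in patch:
        if kind == "compiler_flag":
            config["flags"].append(payload)
        elif kind == "param":
            name, value = payload
            config[name] = value
    return config

def local_search(base_config, fitness, steps=50, seed=0):
    """First-improvement local search over edit sequences."""
    rng = random.Random(seed)
    patch = []
    best = fitness(apply_patch(base_config, patch))
    for _ in range(steps):
        # Neighbourhood move: append a random edit, or drop an existing one.
        candidate = list(patch)
        if candidate and rng.random() < 0.5:
            candidate.pop(rng.randrange(len(candidate)))
        else:
            candidate.append(rng.choice(EDIT_POOL))
        score = fitness(apply_patch(base_config, candidate))
        if score < best:  # lower running time is better
            patch, best = candidate, score
    return patch, best
```

Because the search manipulates only opaque edits, plugging in a different improvement technique means supplying a different edit pool and `apply_patch`, while the search code stays untouched.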
Related papers
- Should AI Optimize Your Code? A Comparative Study of Current Large Language Models Versus Classical Optimizing Compilers [0.0]
Large Language Models (LLMs) raise intriguing questions about the potential for AI-driven approaches to revolutionize code optimization methodologies.
This paper presents a comparative analysis between two state-of-the-art Large Language Models, GPT-4.0 and CodeLlama-70B, and traditional optimizing compilers.
arXiv Detail & Related papers (2024-06-17T23:26:41Z)
- Iterative or Innovative? A Problem-Oriented Perspective for Code Optimization [81.88668100203913]
Large language models (LLMs) have demonstrated strong capabilities in solving a wide range of programming tasks.
In this paper, we explore code optimization with a focus on performance enhancement, specifically aiming to optimize code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z)
- Learning Performance-Improving Code Edits [107.21538852090208]
We introduce a framework for adapting large language models (LLMs) to high-level program optimization.
First, we curate a dataset of over 77,000 competitive C++ programming submission pairs, capturing performance-improving edits made by human programmers.
For prompting, we propose retrieval-based few-shot prompting and chain-of-thought; for finetuning, we use performance-conditioned generation and synthetic data augmentation based on self-play.
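Retrieval-based few-shot prompting, as summarised above, can be sketched roughly as follows: retrieve the most similar slow/fast program pairs from a corpus and prepend them to the prompt. The corpus, similarity measure, and prompt format here are illustrative assumptions, not the paper's actual setup.

```python
from collections import Counter
import math

def bow_cosine(a, b):
    """Cosine similarity over bag-of-words token counts (a crude stand-in
    for whatever retriever a real system would use)."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(query_code, corpus, k=2):
    """corpus: list of (slow_code, fast_code) pairs; the k most similar
    pairs become in-context examples before the query."""
    ranked = sorted(corpus, key=lambda pair: bow_cosine(query_code, pair[0]),
                    reverse=True)
    parts = []
    for slow, fast in ranked[:k]:
        parts.append(f"# slower version:\n{slow}\n# optimized version:\n{fast}\n")
    parts.append(f"# slower version:\n{query_code}\n# optimized version:\n")
    return "\n".join(parts)
```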
arXiv Detail & Related papers (2023-02-15T18:59:21Z)
- Massively Parallel Genetic Optimization through Asynchronous Propagation of Populations [50.591267188664666]
Propulate is an evolutionary optimization algorithm and software package for global optimization.
We provide an MPI-based implementation of our algorithm, which features variants of selection, mutation, crossover, and migration.
We find that Propulate is up to three orders of magnitude faster without sacrificing solution accuracy.
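The island-model pattern behind this kind of optimiser — independent subpopulations that periodically exchange their best individuals — can be illustrated with a single-process sketch. Propulate itself is MPI-based and asynchronous; everything below (operators, parameters, the ring topology) is a simplified assumption for illustration.

```python
import random

def evolve_islands(fitness, dim=2, islands=4, pop=8, gens=30,
                   migrate_every=5, seed=1):
    """Minimise `fitness` with several islands plus ring migration."""
    rng = random.Random(seed)
    # Each island keeps its own population of real-valued candidates.
    pops = [[[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
            for _ in range(islands)]
    for gen in range(gens):
        for isl in pops:
            isl.sort(key=fitness)
            parents = isl[: pop // 2]            # truncation selection
            children = []
            for _ in range(pop - len(parents)):
                a, b = rng.sample(parents, 2)
                # Averaging crossover plus Gaussian mutation.
                children.append([(x + y) / 2 + rng.gauss(0, 0.3)
                                 for x, y in zip(a, b)])
            isl[:] = parents + children
        if gen % migrate_every == 0:
            # Ring migration: each island sends its best to the next one.
            bests = [min(isl, key=fitness) for isl in pops]
            for i, isl in enumerate(pops):
                isl[rng.randrange(pop)] = list(bests[(i - 1) % islands])
    return min((min(isl, key=fitness) for isl in pops), key=fitness)
```

In a parallel implementation each island would run in its own process, with migration happening via message passing instead of direct list writes.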
arXiv Detail & Related papers (2023-01-20T18:17:34Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Learning to Superoptimize Real-world Programs [79.4140991035247]
We propose a framework to learn to superoptimize real-world programs by using neural sequence-to-sequence models.
We introduce the Big Assembly benchmark, a dataset consisting of over 25K real-world functions mined from open-source projects in x86-64 assembly.
arXiv Detail & Related papers (2021-09-28T05:33:21Z)
- Particle Swarm Optimization: Fundamental Study and its Application to Optimization and to Jetty Scheduling Problems [0.0]
The advantages of evolutionary algorithms with respect to traditional methods have been greatly discussed in the literature.
While particle swarms share such advantages, they outperform evolutionary algorithms in requiring lower computational cost and being easier to implement.
This paper does not intend to study their tuning; general-purpose settings are taken from previous studies, and virtually the same algorithm is used to optimize a variety of notably different problems.
arXiv Detail & Related papers (2021-01-25T02:06:30Z)
- Transferable Graph Optimizers for ML Compilers [18.353830282858834]
We propose an end-to-end, transferable deep reinforcement learning method for computational graph optimization (GO).
GO generates decisions on the entire graph rather than on each individual node autoregressively, drastically speeding up the search compared to prior methods.
GO achieves 21% improvement over human experts and 18% improvement over the prior state of the art with 15x faster convergence.
arXiv Detail & Related papers (2020-10-21T20:28:33Z)
- A survey on dragonfly algorithm and its applications in engineering [29.190512851078218]
The dragonfly algorithm was developed in 2016. It has since been used by researchers to optimize a wide range of applications in various areas.
This work addresses the robustness of the method in solving real-world optimization issues, as well as its deficiencies on complex optimization problems.
arXiv Detail & Related papers (2020-02-19T20:23:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.