Autotuning Search Space for Loop Transformations
- URL: http://arxiv.org/abs/2010.06521v1
- Date: Tue, 13 Oct 2020 16:26:57 GMT
- Title: Autotuning Search Space for Loop Transformations
- Authors: Michael Kruse, Hal Finkel, Xingfu Wu
- Abstract summary: We propose a loop transformation search space that takes the form of a tree.
We implemented a simple autotuner exploring the search space and applied it to a selected set of PolyBench kernels.
- Score: 0.03683202928838612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the challenges for optimizing compilers is to predict whether applying
an optimization will improve the program's execution speed. Programmers may override the
compiler's profitability heuristic using optimization directives such as
pragmas in the source code. Machine learning in the form of autotuning can
assist users in finding the best optimizations for each platform.
In this paper we propose a loop transformation search space that takes the
form of a tree, in contrast to previous approaches that usually use vector
spaces to represent loop optimization configurations. We implemented a simple
autotuner exploring the search space and applied it to a selected set of
PolyBench kernels. While the autotuner is capable of representing every
possible sequence of loop transformations and their relations, the results
motivate the use of better search strategies such as Monte Carlo tree search to
find sophisticated loop transformations such as multilevel tiling.
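To make the pragma mechanism concrete, here is a minimal, hypothetical Python sketch of how an autotuner might render one candidate configuration as pragma-annotated C source. The kernel, the parameter names, and the render_variant helper are illustrative assumptions, not code from the paper; the two directives shown are standard Clang loop pragmas.

```python
# A minimal, hypothetical sketch: an autotuner can override the compiler's
# profitability heuristic by prepending Clang loop pragmas to a loop nest.
# The kernel and helper below are illustrative, not the paper's code.

KERNEL_TEMPLATE = """\
{pragmas}
for (int i = 0; i < n; i++)
    a[i] = b[i] * c[i];
"""

def render_variant(unroll_count: int, vectorize: bool) -> str:
    """Render one candidate configuration as pragma-annotated C source."""
    pragmas = [
        f"#pragma clang loop unroll_count({unroll_count})",
        f"#pragma clang loop vectorize({'enable' if vectorize else 'disable'})",
    ]
    return KERNEL_TEMPLATE.format(pragmas="\n".join(pragmas))

print(render_variant(unroll_count=4, vectorize=True))
```

Several of the related papers below tune parameters of a similar but larger Clang/Polly pragma set in exactly this render-then-compile fashion.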
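The tree-shaped search space itself can also be illustrated with a short hypothetical sketch: each node is a (partially) transformed loop nest, each edge applies one more transformation, and arbitrarily deep descents express sequences such as multilevel tiling. The names below (Node, TRANSFORMATIONS, random_descent) are invented for illustration and are not the paper's mctree implementation.

```python
import random

# Illustrative sketch of a tree-shaped loop-transformation search space.
# Each node is a transformed loop nest; each child applies one more
# transformation, so the tree has unbounded depth (tiling a tile loop
# again yields multilevel tiling). Not the paper's actual implementation.

TRANSFORMATIONS = ("tile", "interchange", "unroll", "vectorize", "parallelize")

class Node:
    def __init__(self, applied=()):
        self.applied = tuple(applied)  # transformation sequence from the root

    def children(self):
        return [Node(self.applied + (t,)) for t in TRANSFORMATIONS]

def random_descent(depth):
    """A naive baseline explorer: follow `depth` random edges from the root."""
    node = Node()
    for _ in range(depth):
        node = random.choice(node.children())
    return node

print(random_descent(3).applied)  # e.g. ('tile', 'tile', 'unroll')
```

Unlike a fixed-length vector encoding, such a tree can represent any sequence of transformations and their relations, which is what motivates tree-aware strategies such as Monte Carlo tree search in the abstract above.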
Related papers
- CompilerDream: Learning a Compiler World Model for General Code Optimization [58.87557583347996]
We introduce CompilerDream, a model-based reinforcement learning approach to general code optimization.
It comprises a compiler world model that accurately simulates the intrinsic properties of optimization passes and an agent trained on this model to produce effective optimization strategies.
It excels across diverse datasets, surpassing LLVM's built-in optimizations and other state-of-the-art methods in both the value-prediction and end-to-end code-optimization settings.
arXiv Detail & Related papers (2024-04-24T09:20:33Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Performance Embeddings: A Similarity-based Approach to Automatic Performance Optimization [71.69092462147292]
Performance embeddings enable knowledge transfer of performance tuning between applications.
We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils.
arXiv Detail & Related papers (2023-03-14T15:51:35Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Searching for More Efficient Dynamic Programs [61.79535031840558]
We describe a set of program transformations, a simple metric for assessing the efficiency of a transformed program, and a search procedure to improve this metric.
We show that in practice, automated search can find substantial improvements to the initial program.
arXiv Detail & Related papers (2021-09-14T20:52:55Z)
- Autotuning PolyBench Benchmarks with LLVM Clang/Polly Loop Optimization Pragmas Using Bayesian Optimization (extended version) [0.8070511670572696]
We use LLVM Clang/Polly loop optimization pragmas to optimize PolyBench benchmarks.
We then use an autotuning framework to optimize the pragma parameters and improve their performance.
We also present loop autotuning that requires no prior knowledge from the user, using a simple mctree autotuning framework to further improve the performance of the Floyd-Warshall benchmark.
arXiv Detail & Related papers (2021-04-27T14:46:57Z)
- Learning to Make Compiler Optimizations More Effective [11.125012960514471]
LoopLearner predicts which way of writing a loop will lead to efficient compiled code.
We evaluate LoopLearner with 1,895 loops from various performance-relevant benchmarks.
arXiv Detail & Related papers (2021-02-24T10:42:56Z)
- Autotuning PolyBench Benchmarks with LLVM Clang/Polly Loop Optimization Pragmas Using Bayesian Optimization [0.6583716093321499]
Autotuning is an approach that explores a search space of possible implementations/configurations of a kernel or an application.
We develop an autotuning framework that leverages Bayesian optimization to explore the parameter search space (a generic sketch of such a tuning loop follows this list).
arXiv Detail & Related papers (2020-10-15T22:09:42Z)
- Static Neural Compiler Optimization via Deep Reinforcement Learning [1.458855293397494]
In this paper, we employ a deep reinforcement learning approach to the phase-ordering problem.
Provided with sub-sequences constituting LLVM's O3 sequence, our agent learns to outperform the O3 sequence on the set of source codes used for training.
We believe that the models trained using our approach can be integrated into modern compilers as neural optimization agents.
arXiv Detail & Related papers (2020-08-20T13:16:29Z)
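Several of the entries above, the Bayesian-optimization autotuning papers in particular, share the same outer loop: propose a pragma configuration, compile, measure, and keep the best. The hedged Python sketch below uses plain random proposals as a stand-in for the Bayesian or tree-search proposal step; the file names, compiler flags, and helper names are assumptions, not code from any of the papers.

```python
import os
import random
import subprocess
import tempfile
import time

# Hedged sketch of the generic compile-and-measure autotuning loop; a
# Bayesian optimizer or Monte Carlo tree search would replace the random
# proposal step below. All names and flags here are illustrative.

def measure(source: str) -> float:
    """Compile a complete C program with Clang and return its runtime."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "kernel.c")
        exe = os.path.join(tmp, "kernel")
        with open(src, "w") as f:
            f.write(source)
        subprocess.run(["clang", "-O3", src, "-o", exe], check=True)
        start = time.perf_counter()
        subprocess.run([exe], check=True)
        return time.perf_counter() - start

def tune(render, trials=20):
    """`render` maps a configuration dict to complete C source text."""
    best_cfg, best_time = None, float("inf")
    for _ in range(trials):
        cfg = {"unroll_count": random.choice([1, 2, 4, 8])}  # proposal step
        elapsed = measure(render(cfg))
        if elapsed < best_time:
            best_cfg, best_time = cfg, elapsed
    return best_cfg, best_time
```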
This list is automatically generated from the titles and abstracts of the papers on this site.