A Reinforcement Learning Environment for Polyhedral Optimizations
- URL: http://arxiv.org/abs/2104.13732v2
- Date: Thu, 29 Apr 2021 08:04:04 GMT
- Title: A Reinforcement Learning Environment for Polyhedral Optimizations
- Authors: Alexander Brauckmann, Andrés Goens, Jeronimo Castrillon
- Abstract summary: We propose a shape-agnostic formulation for the space of legal transformations in the polyhedral model as a Markov Decision Process (MDP).
Instead of using transformations, the formulation is based on an abstract space of possible schedules.
Our generic MDP formulation enables using reinforcement learning to learn optimization policies over a wide range of loops.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The polyhedral model allows a structured way of defining semantics-preserving
transformations to improve the performance of a large class of loops. Finding
profitable points in this space is a hard problem which is usually approached
by heuristics that generalize from domain-expert knowledge. Existing problem
formulations in state-of-the-art heuristics depend on the shape of particular
loops, making it hard to leverage generic and more powerful optimization
techniques from the machine learning domain. In this paper, we propose PolyGym,
a shape-agnostic formulation for the space of legal transformations in the
polyhedral model as a Markov Decision Process (MDP). Instead of using
transformations, the formulation is based on an abstract space of possible
schedules. In this formulation, states model partial schedules, which are
constructed by actions that are reusable across different loops. With a simple
heuristic to traverse the space, we demonstrate that our formulation is
powerful enough to match and outperform state-of-the-art heuristics. On the
Polybench benchmark suite, we found transformations that led to a speedup of
3.39x over LLVM O3, which is 1.83x better than the speedup achieved by ISL. Our
generic MDP formulation enables using reinforcement learning to learn
optimization policies over a wide range of loops. This also contributes to the
emerging field of machine learning in compilers, as it exposes a novel problem
formulation that can push the limits of existing methods.
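The abstract describes states as partial schedules and actions as generic, loop-independent steps that extend them, with a reward only available once a complete schedule can be compiled and measured. A minimal toy sketch of such an environment is shown below; all class, method, and action names are illustrative assumptions, not the actual PolyGym API, and the terminal reward is a stub standing in for a measured speedup.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Generic actions, reusable across different loops: each one appends a fixed
# abstract coefficient to the partial schedule. (Illustrative only.)
COEFF = {"next_coeff_zero": 0, "next_coeff_one": 1, "next_coeff_neg_one": -1}

@dataclass
class ScheduleEnv:
    """Toy MDP over partial schedules for a loop nest of a given depth."""
    depth: int                               # coefficients to choose in total
    partial: List[int] = field(default_factory=list)

    def reset(self) -> Tuple[int, ...]:
        self.partial = []
        return tuple(self.partial)

    def step(self, action: str) -> Tuple[Tuple[int, ...], float, bool]:
        assert action in COEFF
        self.partial.append(COEFF[action])
        done = len(self.partial) == self.depth
        # In a real system the terminal reward would come from compiling the
        # completed schedule and measuring speedup over a baseline such as
        # LLVM -O3; here it is stubbed out.
        reward = self._stub_reward() if done else 0.0
        return tuple(self.partial), reward, done

    def _stub_reward(self) -> float:
        # Placeholder objective: count non-zero coefficients.
        return float(sum(1 for c in self.partial if c != 0))

env = ScheduleEnv(depth=3)
state = env.reset()
for a in ["next_coeff_one", "next_coeff_zero", "next_coeff_neg_one"]:
    state, reward, done = env.step(a)
print(state, reward, done)  # (1, 0, -1) 2.0 True
```

Because the actions are independent of any particular loop shape, the same environment interface can, in principle, drive either a simple traversal heuristic or a learned reinforcement-learning policy.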
Related papers
- Structure Language Models for Protein Conformation Generation [66.42864253026053]
Traditional physics-based simulation methods often struggle with sampling equilibrium conformations.
Deep generative models have shown promise in generating protein conformations as a more efficient alternative.
We introduce Structure Language Modeling as a novel framework for efficient protein conformation generation.
arXiv Detail & Related papers (2024-10-24T03:38:51Z)
- LOOPer: A Learned Automatic Code Optimizer For Polyhedral Compilers [1.7529897611426233]
We introduce LOOPer, the first polyhedral autoscheduler that uses a deep-learning based cost model.
It supports the exploration of a large set of affine transformations, allowing the application of complex sequences of polyhedral transformations.
It also supports the optimization of programs with multiple loop nests and with rectangular and non-rectangular iteration domains.
arXiv Detail & Related papers (2024-03-18T07:22:31Z)
- Machine Learning Optimized Orthogonal Basis Piecewise Polynomial Approximation [0.9208007322096533]
Piecewise Polynomials (PPs) are utilized in several engineering disciplines, like trajectory planning, to approximate position profiles given in the form of a set of points.
arXiv Detail & Related papers (2024-03-13T14:34:34Z)
- Universal Neural Functionals [67.80283995795985]
A challenging problem in many modern machine learning tasks is to process weight-space features.
Recent works have developed promising weight-space models that are equivariant to the permutation symmetries of simple feedforward networks.
This work proposes an algorithm that automatically constructs permutation equivariant models for any weight space.
arXiv Detail & Related papers (2024-02-07T20:12:27Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Progress Report: A Deep Learning Guided Exploration of Affine Unimodular Loop Transformations [1.5699353548228476]
We present a work in progress about a deep learning based approach for automatic code optimization in polyhedral compilers.
The proposed technique explores combinations of affine and non-affine loop transformations to find the sequence of transformations that minimizes the execution time of a given program.
Preliminary results show that the proposed technique achieves a 2.35x geometric mean speedup over state-of-the-art polyhedral compilers.
arXiv Detail & Related papers (2022-06-08T05:47:42Z)
- Differentiable Spline Approximations [48.10988598845873]
Differentiable programming has significantly enhanced the scope of machine learning.
Standard differentiable programming methods (such as autodiff) typically require that the machine learning models be differentiable.
We show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications.
arXiv Detail & Related papers (2021-10-04T16:04:46Z)
- ResNet-LDDMM: Advancing the LDDMM Framework Using Deep Residual Networks [86.37110868126548]
In this work, we make use of deep residual neural networks to solve the non-stationary ODE (flow equation) based on an Euler discretization scheme.
We illustrate these ideas on diverse registration problems of 3D shapes under complex topology-preserving transformations.
arXiv Detail & Related papers (2021-02-16T04:07:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.