Pattern-Based Peephole Optimizations with Java JIT Tests
- URL: http://arxiv.org/abs/2403.11283v1
- Date: Sun, 17 Mar 2024 17:41:01 GMT
- Title: Pattern-Based Peephole Optimizations with Java JIT Tests
- Authors: Zhiqiang Zang, Aditya Thimmaiah, Milos Gligoric
- Abstract summary: JOG enables developers to write a pattern, in Java itself, that specifies desired code transformations.
JOG translates each pattern into C/C++ code that can be integrated as a JIT optimization pass.
JOG can automatically detect a possible shadow relation between a pair of optimizations.
- Score: 5.438042555846583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present JOG, a framework that facilitates developing Java JIT peephole optimizations alongside JIT tests. JOG enables developers to write a pattern, in Java itself, that specifies desired code transformations by writing code before and after the optimization, as well as any necessary preconditions. Such patterns can be written in the same way that tests of the optimization are already written in OpenJDK. JOG translates each pattern into C/C++ code that can be integrated as a JIT optimization pass. JOG also generates Java tests for optimizations from patterns. Furthermore, JOG can automatically detect a possible shadow relation between a pair of optimizations, where the effect of the shadowed optimization is overridden by the other. Our evaluation shows that JOG makes it easier to write readable JIT optimizations alongside tests without decreasing the effectiveness of JIT optimizations. We wrote 162 patterns, including 68 existing optimizations in OpenJDK, 92 new optimizations adapted from LLVM, and two new optimizations that we proposed. We opened eight pull requests (PRs) for OpenJDK, including six for new optimizations, one on removing shadowed optimizations, and one for newly generated JIT tests; seven PRs have already been integrated into the master branch of OpenJDK.
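The before/after pattern style described in the abstract can be illustrated with a minimal plain-Java sketch. Note this is a hypothetical illustration, not JOG's actual pattern syntax or API: the class and method names are invented here, and a real JOG pattern also carries preconditions and is translated into a C/C++ JIT pass. The sketch shows the core idea that the "before" shape and its "after" replacement are both ordinary Java code whose agreement a generated JIT test can check.

```java
// Hypothetical sketch of a peephole pattern in the spirit of JOG's
// before/after style. Names are illustrative, not JOG's real API.
public class PeepholeSketch {
    // Code shape the JIT should match: (x + y) - y
    static int before(int x, int y) {
        return (x + y) - y;
    }

    // Semantically equivalent replacement the JIT should emit: x
    // (holds for all ints because Java int arithmetic wraps on overflow)
    static int after(int x, int y) {
        return x;
    }

    public static void main(String[] args) {
        // A generated JIT test would check that the two forms agree,
        // including on overflow-wrapping inputs.
        int[] samples = {0, 1, -1, Integer.MAX_VALUE, Integer.MIN_VALUE};
        for (int x : samples) {
            for (int y : samples) {
                if (before(x, y) != after(x, y)) {
                    throw new AssertionError("mismatch at x=" + x + ", y=" + y);
                }
            }
        }
        System.out.println("all sample inputs agree");
    }
}
```

Writing both halves as ordinary Java is what lets the same pattern double as a JIT test: the two methods can be compiled and compared directly, overflow corner cases included.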
Related papers
- Compiler Optimization Testing Based on Optimization-Guided Equivalence Transformations [3.2987550056134873]
We propose a metamorphic testing approach inspired by compiler optimizations.
Our approach first employs tailored code construction strategies to generate input programs that satisfy optimization conditions.
By comparing the outputs of pre- and post-transformation programs, this approach effectively identifies incorrect optimization bugs.
arXiv Detail & Related papers (2025-04-06T01:37:57Z) - Iterative or Innovative? A Problem-Oriented Perspective for Code Optimization [81.88668100203913]
Large language models (LLMs) have demonstrated strong capabilities in solving a wide range of programming tasks.
In this paper, we explore code optimization with a focus on performance enhancement, specifically aiming to optimize code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z) - CompilerDream: Learning a Compiler World Model for General Code Optimization [58.87557583347996]
We introduce CompilerDream, a model-based reinforcement learning approach to general code optimization.
It comprises a compiler world model that accurately simulates the intrinsic properties of optimization passes and an agent trained on this model to produce effective optimization strategies.
It excels across diverse datasets, surpassing LLVM's built-in optimizations and other state-of-the-art methods in both settings of value prediction and end-to-end code optimization.
arXiv Detail & Related papers (2024-04-24T09:20:33Z) - Unleashing the Potential of Large Language Models as Prompt Optimizers: Analogical Analysis with Gradient-based Model Optimizers [108.72225067368592]
We propose a novel perspective to investigate the design of large language models (LLMs)-based prompts.
We identify two pivotal factors in model parameter learning: update direction and update method.
We develop a capable Gradient-inspired Prompt-based GPO.
arXiv Detail & Related papers (2024-02-27T15:05:32Z) - Large Language Models for Compiler Optimization [22.52765975286403]
We present a transformer model trained from scratch to optimize LLVM assembly for code size.
We ask the model to predict the instruction counts before and after optimization, and the optimized code itself.
Our approach achieves a 3.0% improvement in reducing instruction counts over the compiler.
arXiv Detail & Related papers (2023-09-11T22:11:46Z) - Large Language Models as Optimizers [106.52386531624532]
We propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers.
In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values.
We demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks.
arXiv Detail & Related papers (2023-09-07T00:07:15Z) - SparseOptimizer: Sparsify Language Models through Moreau-Yosida Regularization and Accelerate via Compiler Co-design [0.685316573653194]
This paper introduces SparseOptimizer, a novel deep learning optimizer that exploits Moreau-Yosida regularization to induce sparsity in large language models such as BERT, ALBERT, and GPT.
SparseOptimizer's plug-and-play functionality eradicates the need for code modifications, making it a universally adaptable tool for a wide array of large language models.
Empirical evaluations on benchmark datasets such as GLUE, RACE, SQuAD1, and SQuAD2 confirm that models sparsified using SparseOptimizer achieve performance comparable to their dense counterparts.
arXiv Detail & Related papers (2023-06-27T17:50:26Z) - ALT: Breaking the Wall between Graph and Operator Level Optimizations for Deep Learning Compilation [38.8918502461244]
ALT is a compiler that performs joint graph- and operator-level optimizations for deep models.
ALT significantly outperforms state-of-the-art compilers (e.g., Ansor) in terms of both single operator performance and end-to-end inference performance.
arXiv Detail & Related papers (2022-10-22T11:09:36Z) - Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z) - MAGPIE: Machine Automated General Performance Improvement via Evolution of Software [19.188864062289433]
MAGPIE is a unified software improvement framework.
It provides a common edit sequence based representation that isolates the search process from the specific improvement technique.
arXiv Detail & Related papers (2022-08-04T17:58:43Z) - Learning to Superoptimize Real-world Programs [79.4140991035247]
We propose a framework to learn to superoptimize real-world programs by using neural sequence-to-sequence models.
We introduce the Big Assembly benchmark, a dataset consisting of over 25K real-world functions mined from open-source projects in x86-64 assembly.
arXiv Detail & Related papers (2021-09-28T05:33:21Z) - Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.