CATBench: A Compiler Autotuning Benchmarking Suite for Black-box Optimization
- URL: http://arxiv.org/abs/2406.17811v1
- Date: Mon, 24 Jun 2024 20:15:04 GMT
- Title: CATBench: A Compiler Autotuning Benchmarking Suite for Black-box Optimization
- Authors: Jacob O. Tørring, Carl Hvarfner, Luigi Nardi, Magnus Själander
- Abstract summary: We present CATBench, a comprehensive benchmarking suite that captures the complexities of compiler autotuning.
The benchmarks in CATBench span a range of machine learning-oriented computations, from tensor algebra to image processing and clustering.
We validate CATBench on several state-of-the-art algorithms, revealing their strengths and weaknesses.
- Score: 5.909352339240516
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimization is a powerful method for automating the tuning of compilers. The complex landscape of autotuning provides a myriad of rarely considered structural challenges for black-box optimizers, and the lack of standardized benchmarks has limited the study of Bayesian optimization within the domain. To address this, we present CATBench, a comprehensive benchmarking suite that captures the complexities of compiler autotuning, ranging from discrete, conditional, and permutation parameter types to known and unknown binary constraints, as well as both multi-fidelity and multi-objective evaluations. The benchmarks in CATBench span a range of machine learning-oriented computations, from tensor algebra to image processing and clustering, and use state-of-the-art compilers, such as TACO and RISE/ELEVATE. CATBench offers a unified interface for evaluating Bayesian optimization algorithms, promoting reproducibility and innovation through an easy-to-use, fully containerized setup of both surrogate and real-world compiler optimization tasks. We validate CATBench on several state-of-the-art algorithms, revealing their strengths and weaknesses and demonstrating the suite's potential for advancing both Bayesian optimization and compiler autotuning research.
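Since the abstract describes a unified, containerized evaluation interface with mixed parameter types and unknown constraints, here is a hedged sketch of what driving such a suite with a trivial optimizer could look like. The class and method names (MockBenchmark, query) are illustrative assumptions, not the actual CATBench API.

```python
# Hypothetical sketch of a unified autotuning-benchmark interface in the
# spirit of CATBench; MockBenchmark and query are illustrative names,
# not the actual API.
import random

class MockBenchmark:
    """Stand-in for a containerized compiler-autotuning task."""
    parameters = {
        "tile_size": [8, 16, 32, 64],          # discrete parameter
        "loop_order": ["ijk", "ikj", "jik"],   # permutation-like choice
        "unroll": [1, 2, 4],                   # conditional on tile_size in a real task
    }

    def query(self, config, fidelity=1.0):
        # A real benchmark would compile and run the kernel; here we return
        # a synthetic runtime. Unknown binary constraints would surface as
        # failed compilations (signalled by None).
        if config["tile_size"] == 64 and config["unroll"] == 4:
            return None  # infeasible: mimics an unknown constraint
        return random.random() / fidelity

bench = MockBenchmark()
# Random search as the simplest black-box "optimizer" over the space:
best, best_cfg = float("inf"), None
for _ in range(50):
    cfg = {k: random.choice(v) for k, v in bench.parameters.items()}
    runtime = bench.query(cfg, fidelity=0.5)  # low-fidelity evaluation
    if runtime is not None and runtime < best:
        best, best_cfg = runtime, cfg
print(best_cfg, best)
```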
Related papers
- GloptiNets: Scalable Non-Convex Optimization with Certificates [61.50835040805378]
We present a novel approach to non-convex optimization with certificates, which handles smooth functions on the hypercube or on the torus.
By exploiting the regularity of the target function, intrinsic in the decay of its spectrum, we obtain precise certificates while leveraging the advanced and powerful computational tools of neural networks (see the sketch below).
arXiv Detail & Related papers (2023-06-26T09:42:59Z)
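As a reasoning aid, here is a standard way spectral decay yields optimization certificates; this is our paraphrase of the general idea, not the paper's exact construction.

```latex
% Sketch of the certificate idea (our paraphrase, not the paper's bound).
% For f on the torus with Fourier coefficients \hat f(\omega), truncating
% to low frequencies gives a model f_N with a computable uniform error:
\[
  \sup_x \bigl| f(x) - f_N(x) \bigr|
  \;\le\; \sum_{|\omega| > N} \bigl| \hat f(\omega) \bigr| \;=:\; \varepsilon_N,
\]
% so any bound on the minimum of f_N certifies the minimum of f up to
% \varepsilon_N, and fast spectral decay makes \varepsilon_N small:
\[
  \min_x f(x) \;\ge\; \min_x f_N(x) - \varepsilon_N .
\]
```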
- Performance Embeddings: A Similarity-based Approach to Automatic Performance Optimization [71.69092462147292]
Performance embeddings enable knowledge transfer of performance tuning between applications.
We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils; a minimal sketch of the idea follows.
arXiv Detail & Related papers (2023-03-14T15:51:35Z)
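A minimal sketch of the transfer-tuning idea under our own simplifying assumptions (a made-up embedding database and nearest-neighbor lookup; not the paper's actual pipeline):

```python
# Generic illustration of transfer tuning via embeddings: embed programs by
# a performance-feature vector, then reuse the tuned configuration of the
# nearest previously-tuned program.
import numpy as np

# Hypothetical database: embedding -> best-known tuning configuration.
embeddings = np.array([
    [0.9, 0.1, 0.0],   # dense linear algebra kernel
    [0.1, 0.8, 0.1],   # sparse kernel
    [0.0, 0.2, 0.9],   # stencil code
])
configs = [
    {"tile": 64, "vectorize": True},
    {"tile": 8,  "vectorize": False},
    {"tile": 32, "vectorize": True},
]

def transfer_config(new_embedding):
    """Return the configuration of the nearest neighbor in embedding space."""
    dists = np.linalg.norm(embeddings - new_embedding, axis=1)
    return configs[int(np.argmin(dists))]

print(transfer_config(np.array([0.05, 0.15, 0.85])))  # -> stencil config
```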
- Quantum-Inspired Optimization over Permutation Groups [0.2294014185517203]
Quantum-inspired optimization (QIO) algorithms are computational techniques that emulate certain quantum mechanical effects on classical hardware.
We develop an algorithmic framework, called Perm-QIO, to tailor QIO tools to solve arbitrary optimization problems defined over permutations; a generic classical baseline for such problems is sketched below.
arXiv Detail & Related papers (2022-12-06T00:02:39Z)
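For contrast, a plain classical baseline for optimization over permutations, simulated annealing with swap moves; this is a generic illustration, not the Perm-QIO algorithm:

```python
# Simulated annealing over permutations with swap moves (classical baseline).
import math
import random

def anneal_permutation(cost, n, steps=5000, t0=1.0):
    """Minimize cost(perm) over permutations of range(n) by swap moves."""
    perm = list(range(n))
    random.shuffle(perm)
    best, best_cost = perm[:], cost(perm)
    cur_cost = best_cost
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9  # linear cooling schedule
        i, j = random.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]
        new_cost = cost(perm)
        # Metropolis acceptance: always take improvements, sometimes worsenings.
        if new_cost <= cur_cost or random.random() < math.exp((cur_cost - new_cost) / t):
            cur_cost = new_cost
            if new_cost < best_cost:
                best, best_cost = perm[:], new_cost
        else:
            perm[i], perm[j] = perm[j], perm[i]  # undo rejected move
    return best, best_cost

# Toy objective: total distance of each element from its sorted position.
print(anneal_permutation(lambda p: sum(abs(v - i) for i, v in enumerate(p)), 8))
```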
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the search spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection; a toy version of searching over update-rule expressions is sketched below.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
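A toy version of the idea, random search over small expression trees that define gradient-update rules, evaluated on a one-dimensional quadratic; the grammar and evaluation loop are our own simplifications, far from the paper's super-tree Monte Carlo search:

```python
# Toy search over optimizer update rules expressed as small trees.
import math
import random

LEAVES = ["g", "m"]                     # gradient, momentum accumulator
UNARY = {"neg": lambda a: -a, "sign": lambda a: (a > 0) - (a < 0)}
BINARY = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def sample_tree(depth=2):
    """Sample a random expression tree over the tiny grammar above."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    if random.random() < 0.5:
        return (random.choice(list(UNARY)), sample_tree(depth - 1))
    return (random.choice(list(BINARY)), sample_tree(depth - 1), sample_tree(depth - 1))

def evaluate(tree, g, m):
    if tree == "g":
        return g
    if tree == "m":
        return m
    if len(tree) == 2:
        return UNARY[tree[0]](evaluate(tree[1], g, m))
    return BINARY[tree[0]](evaluate(tree[1], g, m), evaluate(tree[2], g, m))

def final_loss(tree, steps=100, lr=0.1):
    """Run the candidate update rule on f(x) = x^2 and report the final loss."""
    x, m = 5.0, 0.0
    for _ in range(steps):
        g = 2 * x                      # gradient of x^2
        m = 0.9 * m + 0.1 * g          # momentum accumulator
        x = x - lr * evaluate(tree, g, m)
        if not math.isfinite(x):       # reject diverging update rules
            return float("inf")
    return x * x

best = min((sample_tree() for _ in range(200)), key=final_loss)
print(best, final_loss(best))
```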
- Improving Nevergrad's Algorithm Selection Wizard NGOpt through Automated Algorithm Configuration [2.649483400176735]
State-of-the-art algorithm selection wizards are complex and difficult to improve.
We propose the use of automated configuration methods for improving their performance; basic NGOpt usage is sketched below.
arXiv Detail & Related papers (2022-09-09T17:28:10Z)
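For readers unfamiliar with the wizard being configured, a minimal example of calling NGOpt through nevergrad's public API; the objective and parameter bounds are made up, and this shows plain NGOpt usage rather than the paper's automated configuration:

```python
# Minimal NGOpt usage via nevergrad's public API.
import nevergrad as ng

def objective(x, lr):
    """Toy tuning target: prefer x near 0.3 and lr near 1e-2."""
    return (x - 0.3) ** 2 + abs(lr - 1e-2)

param = ng.p.Instrumentation(
    ng.p.Scalar(lower=0.0, upper=1.0),       # continuous parameter
    lr=ng.p.Log(lower=1e-4, upper=1e-1),     # log-scaled learning rate
)
optimizer = ng.optimizers.NGOpt(parametrization=param, budget=200)
recommendation = optimizer.minimize(objective)
print(recommendation.value)  # ((x,), {'lr': ...})
```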
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints; a simplified tree-ensemble surrogate sketch follows.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
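A simplified sketch of the general tree-ensemble-surrogate recipe: per-tree spread as an uncertainty proxy and acquisition optimization by enumerating a feasible candidate set. The objective, constraint, and candidate grid are invented for illustration; the paper's kernel-based formulation is more principled:

```python
# Tree-ensemble surrogate optimization with a known input constraint.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def f(x):  # black-box objective over one continuous + one integer variable
    return (x[0] - 0.5) ** 2 + 0.1 * (x[1] - 3) ** 2

def feasible(x):  # known input constraint
    return x[0] + 0.1 * x[1] <= 1.0

# Initial design over mixed features: x0 in [0,1], x1 integer in {0..5}.
X = np.array([[rng.random(), rng.integers(0, 6)] for _ in range(10)])
y = np.array([f(x) for x in X])

for _ in range(20):
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    cand = np.array([[a, b] for a in np.linspace(0, 1, 21) for b in range(6)])
    cand = cand[[feasible(c) for c in cand]]            # enforce known constraint
    per_tree = np.stack([t.predict(cand) for t in model.estimators_])
    lcb = per_tree.mean(axis=0) - per_tree.std(axis=0)  # lower confidence bound
    x_next = cand[int(np.argmin(lcb))]
    X = np.vstack([X, x_next]); y = np.append(y, f(x_next))

print(X[np.argmin(y)], y.min())
```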
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a powerful tool for many machine learning problems.
We propose stocBiO, a novel stochastic bilevel optimization algorithm built around a sample-efficient hypergradient estimator; the standard setup it targets is recalled below.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
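The standard bilevel setup and the hypergradient that such estimators approximate (textbook form, not the stocBiO-specific estimator):

```latex
% Bilevel problem: minimize the outer objective at the inner minimizer.
\[
  \min_{x} \; \Phi(x) = f\bigl(x, y^{*}(x)\bigr)
  \quad \text{s.t.} \quad
  y^{*}(x) = \arg\min_{y} \; g(x, y).
\]
% Under smoothness and strong convexity of g in y, the implicit function
% theorem gives the hypergradient that stochastic estimators approximate:
\[
  \nabla \Phi(x) = \nabla_x f(x, y^{*})
  - \nabla^2_{xy} g(x, y^{*}) \, \bigl[\nabla^2_{yy} g(x, y^{*})\bigr]^{-1} \nabla_y f(x, y^{*}).
\]
```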
- Black-Box Optimization Revisited: Improving Algorithm Selection Wizards through Massive Benchmarking [8.874754363200614]
Existing studies in black-box optimization for machine learning suffer from low generalizability.
We propose a benchmark suite, OptimSuite, which covers a broad range of black-box optimization problems.
The resulting algorithm selection wizard, ABBO, achieves competitive performance on all benchmark suites.
arXiv Detail & Related papers (2020-10-08T14:17:30Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools for maximizing the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call parameter-efficient circuit training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms; a classical block-coordinate analogue is sketched below.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
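A classical block-coordinate analogue of optimizing parameter subsets in sequence; the cost function and block sizes are invented, and PECT itself operates on quantum-circuit ansatz parameters rather than this toy objective:

```python
# Block-coordinate descent: optimize one parameter subset at a time.
import numpy as np

def cost(theta):
    """Toy stand-in for a variational energy landscape."""
    target = np.linspace(-1, 1, theta.size)
    return np.sum((theta - target) ** 2) + 0.1 * np.sum(np.cos(3 * theta))

theta = np.zeros(12)
blocks = np.array_split(np.arange(theta.size), 4)  # 4 parameter subsets

for sweep in range(5):
    for block in blocks:
        # Simple gradient descent restricted to the active block.
        for _ in range(50):
            eps = 1e-5
            grad = np.zeros_like(theta)
            for i in block:  # finite-difference gradient on active indices
                e = np.zeros_like(theta); e[i] = eps
                grad[i] = (cost(theta + e) - cost(theta - e)) / (2 * eps)
            theta -= 0.05 * grad

print(round(cost(theta), 4))
```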
- Black-box Mixed-Variable Optimisation using a Surrogate Model that Satisfies Integer Constraints [9.655888042539495]
Mixed-Variable ReLU-based Surrogate Modelling (MVRSM) is a surrogate-based algorithm that uses a linear combination of rectified linear units.
This method outperforms the state of the art on several synthetic benchmarks with up to 238 continuous and integer variables; a minimal sketch of such a ReLU-based surrogate follows the citation below.
arXiv Detail & Related papers (2020-06-08T12:27:18Z)
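A minimal sketch of a surrogate formed as a linear combination of ReLUs, fit by least squares on random features; this is our simplification of the MVRSM idea and omits the paper's integer-consistency construction:

```python
# Sum-of-ReLUs surrogate on random features for a mixed-variable problem.
import numpy as np

rng = np.random.default_rng(1)

def f(x):  # mixed-variable black box: x[0] continuous, x[1] integer
    return np.sin(3 * x[0]) + (x[1] - 2) ** 2

# Sample a small design: continuous in [0,1], integer in {0..4}.
X = np.column_stack([rng.random(30), rng.integers(0, 5, 30)])
y = np.apply_along_axis(f, 1, X)

# Random ReLU basis phi(x) = max(0, W x + b); only linear weights are fit.
W = rng.normal(size=(40, 2)); b = rng.normal(size=40)
Phi = np.maximum(0.0, X @ W.T + b)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

def surrogate(x):
    return np.maximum(0.0, x @ W.T + b) @ w

# Optimize the surrogate over a candidate grid that respects integrality.
cand = np.array([[a, k] for a in np.linspace(0, 1, 51) for k in range(5)])
x_best = cand[int(np.argmin([surrogate(c) for c in cand]))]
print(x_best, f(x_best))
```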
This list is automatically generated from the titles and abstracts of the papers on this site.