Abstract: We have developed several autotuning benchmarks in CUDA that take into
account performance-relevant source-code parameters and reach near-peak
performance on various GPU architectures. We used them during the development
and evaluation of a novel search method for tuning spaces proposed in .
With our framework Kernel Tuning Toolkit, freely available on GitHub, we
measured computation times and hardware performance counters on several GPUs
for the complete tuning spaces of five benchmarks. These data, which we provide
here, may benefit research on search algorithms for the tuning spaces of GPU
codes, as well as research on the relationship between applied code
optimizations, hardware performance counters, and the performance of GPU
kernels.
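To give an idea of the shape of these data, each record pairs one tuning
configuration with its measured computation time and performance counters. A
minimal sketch of loading and querying such a table follows; the file name and
column names are hypothetical illustrations, not the actual schema of our
dataset:

```python
import pandas as pd

# Hypothetical layout: one row per tuning configuration, holding the tuning
# parameters, the measured kernel time, and hardware performance counters.
space = pd.read_csv("gemm_tuning_space.csv")

# The empirical optimum of the complete tuning space is simply the row
# with the lowest measured time.
best = space.loc[space["time_ms"].idxmin()]
print("best configuration:", best.to_dict())

# Example of relating one counter to performance across the whole space.
print(space[["dram_read_transactions", "time_ms"]].corr())
```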
Moreover, we describe in detail the scripts we used for the robust evaluation
of our searcher and for its comparison with others. In particular, the script
that simulates tuning, i.e., replaces the time-demanding compilation and
execution of the tuned kernels with a quick lookup of the computation time in
our measured data, makes it possible to inspect the convergence of the tuning
search over a large number of experiments. These scripts, freely available
alongside our other code, make it easier to experiment with search algorithms
and to compare them in a robust way.
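To make the mechanism concrete, the following minimal sketch shows the idea of
simulated tuning under assumed inputs (it is not our actual script): the
precomputed tuning space stands in for compilation and execution, and random
search stands in for the searcher under test.

```python
import random

def simulate_tuning(space, budget, rng):
    """Simulated tuning: each 'measurement' is a lookup of the precomputed
    computation time instead of compiling and running the kernel."""
    configs = list(space.keys())
    best = float("inf")
    convergence = []
    for _ in range(budget):
        cfg = rng.choice(configs)   # placeholder for any search algorithm
        best = min(best, space[cfg])
        convergence.append(best)    # best time found so far
    return convergence

# Toy tuning space: configuration -> measured kernel time in milliseconds.
space = {("TILE=16", "UNROLL=1"): 1.9, ("TILE=32", "UNROLL=4"): 1.1}
print(simulate_tuning(space, budget=10, rng=random.Random(0)))
```

Because each simulated evaluation is just a table lookup, the convergence
curve can be averaged over a large number of repeated experiments.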
During our research, we generated models that predict the values of
performance counters from the values of the tuning parameters of our
benchmarks. Here, we provide the models themselves and describe the scripts we
implemented for their training. These models and scripts might benefit
researchers who want to reproduce or build on our research.
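As an illustration of what such a training script does, the sketch below fits
a generic regressor that maps tuning parameters to one performance counter;
the model family, file name, and column names are assumptions made for the
example, not our actual setup:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical dataset: tuning parameters as features, one counter as target.
data = pd.read_csv("gemm_tuning_space.csv")
X = data[["TILE_SIZE", "UNROLL_FACTOR", "VECTOR_WIDTH"]]
y = data["dram_read_transactions"]

# Hold out part of the tuning space to check how well the model generalizes
# to configurations it has not seen during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print("R^2 on held-out configurations:", model.score(X_test, y_test))
```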