IOHanalyzer: Detailed Performance Analyses for Iterative Optimization
Heuristics
- URL: http://arxiv.org/abs/2007.03953v4
- Date: Mon, 3 Jan 2022 21:49:45 GMT
- Title: IOHanalyzer: Detailed Performance Analyses for Iterative Optimization
Heuristics
- Authors: Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, Thomas Bäck
- Abstract summary: IOHanalyzer is a new user-friendly tool for the analysis, comparison, and visualization of performance data of IOHs.
IOHanalyzer provides detailed statistics about fixed-target running times and about fixed-budget performance of the benchmarked algorithms.
IOHanalyzer can directly process performance data from the main benchmarking platforms.
- Score: 3.967483941966979
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Benchmarking and performance analysis play an important role in understanding
the behaviour of iterative optimization heuristics (IOHs) such as local search
algorithms, genetic and evolutionary algorithms, Bayesian optimization
algorithms, etc. This task, however, involves manual setup, execution, and
analysis of the experiment on an individual basis, which is laborious and can
be mitigated by a generic and well-designed platform. For this purpose, we
propose IOHanalyzer, a new user-friendly tool for the analysis, comparison, and
visualization of performance data of IOHs.
Implemented in R and C++, IOHanalyzer is fully open source. It is available
on CRAN and GitHub. IOHanalyzer provides detailed statistics about fixed-target
running times and about fixed-budget performance of the benchmarked algorithms
on real-valued, single-objective optimization tasks. Performance
aggregation over several benchmark problems is possible, for example in the
form of empirical cumulative distribution functions. Key advantages of
IOHanalyzer over other performance analysis packages are its highly interactive
design, which allows users to specify the performance measures, ranges, and
granularity that are most useful for their experiments, and the possibility to
analyze not only performance traces, but also the evolution of dynamic state
parameters.
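As a rough illustration of the fixed-target view described above, the following sketch uses the IOHanalyzer R interface. DataSetList and get_RT_summary are documented entry points of the CRAN package, but the data path and target values here are hypothetical, and argument names may differ between package versions.

```r
# Minimal sketch (assumed data path): fixed-target analysis with IOHanalyzer.
library(IOHanalyzer)

# Parse a folder of raw benchmark results into a DataSetList object.
dsl <- DataSetList("./my_experiment_data")

# Summarize running times at user-chosen target values (fixed-target view);
# the target precisions 1e-1, 1e-4, 1e-8 are illustrative.
rt_stats <- get_RT_summary(dsl, c(1e-1, 1e-4, 1e-8))
print(rt_stats)
```

The same DataSetList object feeds the fixed-budget statistics and the ECDF aggregation mentioned above, which reports, for each budget, the fraction of (run, target) pairs reaching the target within that budget.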
IOHanalyzer can directly process performance data from the main benchmarking
platforms, including the COCO platform, Nevergrad, the SOS platform, and
IOHexperimenter. An R programming interface is provided for users preferring to
have a finer control over the implemented functionalities.
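For users of the graphical interface or of other benchmarking platforms, a second hedged sketch: runServer is the documented CRAN entry point for the web GUI, while the format tag passed to DataSetList below is an assumption based on the platforms listed above (the package documentation lists the exact supported strings).

```r
# Minimal sketch: reading non-native data and launching the interactive GUI.
library(IOHanalyzer)

# Read performance data exported by the COCO platform; the "COCO" format
# tag is an assumption -- check ?DataSetList for the supported values.
coco_dsl <- DataSetList("./coco_results", format = "COCO")

# Fixed-budget summary at illustrative budgets of 100, 1000, and 10000
# function evaluations.
fv_stats <- get_FV_summary(coco_dsl, c(100, 1000, 10000))
print(fv_stats)

# Alternatively, skip the programming interface and explore the same data
# in the interactive web GUI.
runServer()
```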
Related papers
- Beyond Single-Model Views for Deep Learning: Optimization versus
Generalizability of Stochastic Optimization Algorithms [13.134564730161983]
This paper adopts a novel approach to deep learning optimization, focusing on stochastic gradient descent (SGD) and its variants.
We show that SGD and its variants demonstrate performance on par with flat-minima optimizers such as SAM, albeit with half the gradient evaluations.
Our study uncovers several key findings regarding the relationship between training loss and hold-out accuracy, as well as the comparable performance of SGD and noise-enabled variants.
arXiv Detail & Related papers (2024-03-01T14:55:22Z) - Performance Embeddings: A Similarity-based Approach to Automatic
Performance Optimization [71.69092462147292]
Performance embeddings enable knowledge transfer of performance tuning between applications.
We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils.
arXiv Detail & Related papers (2023-03-14T15:51:35Z) - OPTION: OPTImization Algorithm Benchmarking ONtology [4.060078409841919]
OPTION (OPTImization algorithm benchmarking ONtology) is a semantically rich, machine-readable data model for benchmarking platforms.
Our ontology provides the vocabulary needed for semantic annotation of the core entities involved in the benchmarking process.
It also provides means for automatic data integration, improved interoperability, and powerful querying capabilities.
arXiv Detail & Related papers (2022-11-21T10:34:43Z) - Perona: Robust Infrastructure Fingerprinting for Resource-Efficient Big
Data Analytics [0.06524460254566904]
We present Perona, a novel approach to robust infrastructure fingerprinting for usage in big data analytics.
Perona employs common sets and configurations of benchmarking tools for target resources, so that the resulting benchmark metrics are directly comparable and can be ranked.
We evaluate our approach both on data gathered from our own experiments as well as within related works for resource configuration optimization.
arXiv Detail & Related papers (2022-11-15T15:48:09Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Evolving Pareto-Optimal Actor-Critic Algorithms for Generalizability and
Stability [67.8426046908398]
Generalizability and stability are two key objectives for operating reinforcement learning (RL) agents in the real world.
This paper presents MetaPG, an evolutionary method for automated design of actor-critic loss functions.
arXiv Detail & Related papers (2022-04-08T20:46:16Z) - IOHexperimenter: Benchmarking Platform for Iterative Optimization
Heuristics [3.6980928405935813]
IOHexperimenter aims at providing an easy-to-use and highly customizable toolbox for benchmarking iterative optimization heuristics.
IOHexperimenter can be used as a stand-alone tool or as part of a benchmarking pipeline that uses other components of IOHprofiler such as IOHanalyzer.
arXiv Detail & Related papers (2021-11-07T13:11:37Z) - Comparative Code Structure Analysis using Deep Learning for Performance
Prediction [18.226950022938954]
This paper aims to assess the feasibility of using purely static information (e.g., abstract syntax tree or AST) of applications to predict performance change based on the change in code structure.
Our evaluations of several deep embedding learning methods demonstrate that tree-based Long Short-Term Memory (LSTM) models can leverage the hierarchical structure of source code to discover latent representations and achieve up to 84% (individual problem) and 73% (combined dataset with multiple problems) accuracy in predicting the change in performance.
arXiv Detail & Related papers (2021-02-12T16:59:12Z) - Towards More Fine-grained and Reliable NLP Performance Prediction [85.78131503006193]
We make two contributions to improving performance prediction for NLP tasks.
First, we examine performance predictors for holistic measures of accuracy like F1 or BLEU.
Second, we propose methods to understand the reliability of a performance prediction model from two angles: confidence intervals and calibration.
arXiv Detail & Related papers (2021-02-10T15:23:20Z) - Shared Space Transfer Learning for analyzing multi-site fMRI data [83.41324371491774]
Multi-voxel pattern analysis (MVPA) learns predictive models from task-based functional magnetic resonance imaging (fMRI) data.
MVPA works best with a well-designed feature set and an adequate sample size.
Most fMRI datasets are noisy, high-dimensional, expensive to collect, and with small sample sizes.
This paper proposes the Shared Space Transfer Learning (SSTL) as a novel transfer learning approach.
arXiv Detail & Related papers (2020-10-24T08:50:26Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a powerful tool for many machine learning problems.
We propose stocBiO, a novel algorithm featuring a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)