IOHexperimenter: Benchmarking Platform for Iterative Optimization Heuristics
- URL: http://arxiv.org/abs/2111.04077v2
- Date: Sun, 17 Apr 2022 20:11:17 GMT
- Title: IOHexperimenter: Benchmarking Platform for Iterative Optimization Heuristics
- Authors: Jacob de Nobel, Furong Ye, Diederick Vermetten, Hao Wang, Carola Doerr, Thomas Bäck
- Abstract summary: IOHexperimenter aims at providing an easy-to-use and highly customizable toolbox for benchmarking iterative optimization heuristics.
IOHexperimenter can be used as a stand-alone tool or as part of a benchmarking pipeline that uses other components of IOHprofiler such as IOHanalyzer.
- Score: 3.6980928405935813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present IOHexperimenter, the experimentation module of the IOHprofiler
project, which aims at providing an easy-to-use and highly customizable toolbox
for benchmarking iterative optimization heuristics such as local search,
evolutionary and genetic algorithms, Bayesian optimization techniques, etc.
IOHexperimenter can be used as a stand-alone tool or as part of a benchmarking
pipeline that uses other components of IOHprofiler such as IOHanalyzer, the
module for interactive performance analysis and visualization. IOHexperimenter
provides an efficient interface between optimization problems and their solvers
while allowing for granular logging of the optimization process. These logs are
fully compatible with existing tools for interactive data analysis, which
significantly speeds up the deployment of a benchmarking pipeline. The main
components of IOHexperimenter are the environment to build customized problem
suites and the various logging options that allow users to steer the
granularity of the data records.
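The problem–solver interface with granular logging that the abstract describes can be illustrated with a minimal sketch. This is not the actual IOHexperimenter API; the class and function names below are purely illustrative stand-ins for the pattern of wrapping an objective function so that every evaluation is recorded:

```python
import math
import random

class Problem:
    """Illustrative stand-in for a benchmark problem: wraps an objective
    function and records every evaluation, mimicking the granular logging
    described in the abstract. Not the actual IOHexperimenter API."""

    def __init__(self, objective, dimension):
        self.objective = objective
        self.dimension = dimension
        self.evaluations = 0
        self.best = math.inf
        self.log = []  # one (evaluation count, f(x)) record per call

    def __call__(self, x):
        self.evaluations += 1
        y = self.objective(x)
        self.best = min(self.best, y)
        self.log.append((self.evaluations, y))
        return y

def random_search(problem, budget, rng):
    """A trivial solver: the solver only sees the callable problem,
    while the wrapper transparently logs each evaluation."""
    for _ in range(budget):
        x = [rng.uniform(-5, 5) for _ in range(problem.dimension)]
        problem(x)
    return problem.best

sphere = Problem(lambda x: sum(v * v for v in x), dimension=5)
best = random_search(sphere, budget=100, rng=random.Random(42))
```

The point of the pattern is that logging lives entirely on the problem side of the interface, so any solver that treats the problem as a black-box callable is benchmarked without modification.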
Related papers
- From Exploration to Mastery: Enabling LLMs to Master Tools via Self-Driven Interactions [60.733557487886635]
This paper focuses on bridging the comprehension gap between Large Language Models and external tools.
We propose a novel framework, DRAFT, aimed at Dynamically refining tool documentation.
Extensive experiments on multiple datasets demonstrate that DRAFT's iterative, feedback-based refinement significantly improves documentation quality.
arXiv Detail & Related papers (2024-10-10T17:58:44Z) - Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment [104.18002641195442]
We introduce Self-Augmented Preference Optimization (SAPO), an effective and scalable training paradigm that does not require existing paired data.
Building on the self-play concept, which autonomously generates negative responses, we further incorporate an off-policy learning pipeline to enhance data exploration and exploitation.
arXiv Detail & Related papers (2024-05-31T14:21:04Z) - Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient [52.2669490431145]
PropEn is inspired by 'matching', which enables implicit guidance without training a discriminator.
We show that training with a matched dataset approximates the gradient of the property of interest while remaining within the data distribution.
arXiv Detail & Related papers (2024-05-28T11:30:19Z) - Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - OPTION: OPTImization Algorithm Benchmarking ONtology [4.060078409841919]
OPTION (OPTImization algorithm benchmarking ONtology) is a semantically rich, machine-readable data model for benchmarking platforms.
Our ontology provides the vocabulary needed for semantic annotation of the core entities involved in the benchmarking process.
It also provides means for automatic data integration, improved interoperability, and powerful querying capabilities.
arXiv Detail & Related papers (2022-11-21T10:34:43Z) - Deep Visual Geo-localization Benchmark [42.675402470265674]
We propose a new open-source benchmarking framework for Visual Geo-localization (VG).
This framework allows users to build, train, and test a wide range of commonly used architectures.
Code and trained models are available at https://deep-vg-bench.herokuapp.com/.
arXiv Detail & Related papers (2022-04-07T13:47:49Z) - Extensible Logging and Empirical Attainment Function for IOHexperimenter [0.0]
IOHexperimenter provides a large set of synthetic problems, a logging system, and a fast implementation.
We implement a new logger, which aims at computing performance metrics of an algorithm across a benchmark.
We provide some common statistics on the Empirical Attainment Function and its discrete counterpart, the Empirical Attainment Histogram.
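The Empirical Attainment Function mentioned above has a standard definition: for a minimization problem, the fraction of runs that reach a given quality target within a given evaluation budget. A minimal sketch, assuming each run is logged as a list of (evaluation count, objective value) pairs, and not reflecting the actual logger implementation described in the paper:

```python
def empirical_attainment(runs, budget, target):
    """Fraction of runs that attain an objective value <= target
    (minimization) within `budget` evaluations.
    `runs` is a list of runs; each run is a list of
    (evaluation count, objective value) pairs."""
    attained = sum(
        1 for run in runs
        if any(e <= budget and f <= target for e, f in run)
    )
    return attained / len(runs)

# Two illustrative runs: the first reaches 1.0 at evaluation 3,
# the second reaches 2.0 only at evaluation 4.
runs = [
    [(1, 5.0), (2, 3.0), (3, 1.0)],
    [(1, 4.0), (2, 2.5), (4, 2.0)],
]
```

Sweeping `budget` and `target` over a grid of values yields the two-dimensional attainment surface that the logger described here aggregates across a benchmark.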
arXiv Detail & Related papers (2021-09-28T14:52:52Z) - Automated Evolutionary Approach for the Design of Composite Machine Learning Pipelines [48.7576911714538]
The proposed approach aims to automate the design of composite machine learning pipelines.
It designs the pipelines with a customizable graph-based structure, analyzes the obtained results, and reproduces them.
The software implementation on this approach is presented as an open-source framework.
arXiv Detail & Related papers (2021-06-26T23:19:06Z) - IOHanalyzer: Detailed Performance Analyses for Iterative Optimization Heuristics [3.967483941966979]
IOHanalyzer is a new user-friendly tool for the analysis, comparison, and visualization of performance data of IOHs.
IOHanalyzer provides detailed statistics about fixed-target running times and about fixed-budget performance of the benchmarked algorithms.
IOHanalyzer can directly process performance data from the main benchmarking platforms.
arXiv Detail & Related papers (2020-07-08T08:20:19Z)
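The fixed-target running times that IOHanalyzer reports are commonly summarized as the expected running time (ERT): total evaluations spent across all runs divided by the number of runs that reached the target. A sketch assuming the same (evaluation count, objective value) log format as above; this illustrates the metric, not IOHanalyzer's implementation:

```python
def expected_running_time(runs, target, budget):
    """Fixed-target ERT for minimization: evaluations spent across all
    runs (failed runs are charged the full budget), divided by the
    number of runs that reached `target`. Returns infinity if no run
    succeeded. Each run is a list of (evaluations, value) pairs."""
    total_evals = 0
    successes = 0
    for run in runs:
        hit = next((e for e, f in run if f <= target), None)
        if hit is not None:
            total_evals += hit
            successes += 1
        else:
            total_evals += budget
    return total_evals / successes if successes else float("inf")

# Both runs reach the target 1.0, at evaluations 10 and 20 respectively.
runs_ok = [[(3, 5.0), (10, 0.5)], [(4, 2.0), (20, 0.9)]]
# Only the first run succeeds; the second is charged the full budget.
runs_fail = [[(10, 0.5)], [(50, 2.0)]]
```

Sweeping the target over a range of values produces the fixed-target curves that the tools in this list visualize; fixing the budget instead and asking for the best value reached gives the complementary fixed-budget view.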
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.