OPTION: OPTImization Algorithm Benchmarking ONtology
- URL: http://arxiv.org/abs/2211.11332v1
- Date: Mon, 21 Nov 2022 10:34:43 GMT
- Title: OPTION: OPTImization Algorithm Benchmarking ONtology
- Authors: Ana Kostovska, Diederick Vermetten, Carola Doerr, Sašo Džeroski, Panče Panov, Tome Eftimov
- Abstract summary: OPTION (OPTImization algorithm benchmarking ONtology) is a semantically rich, machine-readable data model for benchmarking platforms.
Our ontology provides the vocabulary needed for semantic annotation of the core entities involved in the benchmarking process.
It also provides means for automatic data integration, improved interoperability, and powerful querying capabilities.
- Score: 4.060078409841919
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many optimization algorithm benchmarking platforms allow users to share their
experimental data to promote reproducible and reusable research. However,
different platforms use different data models and formats, which drastically
complicates the identification of relevant datasets, their interpretation, and
their interoperability. Therefore, a semantically rich, ontology-based,
machine-readable data model that can be used by different platforms is highly
desirable. In this paper, we report on the development of such an ontology,
which we call OPTION (OPTImization algorithm benchmarking ONtology). Our
ontology provides the vocabulary needed for semantic annotation of the core
entities involved in the benchmarking process, such as algorithms, problems,
and evaluation measures. It also provides means for automatic data integration,
improved interoperability, and powerful querying capabilities, thereby
increasing the value of the benchmarking data. We demonstrate the utility of
OPTION by annotating and querying a corpus of benchmark performance data from
the BBOB collection of the COCO framework and from the Yet Another Black-Box
Optimization Benchmark (YABBOB) family of the Nevergrad environment. In
addition, we integrate features of the BBOB functional performance landscape
into the OPTION knowledge base using publicly available datasets with
exploratory landscape analysis. Finally, we integrate the OPTION knowledge base
into the IOHprofiler environment and provide users with the ability to perform
meta-analysis of performance data.
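To illustrate the querying capabilities, the sketch below loads an RDF dump of an OPTION-style knowledge base with Python's rdflib and runs a SPARQL query for algorithm performance on a given problem. The file name and the opt: IRIs are hypothetical placeholders for illustration, not the actual OPTION vocabulary.

    # Minimal sketch of querying an OPTION-style knowledge base with rdflib.
    # The file name and IRIs are illustrative placeholders, not the actual
    # OPTION vocabulary.
    from rdflib import Graph

    g = Graph()
    g.parse("option_kb.ttl", format="turtle")  # hypothetical RDF dump

    # Find algorithms evaluated on a given benchmark problem.
    query = """
    PREFIX opt: <http://example.org/option#>
    SELECT ?algorithm ?measure ?value
    WHERE {
        ?run opt:evaluatesAlgorithm ?algorithm ;
             opt:onProblem opt:BBOB_f1 ;
             opt:hasMeasure ?measure ;
             opt:hasValue ?value .
    }
    """
    for row in g.query(query):
        print(row.algorithm, row.measure, row.value)

Because the annotations share one vocabulary, the same query runs unchanged over data that originated from different platforms, which is the interoperability benefit the abstract describes.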
Related papers
- Revisiting BPR: A Replicability Study of a Common Recommender System Baseline [78.00363373925758]
We study the features of the BPR model, indicating their impact on its performance, and investigate open-source BPR implementations.
Our analysis reveals inconsistencies between these implementations and the original BPR paper, leading to a significant decrease in performance of up to 50% for specific implementations.
We show that the BPR model can achieve performance levels close to state-of-the-art methods on the top-n recommendation tasks and even outperform them on specific datasets.
arXiv Detail & Related papers (2024-09-21T18:39:53Z)
- Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment [104.18002641195442]
We introduce Self-Augmented Preference Optimization (SAPO), an effective and scalable training paradigm that does not require existing paired data.
Building on the self-play concept, which autonomously generates negative responses, we further incorporate an off-policy learning pipeline to enhance data exploration and exploitation.
arXiv Detail & Related papers (2024-05-31T14:21:04Z)
- Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient [52.2669490431145]
PropEn is inspired by 'matching', which enables implicit guidance without training a discriminator.
We show that training with a matched dataset approximates the gradient of the property of interest while remaining within the data distribution.
arXiv Detail & Related papers (2024-05-28T11:30:19Z)
- MA-BBOB: Many-Affine Combinations of BBOB Functions for Evaluating AutoML Approaches in Noiseless Numerical Black-Box Optimization Contexts [0.8258451067861933]
(MA-)BBOB is built on the publicly available IOHprofiler platform.
It provides access to the interactive IOHanalyzer module for performance analysis and visualization, and enables comparisons with the rich and growing data collection available for the (MA-)BBOB functions.
arXiv Detail & Related papers (2023-06-18T19:32:12Z)
- DataPerf: Benchmarks for Data-Centric AI Development [81.03754002516862]
DataPerf is a community-led benchmark suite for evaluating ML datasets and data-centric algorithms.
We provide an open, online platform with multiple rounds of challenges to support this iterative development.
The benchmarks, online evaluation platform, and baseline implementations are open source.
arXiv Detail & Related papers (2022-07-20T17:47:54Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces (an analogous imputation sketch appears after this list).
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- IOHexperimenter: Benchmarking Platform for Iterative Optimization Heuristics [3.6980928405935813]
IOHexperimenter aims at providing an easy-to-use and highly customizable toolbox for benchmarking iterative optimization heuristics (a usage sketch appears after this list).
IOHexperimenter can be used as a stand-alone tool or as part of a benchmarking pipeline that uses other components of IOHprofiler such as IOHanalyzer.
arXiv Detail & Related papers (2021-11-07T13:11:37Z)
- OPTION: OPTImization Algorithm Benchmarking ONtology [4.060078409841919]
OPTION (OPTImization algorithm benchmarking ONtology) is a semantically rich, machine-readable data model for benchmarking algorithms.
Our ontology provides the vocabulary needed for semantic annotation of the core entities involved in the benchmarking process.
It also provides means for automated data integration, improved interoperability, powerful querying capabilities, and reasoning.
arXiv Detail & Related papers (2021-04-24T06:11:30Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- IOHanalyzer: Detailed Performance Analyses for Iterative Optimization Heuristics [3.967483941966979]
IOHanalyzer is a new user-friendly tool for the analysis, comparison, and visualization of performance data of iterative optimization heuristics (IOHs).
IOHanalyzer provides detailed statistics about fixed-target running times and about fixed-budget performance of the benchmarked algorithms (a sketch of both statistics appears after this list).
IOHanalyzer can directly process performance data from the main benchmarking platforms.
arXiv Detail & Related papers (2020-07-08T08:20:19Z)
- StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics [4.237343083490243]
In machine learning (ML), ensemble methods such as bagging, boosting, and stacking are widely established approaches.
StackGenVis is a visual analytics system for stacked generalization (a stacking sketch appears after this list).
arXiv Detail & Related papers (2020-05-04T15:43:55Z)
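For the HyperImpute entry above, the idea of column-wise iterative imputation can be sketched with scikit-learn's IterativeImputer. This is a minimal analogue, not HyperImpute's own API, which additionally selects the per-column models automatically.

    # Minimal sketch of iterative, column-wise imputation (analogous to the
    # idea behind HyperImpute; this uses scikit-learn, not HyperImpute's API).
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    X = np.array([[1.0, 2.0], [3.0, np.nan], [np.nan, 6.0], [8.0, 9.0]])

    # Each column with missing values is modeled as a function of the other
    # columns, and all columns are re-estimated over several rounds.
    imputer = IterativeImputer(max_iter=10, random_state=0)
    print(imputer.fit_transform(X))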
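For the IOHexperimenter entry, a minimal usage sketch with the ioh Python package is shown below. Argument names have shifted between ioh releases, so treat the exact signature as an assumption rather than a pinned API.

    # Minimal sketch of benchmarking with the IOHexperimenter Python package
    # ("pip install ioh"); the get_problem signature varies across releases.
    import random
    import ioh

    # BBOB problem f1 (Sphere), instance 1, in dimension 5.
    problem = ioh.get_problem("Sphere", instance=1, dimension=5)

    # Evaluate a trivial random search for a small budget.
    best = float("inf")
    for _ in range(100):
        x = [random.uniform(-5, 5) for _ in range(problem.meta_data.n_variables)]
        best = min(best, problem(x))
    print(best, problem.state.evaluations)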
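For the IOHanalyzer entry, the two reported statistics can be illustrated directly on raw run data. A minimal sketch, assuming each run records the best-so-far objective value after every evaluation:

    # Minimal sketch of the two views IOHanalyzer reports, computed from raw
    # runs (each run: best-so-far objective value after each evaluation).
    import numpy as np

    runs = np.array([
        [9.0, 5.0, 3.0, 1.0, 0.5],
        [8.0, 6.0, 2.0, 1.5, 0.8],
    ])

    # Fixed-budget: best value reached after a given number of evaluations.
    budget = 3
    print("fixed-budget:", runs[:, budget - 1])

    # Fixed-target: first evaluation at which a target value is hit.
    target = 2.0
    hits = runs <= target
    running_times = np.where(hits.any(axis=1), hits.argmax(axis=1) + 1, np.nan)
    print("fixed-target:", running_times)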
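For the StackGenVis entry, stacked generalization itself (the technique the tool analyzes, not its visual interface) can be sketched with scikit-learn's StackingClassifier:

    # Minimal sketch of stacked generalization (the technique StackGenVis
    # analyzes), using scikit-learn; StackGenVis itself is a visual tool.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Base learners feed their cross-validated predictions to a meta-learner.
    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)), ("svc", SVC())],
        final_estimator=LogisticRegression(),
    )
    stack.fit(X_train, y_train)
    print(stack.score(X_test, y_test))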
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences of its use.