GNBG: A Generalized and Configurable Benchmark Generator for Continuous
Numerical Optimization
- URL: http://arxiv.org/abs/2312.07083v1
- Date: Tue, 12 Dec 2023 09:04:34 GMT
- Title: GNBG: A Generalized and Configurable Benchmark Generator for Continuous
Numerical Optimization
- Authors: Danial Yazdani (1), Mohammad Nabi Omidvar (2), Delaram Yazdani (3),
Kalyanmoy Deb (4), and Amir H. Gandomi (1,5) ((1) Faculty of Engineering &
Information Technology, University of Technology Sydney, (2) School of
Computing, University of Leeds, and Leeds University Business School, (3)
Liverpool Logistics, Offshore and Marine (LOOM) Research Institute, Faculty
of Engineering and Technology, School of Engineering, Liverpool John Moores
University, (4) BEACON Center, Michigan State University, (5) University
Research and Innovation Center (EKIK), Obuda University)
- Abstract summary: It is crucial to use a benchmark test suite that encompasses a diverse range of problem instances with various characteristics.
Traditional benchmark suites often consist of numerous fixed test functions, making it challenging to align these with specific research objectives.
This paper introduces the Generalized Numerical Benchmark Generator (GNBG) for single-objective, box-constrained, continuous numerical optimization.
- Score: 5.635586285644365
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As optimization challenges continue to evolve, so too must our tools and
understanding. To effectively assess, validate, and compare optimization
algorithms, it is crucial to use a benchmark test suite that encompasses a
diverse range of problem instances with various characteristics. Traditional
benchmark suites often consist of numerous fixed test functions, making it
challenging to align these with specific research objectives, such as the
systematic evaluation of algorithms under controllable conditions. This paper
introduces the Generalized Numerical Benchmark Generator (GNBG) for
single-objective, box-constrained, continuous numerical optimization. Unlike
existing approaches that rely on multiple baseline functions and
transformations, GNBG utilizes a single, parametric, and configurable baseline
function. This design allows for control over various problem characteristics.
Researchers using GNBG can generate instances that cover a broad array of
morphological features, from unimodal to highly multimodal functions, various
local optima patterns, and symmetric to highly asymmetric structures. The
generated problems can also vary in separability, variable interaction
structures, dimensionality, conditioning, and basin shapes. These customizable
features enable the systematic evaluation and comparison of optimization
algorithms, allowing researchers to probe their strengths and weaknesses under
diverse and controllable conditions.
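To make the abstract's central idea concrete, here is a minimal Python sketch of a single parametric baseline function from which many instances can be generated. This is only an illustration of the concept: the functional form, parameter names, and value ranges below are assumptions chosen for the example, not GNBG's actual formulation (refer to the paper and its released code for that).

```python
import numpy as np

def make_instance(dim=10, n_components=3, cond=1e3, rotate=True, seed=0):
    """Illustrative configurable test function: the minimum over a few
    rotated, ill-conditioned quadratic components (basins).
    NOT GNBG's baseline; it only mimics the idea of deriving many
    instances from one parametric form."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-80.0, 80.0, size=(n_components, dim))  # basin minima
    offsets = rng.uniform(0.0, 50.0, size=n_components)           # basin depths
    # Eigenvalue spread controls conditioning; rotation couples variables.
    eigs = [np.logspace(0, np.log10(cond), dim) for _ in range(n_components)]
    rots = []
    for _ in range(n_components):
        if rotate:
            q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))  # random rotation
        else:
            q = np.eye(dim)                                       # separable case
        rots.append(q)

    def f(x):
        x = np.asarray(x, dtype=float)
        vals = []
        for c, o, e, q in zip(centers, offsets, eigs, rots):
            z = q @ (x - c)                  # shift then rotate
            vals.append(o + z @ (e * z))     # ill-conditioned quadratic basin
        return float(min(vals))              # nearest basin wins -> multimodal
    return f

f = make_instance(dim=5, n_components=2, cond=1e4)
print(f(np.zeros(5)))
```

In this toy setup, n_components loosely controls modality, cond the conditioning, and rotate the presence of variable interactions; GNBG exposes analogous (but much richer) controls through its single baseline function.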
Related papers
- A Block-Coordinate Descent EMO Algorithm: Theoretical and Empirical Analysis [17.89683724761454]
We consider whether conditions exist under which block-coordinate descent is efficient in evolutionary multi-objective optimization.
We propose a block-coordinate version of GSEMO and compare its running time to the standard GSEMO algorithm.
arXiv Detail & Related papers (2024-04-04T23:50:18Z)
- GNBG-Generated Test Suite for Box-Constrained Numerical Global Optimization [5.804807909435654]
This document introduces a set of 24 box-constrained numerical global optimization problem instances.
These instances cover a broad spectrum of problem features, including varying degrees of modality, ruggedness, symmetry, conditioning, variable interaction structures, basin linearity, and deceptiveness.
arXiv Detail & Related papers (2023-12-12T07:40:12Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite (an illustrative ZO-signGD sketch follows this entry).
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
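As a rough illustration of the ZO-signGD idea mentioned in the entry above, the sketch below estimates a gradient from two-point function-value queries along random directions and updates using only the sign of that estimate. It is a generic toy implementation under assumed hyperparameters, not the authors' code, and the quadratic objective merely stands in for the Guacamol molecular objectives.

```python
import numpy as np

def zo_sign_gd(f, x0, step=0.01, mu=1e-3, n_dirs=20, iters=200, seed=0):
    """Zeroth-order sign gradient descent (illustrative sketch).
    f is a black-box objective to minimize; only function values are used."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        grad_est = np.zeros_like(x)
        for _ in range(n_dirs):
            u = rng.standard_normal(x.shape)
            # Two-point estimate of the directional derivative along u.
            grad_est += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        grad_est /= n_dirs
        x -= step * np.sign(grad_est)      # use only the sign of the estimate
    return x

# Toy usage on a smooth quadratic (stand-in for a molecular objective).
sol = zo_sign_gd(lambda v: float(np.sum((v - 3.0) ** 2)), np.zeros(5))
print(sol)
```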
- A Pareto-optimal compositional energy-based model for sampling and optimization of protein sequences [55.25331349436895]
Deep generative models have emerged as a popular machine learning-based approach for inverse problems in the life sciences.
These problems often require sampling new designs that satisfy multiple properties of interest in addition to learning the data distribution.
arXiv Detail & Related papers (2022-10-19T19:04:45Z)
- Multi-Objective Constrained Optimization for Energy Applications via Tree Ensembles [55.23285485923913]
Energy systems optimization problems are complex due to strongly non-linear system behavior and multiple competing objectives.
In some cases, proposed optimal solutions need to obey explicit input constraints related to physical properties or safety-critical operating conditions.
This paper proposes a novel data-driven strategy using tree ensembles for constrained multi-objective optimization of black-box problems.
arXiv Detail & Related papers (2021-11-04T20:18:55Z)
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem and solve it by a multi-objective evolutionary algorithm (EA), the GSEMO.
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio, $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al. A generic GSEMO sketch follows this entry.
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
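For readers unfamiliar with GSEMO, the sketch below shows its generic loop (uniform parent selection, standard bit mutation, Pareto-based archiving) on a toy bi-objective problem. It illustrates only the algorithmic skeleton assumed from the standard GSEMO description; the paper's bi-objective reformulation of result diversification and its $1/2$ approximation analysis are not reproduced here.

```python
import numpy as np

def weakly_dominates(a, b):
    """a is at least as good as b in every objective (maximization)."""
    return all(x >= y for x, y in zip(a, b))

def dominates(a, b):
    """a is at least as good everywhere and strictly better somewhere."""
    return weakly_dominates(a, b) and any(x > y for x, y in zip(a, b))

def gsemo(objectives, n, iters=5000, seed=0):
    """Generic GSEMO loop on bit strings of length n (illustrative sketch).
    objectives(x) returns a tuple of objective values to maximize."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, n)
    pop = [(x.copy(), objectives(x))]
    for _ in range(iters):
        parent, _ = pop[rng.integers(len(pop))]
        child = parent.copy()
        flips = rng.random(n) < 1.0 / n               # standard bit mutation
        child[flips] ^= 1
        f_child = objectives(child)
        if not any(dominates(f, f_child) for _, f in pop):
            # keep the child and drop anything it weakly dominates
            pop = [(y, f) for y, f in pop if not weakly_dominates(f_child, f)]
            pop.append((child, f_child))
    return pop

# Toy bi-objective problem: trade off the number of ones against the number of zeros.
front = gsemo(lambda b: (int(b.sum()), int(len(b) - b.sum())), n=12, iters=2000)
print(len(front))   # size of the maintained Pareto archive
```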
- Generating Large-scale Dynamic Optimization Problem Instances Using the Generalized Moving Peaks Benchmark [9.109331015600185]
This document describes the generalized moving peaks benchmark (GMPB) and how it can be used to generate problem instances for continuous large-scale dynamic optimization problems.
It presents a set of 15 benchmark problems, the relevant source code, and a performance indicator, designed for comparative studies and competitions in large-scale dynamic optimization. A simplified moving-peaks sketch follows this entry.
arXiv Detail & Related papers (2021-07-23T03:57:50Z)
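The sketch below mimics a moving-peaks style dynamic landscape: the objective is the maximum over several cone-shaped peaks, and a change() call shifts peak centers and heights to emulate an environmental change. It is a simplified illustration of the idea behind benchmarks like GMPB, with made-up parameter ranges, not GMPB's actual baseline function.

```python
import numpy as np

def make_moving_peaks(dim=5, n_peaks=10, bounds=(-50.0, 50.0), seed=0):
    """Simplified moving-peaks landscape (illustrative only): fitness is the
    highest of several cone-shaped peaks; change() emulates dynamism."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    centers = rng.uniform(lo, hi, size=(n_peaks, dim))
    heights = rng.uniform(30.0, 70.0, size=n_peaks)
    widths = rng.uniform(1.0, 12.0, size=n_peaks)

    def f(x):
        x = np.asarray(x, dtype=float)
        d = np.linalg.norm(centers - x, axis=1)       # distance to each peak
        return float(np.max(heights - widths * d))    # maximization problem

    def change(shift=1.0):
        """One environmental change: jitter peak centers and heights."""
        nonlocal centers, heights
        centers = np.clip(centers + shift * rng.standard_normal(centers.shape), lo, hi)
        heights = np.clip(heights + rng.standard_normal(n_peaks), 30.0, 70.0)

    return f, change

f, change = make_moving_peaks()
print(f(np.zeros(5)))   # fitness before the environment changes
change()
print(f(np.zeros(5)))   # fitness after one change
```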
- Joint Continuous and Discrete Model Selection via Submodularity [1.332560004325655]
In model selection problems for machine learning, the desire for a well-performing model with meaningful structure is typically expressed through a regularized optimization problem.
In many scenarios, however, numerically meaningful structure is specified in some discrete space, leading to difficult non-convex optimization problems.
We show how simple continuous or discrete constraints can also be handled for certain problem classes, motivated by robust optimization.
arXiv Detail & Related papers (2021-02-17T21:14:47Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate Scale Quantum devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms. A generic DE/rand/1/bin sketch follows this entry.
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
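Since EOS builds on Differential Evolution, the sketch below shows the classic DE/rand/1/bin scheme (difference-vector mutation, binomial crossover, greedy selection) for a box-constrained minimization problem. It covers only the DE machinery EOS improves upon; EOS's self-adaptation, multi-population structure, parallelism, and constraint handling are not modeled.

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.7, CR=0.9, gens=200, seed=0):
    """Classic DE/rand/1/bin for box-constrained minimization (illustrative
    sketch of the DE machinery EOS builds on, not EOS itself)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct individuals different from i.
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True           # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                     # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Toy usage: 5-D sphere function.
x_best, f_best = de_rand_1_bin(lambda x: float(np.sum(x * x)),
                               (np.full(5, -5.0), np.full(5, 5.0)))
print(x_best, f_best)
```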
- Scalable and Customizable Benchmark Problems for Many-Objective Optimization [0.0]
We propose a parameterized generator of scalable and customizable benchmark problems for many-objective problems (MaOPs).
It is able to generate problems that reproduce features present in other benchmarks and also problems with some new features.
arXiv Detail & Related papers (2020-01-26T12:39:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.