Competition on Dynamic Optimization Problems Generated by Generalized Moving Peaks Benchmark (GMPB)
- URL: http://arxiv.org/abs/2106.06174v4
- Date: Tue, 10 Dec 2024 06:04:23 GMT
- Title: Competition on Dynamic Optimization Problems Generated by Generalized Moving Peaks Benchmark (GMPB)
- Authors: Danial Yazdani, Michalis Mavrovouniotis, Changhe Li, Guoyu Chen, Wenjian Luo, Mohammad Nabi Omidvar, Juergen Branke, Shengxiang Yang, Xin Yao
- Abstract summary: The Generalized Moving Peaks Benchmark (GMPB) is a tool for generating continuous dynamic optimization problem instances.
GMPB has been used in recent Competitions on Dynamic Optimization at prestigious conferences.
- Score: 5.759341033876751
- Abstract: The Generalized Moving Peaks Benchmark (GMPB) is a tool for generating continuous dynamic optimization problem instances with controllable dynamic and morphological characteristics. GMPB has been used in recent Competitions on Dynamic Optimization at prestigious conferences, such as the IEEE Congress on Evolutionary Computation (CEC). This dynamic benchmark generator can create a wide variety of landscapes, ranging from simple unimodal to highly complex multimodal configurations and from symmetric to asymmetric forms. It also supports diverse surface textures, from smooth to highly irregular, and can generate varying levels of variable interaction and conditioning. This document provides an overview of GMPB, emphasizing how its parameters can be adjusted to produce landscapes with customizable characteristics. The MATLAB implementation of GMPB is available on the EDOLAB Platform.
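To make the landscape structure concrete, below is a minimal Python sketch of the moving-peaks idea that GMPB generalizes: the fitness at a point is the upper envelope of several peaks whose heights, widths, and centers are perturbed at each environmental change. This is a deliberate simplification, not GMPB itself; the full benchmark additionally applies per-peak rotation matrices and an irregularity transform to control variable interaction, conditioning, and surface texture, and its reference implementation is the MATLAB code on EDOLAB. Class and parameter names here are illustrative assumptions.

```python
# Minimal moving-peaks-style dynamic landscape (simplified; not GMPB itself).
import numpy as np

rng = np.random.default_rng(0)

class MovingPeaks:
    def __init__(self, dim=5, num_peaks=10, bounds=(-100.0, 100.0)):
        self.dim, self.bounds = dim, bounds
        self.centers = rng.uniform(*bounds, size=(num_peaks, dim))  # peak locations
        self.heights = rng.uniform(30.0, 70.0, size=num_peaks)      # peak heights
        self.widths = rng.uniform(1.0, 12.0, size=num_peaks)        # peak widths

    def evaluate(self, x):
        # Fitness is the upper envelope over all (conical) peaks.
        dist = np.linalg.norm(self.centers - x, axis=1)
        return np.max(self.heights - self.widths * dist)

    def change(self, height_sev=7.0, width_sev=1.0, shift_sev=5.0):
        # Environmental change: perturb each peak's height, width, and center
        # (severity values are illustrative defaults, not GMPB's settings).
        m, dim = self.centers.shape
        self.heights += height_sev * rng.standard_normal(m)
        self.widths = np.abs(self.widths + width_sev * rng.standard_normal(m))
        step = rng.standard_normal((m, dim))
        step *= shift_sev / np.linalg.norm(step, axis=1, keepdims=True)
        self.centers = np.clip(self.centers + step, *self.bounds)

problem = MovingPeaks()
print(problem.evaluate(np.zeros(5)))  # fitness before a change
problem.change()
print(problem.evaluate(np.zeros(5)))  # fitness after a change
```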
Related papers
- Modeling All Response Surfaces in One for Conditional Search Spaces [69.90317997694218]
This paper proposes a novel approach that models the response surfaces of all subspaces within a single model.
We introduce an attention-based deep feature extractor, capable of projecting configurations with different structures from various subspaces into a unified feature space.
arXiv Detail & Related papers (2025-01-08T03:56:06Z)
- SIGMA: Selective Gated Mamba for Sequential Recommendation [56.85338055215429]
Mamba, a recent advancement, has exhibited exceptional performance in time series prediction.
We introduce a new framework named Selective Gated Mamba (SIGMA) for Sequential Recommendation.
Our results indicate that SIGMA outperforms current models on five real-world datasets.
arXiv Detail & Related papers (2024-08-21T09:12:59Z)
- GNBG: A Generalized and Configurable Benchmark Generator for Continuous Numerical Optimization [5.635586285644365]
It is crucial to use a benchmark test suite that encompasses a diverse range of problem instances with various characteristics.
Traditional benchmark suites often consist of numerous fixed test functions, making it difficult to align them with specific research objectives.
This paper introduces the Generalized Numerical Benchmark Generator (GNBG) for single-objective, box-constrained, continuous numerical optimization.
arXiv Detail & Related papers (2023-12-12T09:04:34Z)
- SIGMA: Scale-Invariant Global Sparse Shape Matching [50.385414715675076]
We propose a novel mixed-integer programming (MIP) formulation for generating precise sparse correspondences for non-rigid shapes.
We show state-of-the-art results for sparse non-rigid matching on several challenging 3D datasets.
arXiv Detail & Related papers (2023-08-16T14:25:30Z)
- MA-BBOB: Many-Affine Combinations of BBOB Functions for Evaluating AutoML Approaches in Noiseless Numerical Black-Box Optimization Contexts [0.8258451067861933]
(MA-)BBOB is built on the publicly available IOHprofiler platform.
It provides access to the interactive IOHanalyzer module for performance analysis and visualization, and enables comparisons with the rich and growing data collection available for the (MA-)BBOB functions.
arXiv Detail & Related papers (2023-06-18T19:32:12Z)
- Performance Embeddings: A Similarity-based Approach to Automatic Performance Optimization [71.69092462147292]
Performance embeddings enable knowledge transfer of performance tuning between applications.
We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils.
arXiv Detail & Related papers (2023-03-14T15:51:35Z)
- A Pareto-optimal compositional energy-based model for sampling and optimization of protein sequences [55.25331349436895]
Deep generative models have emerged as a popular machine learning-based approach for inverse problems in the life sciences.
These problems often require sampling new designs that satisfy multiple properties of interest in addition to learning the data distribution.
arXiv Detail & Related papers (2022-10-19T19:04:45Z)
- Flexible Differentiable Optimization via Model Transformations [1.081463830315253]
We introduce DiffOpt, a Julia library to differentiate through the solution of optimization problems with respect to arbitrary parameters present in the objective and/or constraints.
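As a conceptual illustration of what differentiating through an optimization problem's solution means (shown here in Python rather than DiffOpt's actual Julia API), the toy example below differentiates the minimizer of a regularized least-squares problem with respect to its parameter, using the closed-form solution implied by the first-order optimality condition, and verifies the sensitivity against finite differences.

```python
# Toy implicit differentiation: for min_x (x - p)^2 + lam * x^2, the
# optimality condition 2(x - p) + 2*lam*x = 0 gives x*(p) = p / (1 + lam),
# so dx*/dp = 1 / (1 + lam). (Illustration only; not DiffOpt's API.)
def solve(p, lam):
    return p / (1.0 + lam)       # closed-form minimizer

def dsolve_dp(p, lam):
    return 1.0 / (1.0 + lam)     # analytic sensitivity via implicit differentiation

p, lam, h = 2.0, 0.5, 1e-6
fd = (solve(p + h, lam) - solve(p - h, lam)) / (2 * h)  # finite-difference check
print(dsolve_dp(p, lam), fd)     # both approximately 0.6667
```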
arXiv Detail & Related papers (2022-06-10T09:59:13Z)
- Hybrid Parameter Search and Dynamic Model Selection for Mixed-Variable Bayesian Optimization [6.204805504959941]
We propose a new type of hybrid model for Bayesian optimization (BO) that handles mixed variables.
Our proposed hybrid models (named hybridM) combine a Monte Carlo Tree Search (MCTS) structure for categorical variables with Gaussian Processes (GPs) for continuous ones.
Our innovations, including dynamic online kernel selection in the surrogate modeling phase, position our hybrid models as an advancement in mixed-variable surrogate models.
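The following Python sketch is a heavily simplified reading of this idea: one GP surrogate per categorical level, with the level chosen by a flat UCB rule standing in for the paper's MCTS, and the continuous part chosen by a GP-UCB acquisition over random candidates. All names and defaults are illustrative assumptions, not the authors' hybridM implementation.

```python
# Simplified mixed-variable BO: bandit over categories + per-category GPs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def objective(cat, x):  # toy mixed-variable objective (hypothetical)
    return -((x - 0.3 * cat) ** 2).sum() + cat

n_cats, dim, iters = 3, 2, 30
data = {c: ([], []) for c in range(n_cats)}  # per-category (X, y) history

for t in range(1, iters + 1):
    # UCB over categories: mean observed reward + exploration bonus.
    ucb = [
        (np.mean(y) if y else np.inf) + np.sqrt(2 * np.log(t) / max(len(y), 1))
        for _, y in data.values()
    ]
    c = int(np.argmax(ucb))
    X, y = data[c]
    if len(y) < 2:
        x = rng.uniform(-1, 1, dim)  # too little data: sample at random
    else:
        gp = GaussianProcessRegressor().fit(np.array(X), np.array(y))
        cand = rng.uniform(-1, 1, (128, dim))
        mu, sd = gp.predict(cand, return_std=True)
        x = cand[np.argmax(mu + 1.0 * sd)]  # GP-UCB acquisition
    X.append(x); y.append(objective(c, x))

best_c = max(data, key=lambda c: max(data[c][1], default=-np.inf))
print("best category:", best_c, "best value:", max(data[best_c][1]))
```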
arXiv Detail & Related papers (2022-06-03T06:34:09Z)
- Generating Large-scale Dynamic Optimization Problem Instances Using the Generalized Moving Peaks Benchmark [9.109331015600185]
This document describes the generalized moving peaks benchmark (GMPB) and how it can be used to generate problem instances for continuous large-scale dynamic optimization problems.
It presents a set of 15 benchmark problems, the relevant source code, and a performance indicator, designed for comparative studies and competitions in large-scale dynamic optimization.
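For context, a commonly used performance indicator in dynamic optimization is the offline error: the error of the best-so-far solution is recorded at every function evaluation, reset after each environmental change, and averaged. The sketch below implements that standard indicator; the competition's exact indicator may differ.

```python
# Offline error: average, over all evaluations, of the best-so-far error
# since the last environmental change. (Standard indicator; illustrative.)
import numpy as np

def offline_error(errors_per_env):
    """errors_per_env: list of per-environment sequences, each holding the
    error |f(best) - f(optimum)| at every evaluation, in evaluation order."""
    running = []
    for env in errors_per_env:
        best = np.inf
        for e in env:
            best = min(best, e)   # best-so-far error since the last change
            running.append(best)  # sampled at every evaluation
    return float(np.mean(running))

# Two environments, three evaluations each:
print(offline_error([[5.0, 2.0, 3.0], [4.0, 1.0, 0.5]]))  # ~2.417
```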
arXiv Detail & Related papers (2021-07-23T03:57:50Z)
- On the Encoder-Decoder Incompatibility in Variational Text Modeling and Beyond [82.18770740564642]
Variational autoencoders (VAEs) combine latent variables with amortized variational inference.
We observe an encoder-decoder incompatibility that leads to poor parameterization of the data manifold.
We propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure.
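A hedged Python sketch of one plausible reading of this setup: a VAE trained alongside a deterministic autoencoder of identical structure, with an added penalty pulling their latent representations together. The specific L2 coupling loss below is an assumption for illustration, not necessarily the paper's formulation.

```python
# Coupled-VAE-style sketch: VAE + structurally identical deterministic AE,
# joined by an assumed L2 coupling penalty on their latent codes.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, d_in=64, d_z=8):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, 32), nn.Tanh())
        self.mu, self.logvar = nn.Linear(32, d_z), nn.Linear(32, d_z)
    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

def make_decoder(d_z=8, d_in=64):
    return nn.Sequential(nn.Linear(d_z, 32), nn.Tanh(), nn.Linear(32, d_in))

vae_enc, det_enc = Encoder(), Encoder()  # same structure, separate weights
vae_dec, det_dec = make_decoder(), make_decoder()

def loss(x):
    mu, logvar = vae_enc(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(dim=1).mean()
    z_det, _ = det_enc(x)            # deterministic path uses the mu head only
    rec_vae = (vae_dec(z) - x).pow(2).mean()
    rec_det = (det_dec(z_det) - x).pow(2).mean()
    couple = (mu - z_det.detach()).pow(2).mean()  # assumed L2 coupling term
    return rec_vae + kl + rec_det + couple

x = torch.randn(16, 64)
print(loss(x).item())
```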
arXiv Detail & Related papers (2020-04-20T10:34:10Z)