A Complementarity Analysis of the COCO Benchmark Problems and
Artificially Generated Problems
- URL: http://arxiv.org/abs/2104.13060v1
- Date: Tue, 27 Apr 2021 09:18:43 GMT
- Title: A Complementarity Analysis of the COCO Benchmark Problems and
Artificially Generated Problems
- Authors: Urban Škvorc, Tome Eftimov, Peter Korošec
- Abstract summary: In this paper, one such single-objective continuous problem generation approach is analyzed and compared with the COCO benchmark problem set.
We show that such representations allow us to further explore the relations between the problems by applying visualization and correlation analysis techniques.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When designing a benchmark problem set, it is important to create a set of
benchmark problems that are a good generalization of the set of all possible
problems. One possible way of easing this difficult task is by using
artificially generated problems. In this paper, one such single-objective
continuous problem generation approach is analyzed and compared with the COCO
benchmark problem set, a well-known problem set for benchmarking numerical
optimization algorithms. Using Exploratory Landscape Analysis and Singular
Value Decomposition, we show that such representations allow us to further
explore the relations between the problems by applying visualization and
correlation analysis techniques, with the goal of decreasing the bias in
benchmark problem assessment.
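The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature matrix here is random placeholder data standing in for real Exploratory Landscape Analysis features, and the problem counts are made up.

```python
import numpy as np

# Hypothetical ELA-style feature matrix: one row per benchmark problem,
# one column per landscape feature (random placeholders standing in for
# real Exploratory Landscape Analysis features).
rng = np.random.default_rng(0)
features = rng.standard_normal((24, 10))  # 24 problems, 10 features

# Standardize each feature so no single feature dominates.
features = (features - features.mean(axis=0)) / features.std(axis=0)

# Problem-problem correlation across features shows which problems
# have similar landscape characteristics.
corr = np.corrcoef(features)

# Singular Value Decomposition yields a low-dimensional representation
# of the problems, suitable for 2-D visualization (e.g., a scatter plot).
U, s, Vt = np.linalg.svd(features, full_matrices=False)
embedding_2d = U[:, :2] * s[:2]

print(corr.shape)          # (24, 24)
print(embedding_2d.shape)  # (24, 2)
```

With real ELA features, clusters in the 2-D embedding and blocks in the correlation matrix would indicate groups of similar benchmark problems.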
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z) - Absolute Ranking: An Essential Normalization for Benchmarking Optimization Algorithms [0.0]
Evaluating performance across optimization algorithms on many problems presents a complex challenge due to the diversity of numerical scales involved.
This paper extensively explores the problem, making a compelling case to underscore the issue and conducting a thorough analysis of its root causes.
Building on this research, this paper introduces a new mathematical model called "absolute ranking" and a sampling-based computational method.
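The core idea of rank-based normalization for benchmarking can be illustrated with a toy example. This is a generic sketch of per-problem ranking, not the paper's "absolute ranking" model; the raw objective values below are invented.

```python
import numpy as np

# Hypothetical raw objective values: rows are algorithms, columns are
# problems with wildly different numerical scales (minimization).
raw = np.array([
    [1e-8, 250.0, 0.31],
    [3e-7, 190.0, 0.29],
    [5e-9, 410.0, 0.35],
])

# Rank algorithms per problem (1 = best, assuming no ties), which
# removes the scale differences between problems.
ranks = raw.argsort(axis=0).argsort(axis=0) + 1

# Averaging ranks across problems gives a scale-free overall ordering.
mean_rank = ranks.mean(axis=1)
print(ranks)
print(mean_rank)
```

Without such normalization, the problem with the largest numerical scale (here, the second column) would dominate any score computed directly from raw objective values.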
arXiv Detail & Related papers (2024-09-06T00:55:03Z) - A Scalable Test Problem Generator for Sequential Transfer Optimization [32.171233314036286]
Sequential transfer optimization (STO) aims to improve the optimization performance on a task of interest by exploiting previously-solved optimization tasks stored in a database.
Existing test problems are either simply generated by assembling other benchmark functions or extended from specific practical problems with limited scalability.
In this study, we first introduce four concepts for characterizing STO problems and present an important problem feature, namely similarity distribution.
arXiv Detail & Related papers (2023-04-17T06:48:07Z) - Continuous-time Analysis for Variational Inequalities: An Overview and
Desiderata [87.77379512999818]
We provide an overview of recent progress in the use of continuous-time perspectives in the analysis and design of methods targeting the broad VI problem class.
Our presentation draws parallels between single-objective problems and multi-objective problems, highlighting the challenges of the latter.
We also formulate various desiderata for algorithms that apply to general VIs and we argue that achieving these desiderata may profit from an understanding of the associated continuous-time dynamics.
arXiv Detail & Related papers (2022-07-14T17:58:02Z) - Multi-task Learning of Order-Consistent Causal Graphs [59.9575145128345]
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs).
In a multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models.
We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order.
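The group-penalty idea behind an $l_1/l_2$ regularizer can be written down concretely. The sketch below is a generic illustration with made-up coefficients, not the paper's estimator: an $l_2$ norm is taken across the $K$ tasks for each edge, and the results are summed ($l_1$) over edges, encouraging a shared sparsity pattern.

```python
import numpy as np

# Hypothetical coefficients for K=3 related linear SEMs over p=4 variables:
# B[k, i, j] is the weight of edge i -> j in task k (random placeholders).
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 4, 4))

# l1/l2 group penalty: l2 norm across tasks per edge, then summed (l1)
# over edges, so an edge is either active in all tasks or driven to zero
# in all of them.
penalty = np.sqrt((B ** 2).sum(axis=0)).sum()
print(penalty)
```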
arXiv Detail & Related papers (2021-11-03T22:10:18Z) - Generalization of Neural Combinatorial Solvers Through the Lens of
Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z) - On the Difficulty of Generalizing Reinforcement Learning Framework for
Combinatorial Optimization [6.935838847004389]
Combinatorial optimization problems (COPs) on the graph with real-life applications are canonical challenges in Computer Science.
The underlying principle of this approach is to deploy a graph neural network (GNN) for encoding both the local information of the nodes and the graph-structured data.
We use the security-aware phone clone allocation in the cloud as a classical quadratic assignment problem (QAP) to investigate whether or not a deep RL-based model is generally applicable to solving other classes of such hard problems.
arXiv Detail & Related papers (2021-08-08T19:12:04Z) - USCO-Solver: Solving Undetermined Stochastic Combinatorial Optimization
Problems [9.015720257837575]
We consider regression between the input and solution spaces, aiming to infer high-quality optimization solutions from samples of input-solution pairs.
For learning foundations, we present learning-error analysis under the PAC-Bayesian framework.
We obtain highly encouraging experimental results for several classic problems on both synthetic and real-world datasets.
arXiv Detail & Related papers (2021-07-15T17:59:08Z) - Runtime Analysis of RLS and the (1+1) EA for the Chance-constrained
Knapsack Problem with Correlated Uniform Weights [15.402666674186936]
We perform a runtime analysis of randomized local search (RLS) and a basic evolutionary algorithm, the (1+1) EA, for the chance-constrained knapsack problem with correlated uniform weights.
We show how the weight correlations and the different types of profit profiles influence the runtime behavior of both algorithms in the chance-constrained setting.
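The (1+1) EA analyzed in such papers follows a simple elitist scheme. Below is a minimal sketch on a plain 0/1 knapsack instance (the deterministic special case, with made-up profits, weights, and capacity), not the chance-constrained setting studied in the paper.

```python
import random

# Made-up 0/1 knapsack instance.
profits = [10, 5, 15, 7, 6]
weights = [2, 3, 5, 7, 1]
capacity = 10
n = len(profits)

def fitness(x):
    """Total profit, with infeasible solutions penalized to 0."""
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    p = sum(pi for pi, xi in zip(profits, x) if xi)
    return p if w <= capacity else 0

random.seed(1)
x = [0] * n
for _ in range(2000):
    # Standard bit mutation: flip each bit independently with prob 1/n,
    # then accept the offspring if it is at least as good (elitism).
    y = [xi ^ (random.random() < 1.0 / n) for xi in x]
    if fitness(y) >= fitness(x):
        x = y

print(x, fitness(x))
```

In the chance-constrained variant, the weight sum is random and feasibility is required only with high probability, which changes how the algorithm must be analyzed.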
arXiv Detail & Related papers (2021-02-10T23:40:01Z) - Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
In its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z) - Total Deep Variation for Linear Inverse Problems [71.90933869570914]
We propose a novel learnable general-purpose regularizer exploiting recent architectural design patterns from deep learning.
We show state-of-the-art performance for classical image restoration and medical image reconstruction problems.
arXiv Detail & Related papers (2020-01-14T19:01:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.