Generating Instances with Performance Differences for More Than Just Two Algorithms
- URL: http://arxiv.org/abs/2104.14275v1
- Date: Thu, 29 Apr 2021 11:48:41 GMT
- Title: Generating Instances with Performance Differences for More Than Just Two Algorithms
- Authors: Jakob Bossek, Markus Wagner
- Abstract summary: We propose fitness-functions to evolve instances that show large performance differences for more than just two algorithms simultaneously.
As a proof-of-principle, we evolve instances of the multi-component Traveling Thief Problem (TTP) for three incomplete TTP-solvers.
- Score: 2.061388741385401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Evolutionary Algorithms (EAs) have frequently been adopted
to evolve instances for optimization problems that pose difficulties for one
algorithm while being rather easy for a competitor and vice versa. Typically,
this is achieved by either minimizing or maximizing the performance difference
or ratio which serves as the fitness function. Repeating this process is useful
to gain insights into strengths/weaknesses of certain algorithms or to build a
set of instances with strong performance differences as a foundation for
automatic per-instance algorithm selection or configuration. We contribute to
this branch of research by proposing fitness-functions to evolve instances that
show large performance differences for more than just two algorithms
simultaneously. As a proof-of-principle, we evolve instances of the
multi-component Traveling Thief Problem (TTP) for three incomplete TTP-solvers.
Our results indicate that our strategies are promising but, unsurprisingly,
their success strongly relies on the algorithms' performance complementarity.
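
The abstract does not spell out the concrete fitness functions, so the following is a minimal sketch of two plausible ways to generalize the pairwise performance gap to k algorithms: the minimum pairwise difference and the variance of the performance vector. All identifiers (evolve, mutate, evaluate, ...) are illustrative assumptions, not the paper's actual code.

```python
import itertools
import statistics

def min_pairwise_gap(performances):
    """Smallest pairwise performance difference among k algorithms.
    Maximizing it evolves instances on which every pair of solvers is
    clearly separated; for k = 2 it reduces to the plain performance
    difference used in earlier two-algorithm studies."""
    return min(abs(a - b) for a, b in itertools.combinations(performances, 2))

def performance_variance(performances):
    """Alternative fitness: variance of the performance vector; large
    values indicate a spread-out (complementary) performance profile."""
    return statistics.pvariance(performances)

def evolve(instance, mutate, evaluate, fitness=min_pairwise_gap, budget=1000):
    """Hypothetical (1+1)-EA skeleton: `mutate` and `evaluate` stand in for
    problem-specific operators, e.g. TTP instance mutation and running the
    three incomplete TTP-solvers to obtain a performance vector."""
    best = instance
    best_fit = fitness(evaluate(best))
    for _ in range(budget):
        child = mutate(best)
        child_fit = fitness(evaluate(child))
        if child_fit >= best_fit:  # maximize the performance spread
            best, best_fit = child, child_fit
    return best
```

Of the two, the minimum gap is the stricter criterion: a large variance can be driven by a single outlier solver, whereas a large minimum gap forces all k performances apart simultaneously.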
Related papers
- Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling [96.47086913559289]
Gradient-based algorithms are widely used in bilevel optimization.
We introduce a without-replacement sampling based algorithm which achieves a faster convergence rate.
We validate our algorithms over both synthetic and real-world applications.
arXiv Detail & Related papers (2024-11-07T17:05:31Z) - Evolving Reliable Differentiating Constraints for the Chance-constrained Maximum Coverage Problem [8.98161858972433]
- Evolving Reliable Differentiating Constraints for the Chance-constrained Maximum Coverage Problem [8.98161858972433]
We study the classical maximum coverage problem in graphs with chance constraints.
Our goal is to evolve reliable chance constraint settings for a given graph where the performance of algorithms differs significantly.
We develop an evolutionary algorithm that provides sets of chance constraints that differentiate the performance of two search algorithms with high confidence.
arXiv Detail & Related papers (2024-05-29T05:22:31Z) - Performance of Genetic Algorithms in the Context of Software Model
Refactoring [1.3812010983144802]
We conduct a performance analysis of three genetic algorithms to compare them in terms of performance and quality of solutions.
Results show that there are significant differences in performance among the algorithms.
arXiv Detail & Related papers (2023-08-26T13:25:42Z) - A Simple Evolutionary Algorithm for Multi-modal Multi-objective
Optimization [0.0]
We introduce a steady-state evolutionary algorithm for solving multi-modal, multi-objective optimization problems (MMOPs).
We report its performance on 21 MMOPs from widely used benchmark test suites, under a low computational budget of 1000 function evaluations.
arXiv Detail & Related papers (2022-01-18T03:31:11Z) - Simple Stochastic and Online Gradient DescentAlgorithms for Pairwise
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Identifying Co-Adaptation of Algorithmic and Implementational
Innovations in Deep Reinforcement Learning: A Taxonomy and Case Study of
Inference-based Algorithms [15.338931971492288]
We focus on a series of inference-based actor-critic algorithms to decouple their algorithmic innovations and implementation decisions.
We identify substantial performance drops whenever implementation details are mismatched for algorithmic choices.
Results show which implementation details are co-adapted and co-evolved with algorithms.
arXiv Detail & Related papers (2021-03-31T17:55:20Z) - Generalization in portfolio-based algorithm selection [97.74604695303285]
We provide the first provable guarantees for portfolio-based algorithm selection.
We show that if the portfolio is large, overfitting is inevitable, even with an extremely simple algorithm selector.
arXiv Detail & Related papers (2020-12-24T16:33:17Z) - Efficient Computation of Expectations under Spanning Tree Distributions [67.71280539312536]
We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models.
Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms.
arXiv Detail & Related papers (2020-08-29T14:58:26Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Towards Dynamic Algorithm Selection for Numerical Black-Box
Optimization: Investigating BBOB as a Use Case [4.33419118449588]
We show that even single-switch dynamic algorithm selection (dynAS) can potentially result in significant performance gains.
We also discuss key challenges in dynAS and argue that the BBOB framework can become a useful tool in overcoming them.
arXiv Detail & Related papers (2020-06-11T16:36:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.