Explainable Benchmarking for Iterative Optimization Heuristics
- URL: http://arxiv.org/abs/2401.17842v2
- Date: Fri, 23 Feb 2024 09:11:37 GMT
- Title: Explainable Benchmarking for Iterative Optimization Heuristics
- Authors: Niki van Stein, Diederick Vermetten, Anna V. Kononova, Thomas Bäck
- Abstract summary: We introduce the IOH-Xplainer software framework for analyzing and understanding the performance of various optimization algorithms.
We examine the impact of different algorithmic components and configurations, offering insights into their performance across diverse scenarios.
- Score: 0.8192907805418583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Benchmarking heuristic algorithms is vital to understand under which
conditions and on what kind of problems certain algorithms perform well. In
most current research into heuristic optimization algorithms, only a very
limited number of scenarios, algorithm configurations and hyper-parameter
settings are explored, leading to incomplete and often biased insights and
results. This paper presents a novel approach we call explainable benchmarking
and introduces the IOH-Xplainer software framework for analyzing and
understanding the performance of various optimization algorithms and the impact
of their different components and hyper-parameters. We showcase the framework
in the context of two modular optimization frameworks. Through this framework,
we examine the impact of different algorithmic components and configurations,
offering insights into their performance across diverse scenarios. We provide a
systematic method for evaluating and interpreting the behaviour and efficiency
of iterative optimization heuristics in a more transparent and comprehensible
manner, allowing for better benchmarking and algorithm design.
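As a rough illustration of the explainable-benchmarking workflow described in the abstract, the sketch below runs a small grid of modular-algorithm configurations on a set of benchmark problems, collects performance records, and inspects component importance with a random-forest model. It is a minimal sketch only, not the IOH-Xplainer API; the configuration space, the run_configuration stand-in and the synthetic scores are assumptions made purely for illustration.

```python
# Minimal sketch of the explainable-benchmarking idea (NOT the IOH-Xplainer API):
# 1) run many configurations of a modular algorithm on a set of benchmark problems,
# 2) collect (configuration, problem) -> performance records,
# 3) fit a model on the configuration features and inspect which components matter.
import itertools
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Example (made-up) component/hyper-parameter space of a modular algorithm.
space = {
    "elitism": [True, False],
    "mirrored_sampling": [True, False],
    "step_size_adaptation": ["csa", "tpa"],
}

def run_configuration(config, problem_id, seed):
    # Placeholder for an actual benchmarking run (e.g. via a modular framework);
    # here we fabricate a score so the sketch runs end to end.
    base = 0.6 + 0.1 * config["elitism"] - 0.05 * config["mirrored_sampling"]
    return base + 0.02 * (config["step_size_adaptation"] == "tpa") + rng.normal(0, 0.01)

records = []
for values in itertools.product(*space.values()):
    config = dict(zip(space.keys(), values))
    for problem_id in range(1, 25):          # e.g. 24 benchmark functions
        for seed in range(5):
            score = run_configuration(config, problem_id, seed)
            records.append({**config, "problem": problem_id, "score": score})

df = pd.DataFrame.from_records(records)
X = pd.get_dummies(df.drop(columns="score"), columns=list(space) + ["problem"])
model = RandomForestRegressor(n_estimators=200).fit(X, df["score"])

# Which components/hyper-parameters explain performance differences?
importance = pd.Series(model.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False).head(10))
```

In IOH-Xplainer the analysis is considerably richer (per-scenario breakdowns, explainability tooling), but the same run-collect-explain loop is the core idea.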
Related papers
- Beyond Single-Model Views for Deep Learning: Optimization versus Generalizability of Stochastic Optimization Algorithms [13.134564730161983]
This paper adopts a novel approach to deep learning optimization, focusing on stochastic gradient descent (SGD) and its variants.
We show that SGD and its variants demonstrate performance on par with flat-minima optimizers such as SAM, albeit with half the gradient evaluations.
Our study uncovers several key findings regarding the relationship between training loss and hold-out accuracy, as well as the comparable performance of SGD and noise-enabled variants.
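For context on the "half the gradient evaluations" claim, the hedged sketch below contrasts one plain SGD step with one SAM-style step, whose inner ascent step requires a second gradient evaluation. The toy loss and step sizes are assumptions, not taken from the paper.

```python
# Illustrative sketch (not from the paper): one SGD step vs one SAM step on a
# toy loss, showing why SAM needs roughly twice the gradient evaluations.
import numpy as np

def loss_grad(w, batch):
    """Gradient of a toy quadratic loss 0.5 * ||w - batch_mean||^2."""
    return w - batch.mean(axis=0)

def sgd_step(w, batch, lr=0.1):
    return w - lr * loss_grad(w, batch)                   # 1 gradient evaluation

def sam_step(w, batch, lr=0.1, rho=0.05):
    g = loss_grad(w, batch)                                # 1st gradient evaluation
    w_adv = w + rho * g / (np.linalg.norm(g) + 1e-12)      # ascend to a nearby "sharp" point
    g_adv = loss_grad(w_adv, batch)                        # 2nd gradient evaluation
    return w - lr * g_adv

rng = np.random.default_rng(0)
w, batch = rng.normal(size=3), rng.normal(size=(32, 3))
print(sgd_step(w, batch), sam_step(w, batch))
```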
arXiv Detail & Related papers (2024-03-01T14:55:22Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
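A minimal numeric illustration of that equivalence, assuming a simple quadratic inner problem (this is not the paper's folded-optimization machinery): the gradient obtained by backpropagating through K unrolled gradient-descent steps approaches the gradient given by solving a single linear system.

```python
# Tiny numeric illustration: for a quadratic inner problem
#   y*(x) = argmin_y 0.5*y'Qy - x'y,
# backpropagating through K unrolled gradient-descent steps converges to the
# gradient obtained by one linear solve (implicit differentiation), dL/dx = Q^{-T} dL/dy*.
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([[3.0, 0.5], [0.5, 2.0]])       # SPD matrix defining the inner problem
dL_dy = rng.normal(size=2)                   # upstream gradient of the outer loss at y*

# Implicit (closed-form) backward pass: solve Q^T v = dL/dy.
implicit_grad = np.linalg.solve(Q.T, dL_dy)

# Unrolled backward pass: y_{k+1} = y_k - a*(Q y_k - x) is linear in x, with
# Jacobian dy_K/dx = a * sum_{i<K} (I - aQ)^i, so dL/dx = (dy_K/dx)^T dL/dy.
a, K = 0.1, 500
J = np.zeros_like(Q)
M = np.eye(2)
for _ in range(K):
    J += a * M
    M = M @ (np.eye(2) - a * Q)
unrolled_grad = J.T @ dL_dy

print(np.allclose(implicit_grad, unrolled_grad, atol=1e-6))   # -> True
```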
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Benchmarking Algorithms for Submodular Optimization Problems Using
IOHProfiler [22.08617448389877]
This paper introduces a setup for benchmarking algorithms for submodular optimization problems.
The focus is on the development of iterative search algorithms, with implementations provided and integrated into IOHprofiler.
We present a range of submodular optimization problems that have been integrated into IOHprofiler and show how the setup can be used for analyzing and comparing iterative search algorithms in various settings.
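As a hedged illustration of an iterative search routine for a submodular objective (not the IOHprofiler integration itself), the sketch below runs a greedy maximum-coverage search under a cardinality constraint on a made-up instance.

```python
# Small self-contained example: greedy iterative search on a classic submodular
# problem, maximum coverage with a cardinality constraint. The instance is made up.
import random

universe = set(range(20))
sets = {i: set(random.Random(i).sample(sorted(universe), 6)) for i in range(10)}
k = 3                                          # cardinality constraint

def coverage(selection):                       # monotone submodular objective
    covered = set().union(*(sets[i] for i in selection)) if selection else set()
    return len(covered)

selection = []
for _ in range(k):                             # greedy: add the set with the best marginal gain
    best = max((i for i in sets if i not in selection),
               key=lambda i: coverage(selection + [i]))
    selection.append(best)

print(selection, coverage(selection))
```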
arXiv Detail & Related papers (2023-02-02T23:36:23Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis of the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
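A minimal sketch of the warm-start idea, assuming a toy quadratic inner problem and a simplified hypergradient (this is not the paper's algorithm): the inner solver is restarted from its previous solution, so only a few inner steps are needed per outer update.

```python
# Sketch of warm-started bilevel optimization on a toy problem (illustrative only).
import numpy as np

Q = np.array([[2.0, 0.3], [0.3, 1.5]])             # inner problem: min_y 0.5*y'Qy - x'y
def inner_step(y, x, lr=0.2):
    return y - lr * (Q @ y - x)

def outer_grad(y):                                  # outer loss: 0.5*||y - target||^2
    return y - np.array([1.0, -1.0])

x = np.zeros(2)
y = np.zeros(2)                                     # warm-started across outer iterations
for _ in range(50):                                 # outer loop over x
    for _ in range(5):                              # a few warm-started inner steps
        y = inner_step(y, x)
    # simplified hypergradient: for this inner problem dy/dx = Q^{-1},
    # so dL/dx = Q^{-T} * dL/dy
    hypergrad = np.linalg.solve(Q.T, outer_grad(y))
    x -= 0.3 * hypergrad

print(x, y)
```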
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - Optimum-statistical Collaboration Towards General and Efficient
Black-box Optimization [23.359363844344408]
We introduce an algorithmic framework for managing the interaction between the optimization error flux and the statistical error flux evolving in the optimization process.
Our framework and its analysis can be applied to a large family of functions and partitions that satisfy different local smoothness assumptions.
In theory, we prove the algorithm enjoys rate-optimal regret bounds under different local smoothness assumptions.
arXiv Detail & Related papers (2021-06-17T02:37:39Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel stochastic bilevel optimizer named stocBiO, built on a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - A Unifying View of Optimism in Episodic Reinforcement Learning [18.73198634652064]
This paper provides a framework for designing, analyzing and implementing optimistic reinforcement learning algorithms.
Every model-optimistic algorithm that constructs an optimistic MDP has an equivalent representation as a value-optimistic dynamic programming algorithm.
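A toy sketch of a value-optimistic dynamic-programming backup, with made-up empirical quantities and a UCB-style bonus; it only illustrates the general idea of adding optimism inside the Bellman recursion, not the paper's framework.

```python
# Toy value-optimistic backup: optimism is added as a bonus inside the
# dynamic-programming recursion. All quantities below are made up.
import numpy as np

n_states, n_actions, horizon = 3, 2, 5
rng = np.random.default_rng(0)
P_hat = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # empirical transitions
r_hat = rng.uniform(size=(n_states, n_actions))                        # empirical rewards
counts = rng.integers(1, 50, size=(n_states, n_actions))               # visit counts
bonus = np.sqrt(1.0 / counts)                                          # UCB-style exploration bonus

V = np.zeros(n_states)
for _ in range(horizon):                       # backward induction with optimism
    Q = r_hat + bonus + P_hat @ V              # optimistic Bellman backup
    V = np.minimum(Q.max(axis=1), horizon)     # clip to the maximum achievable return
print(V)
```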
arXiv Detail & Related papers (2020-07-03T18:10:30Z) - Benchmarking for Metaheuristic Black-Box Optimization: Perspectives and
Open Challenges [0.0]
Research on new optimization algorithms is often funded based on the motivation that such algorithms might improve the capabilities to deal with real-world and industrially relevant challenges.
A large number of test problems and benchmark suites have been developed and used for comparative assessments of algorithms.
arXiv Detail & Related papers (2020-07-01T15:09:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.