Algorithm Instance Footprint: Separating Easily Solvable and Challenging
Problem Instances
- URL: http://arxiv.org/abs/2306.00479v1
- Date: Thu, 1 Jun 2023 09:28:06 GMT
- Title: Algorithm Instance Footprint: Separating Easily Solvable and Challenging
Problem Instances
- Authors: Ana Nikolikj, Sašo Džeroski, Mario Andrés Muñoz, Carola Doerr, Peter Korošec, Tome Eftimov
- Abstract summary: In black-box optimization, it is essential to understand why an algorithm instance works on a set of problem instances while failing on others.
We propose a methodology for formulating an algorithm instance footprint, consisting of a set of problem instances that are easy to solve and a set of problem instances that are difficult to solve for a given algorithm instance.
- Score: 0.7046417074932257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In black-box optimization, it is essential to understand why an algorithm instance works on a set of problem instances while failing on others, and to provide explanations of its behavior. We propose a methodology for formulating an algorithm instance footprint, which consists of a set of problem instances that are easy to solve and a set of problem instances that are difficult to solve for a given algorithm instance. This behavior of the algorithm instance is further linked to the landscape properties of the problem instances to explain which properties make some problem instances easy and others challenging. The proposed methodology uses meta-representations that embed the landscape properties of the problem instances and the performance of the algorithm into the same vector space. These meta-representations are obtained by training a supervised machine learning regression model for algorithm performance prediction and applying model explainability techniques to assess the importance of the landscape features to the performance predictions. Deterministic clustering of the meta-representations then shows that they capture algorithm performance across the problem space and detect regions of poor and good algorithm performance, together with an explanation of which landscape properties lead to this behavior.
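Below is a minimal sketch of the pipeline described in the abstract, assuming scikit-learn and the shap package, with randomly generated placeholder data standing in for real landscape features and measured algorithm performance; the random-forest regressor, SHAP attributions, and agglomerative clustering are illustrative assumptions, not necessarily the exact choices made in the paper.

```python
# Hedged sketch: per-instance feature attributions as meta-representations,
# followed by deterministic clustering. Data and library choices are assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # placeholder landscape features per problem instance
y = rng.normal(size=200)         # placeholder algorithm performance per instance

# 1) Supervised regression model for algorithm performance prediction.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# 2) Model explainability: per-instance feature attributions embed landscape
#    properties and predicted performance in the same vector space.
explainer = shap.TreeExplainer(model)
meta = explainer.shap_values(X)  # shape: (n_instances, n_features)

# 3) Deterministic clustering of the meta-representations to separate regions
#    of good and poor algorithm performance.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(meta)

# Compare mean performance per cluster to see which one gathers the "easy"
# and which the "challenging" problem instances.
for c in np.unique(labels):
    print(f"cluster {c}: mean performance = {y[labels == c].mean():.3f}")
```

Agglomerative clustering is used here only because it is deterministic for a fixed linkage; any deterministic clustering of the per-instance attribution vectors would play the same role, and inspecting the dominant features within each cluster is what links easy or challenging instances back to their landscape properties.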
Related papers
- A Survey of Meta-features Used for Automated Selection of Algorithms for Black-box Single-objective Continuous Optimization [4.173197621837912]
We provide an overview of the key contributions to algorithm selection in the field of single-objective continuous black-box optimization.
We study machine learning models for automated algorithm selection, configuration, and performance prediction.
arXiv Detail & Related papers (2024-06-08T11:11:14Z) - Unlock the Power of Algorithm Features: A Generalization Analysis for Algorithm Selection [25.29451529910051]
We propose the first provable guarantee for algorithm selection based on algorithm features.
We analyze the benefits and costs associated with algorithm features and investigate how the generalization error is affected by different factors.
arXiv Detail & Related papers (2024-05-18T17:38:25Z) - On the Utility of Probing Trajectories for Algorithm-Selection [0.24475591916185496]
Machine-learning approaches to algorithm-selection typically take data describing an instance as input.
We argue that viewing algorithm-selection purely from an instance perspective can be misleading.
We propose a novel 'algorithm-centric' method for describing instances that can be used to train models for algorithm-selection.
arXiv Detail & Related papers (2024-01-23T13:23:59Z) - Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z) - Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z) - Invariant Causal Mechanisms through Distribution Matching [86.07327840293894]
In this work we provide a causal perspective and a new algorithm for learning invariant representations.
Empirically we show that this algorithm works well on a diverse set of tasks and in particular we observe state-of-the-art performance on domain generalization.
arXiv Detail & Related papers (2022-06-23T12:06:54Z) - Instance Space Analysis for the Car Sequencing Problem [4.822624702094427]
We investigate what characteristics make an instance hard to solve.
We compare the performance of six algorithms for solving the car sequencing problem.
Our results show that the new algorithms are state-of-the-art.
arXiv Detail & Related papers (2020-12-18T05:26:13Z) - Data-driven Algorithm Design [21.39493074700162]
Data-driven algorithm design is an important aspect of modern data science and algorithm design.
A small tweak to the parameters can cause a cascade of changes in the algorithm's behavior.
We provide strong computational and statistical performance guarantees for batch and online scenarios.
arXiv Detail & Related papers (2020-11-14T00:51:57Z) - A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z) - Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z) - Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
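As a rough, hypothetical illustration of what such a dyadic feature representation could look like, the sketch below concatenates instance features with algorithm features and trains a single regression model over (instance, algorithm) pairs; all data, dimensions, and the model choice are assumptions for illustration, not details taken from that paper.

```python
# Hedged sketch of a dyadic (instance, algorithm) feature representation for
# algorithm selection; data and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
inst_feats = rng.normal(size=(100, 8))   # hypothetical features of problem instances
algo_feats = rng.normal(size=(1000, 4))  # hypothetical features of candidate algorithms

# Dyadic representation: one joint feature vector per (instance, algorithm) pair.
pairs = np.array([np.concatenate([inst_feats[i], algo_feats[a]])
                  for i in range(20) for a in range(50)])  # a sampled subset of pairs
perf = rng.normal(size=len(pairs))       # placeholder observed performance per pair

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(pairs, perf)

# Selection for a new instance: score every candidate algorithm and pick the best
# (assuming here that larger predicted values mean better performance).
new_inst = rng.normal(size=8)
candidates = np.concatenate([np.tile(new_inst, (len(algo_feats), 1)), algo_feats], axis=1)
print("selected algorithm index:", int(np.argmax(model.predict(candidates))))
```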