Siamese Meta-Learning and Algorithm Selection with
'Algorithm-Performance Personas' [Proposal]
- URL: http://arxiv.org/abs/2006.12328v2
- Date: Tue, 23 Jun 2020 09:27:59 GMT
- Title: Siamese Meta-Learning and Algorithm Selection with
'Algorithm-Performance Personas' [Proposal]
- Authors: Joeran Beel, Bryan Tyrell, Edward Bergman, Andrew Collins, Shahad
Nagoor
- Abstract summary: Key to algorithm selection via meta-learning is often the (meta) features, which sometimes do not provide enough information to train a meta-learner effectively.
We propose a Siamese Neural Network architecture for automated algorithm selection that focuses more on 'alike performing' instances than meta-features.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated per-instance algorithm selection often outperforms single learners.
Key to algorithm selection via meta-learning is often the (meta) features,
which sometimes do not provide enough information to train a meta-learner
effectively. We propose a Siamese Neural Network architecture for automated
algorithm selection that focuses more on 'alike performing' instances than on
meta-features. Our work includes a novel performance metric and a method for
selecting training samples. We further introduce the concept of 'Algorithm
Performance Personas', which describe instances for which the single algorithms
perform alike. The concept of 'alike performing algorithms' as ground truth for
selecting training samples is novel and, we believe, holds great potential.
In this proposal, we outline our ideas in detail and provide first evidence
that our proposed metric is better suited for training-sample selection than
standard performance metrics such as absolute errors.
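The core idea above — labeling instance pairs as 'alike performing' from the algorithms' performance profiles, then training a Siamese embedding with a contrastive objective — can be sketched as follows. This is a minimal illustration only: the pair-labeling rule (normalized-profile distance under a threshold) and all names here are hypothetical stand-ins, not the authors' actual metric.

```python
import numpy as np

def alike_performing_pairs(perf, threshold=0.1):
    """Label instance pairs as 'alike performing' (a hypothetical proxy
    for the proposed pair-selection metric).

    perf: (n_instances, n_algorithms) matrix of per-algorithm scores.
    A pair is positive (label 1) when the instances' normalized
    performance profiles are close, i.e. the algorithms rank similarly.
    """
    norm = perf / np.linalg.norm(perf, axis=1, keepdims=True)
    pairs = []
    for i in range(len(perf)):
        for j in range(i + 1, len(perf)):
            dist = np.linalg.norm(norm[i] - norm[j])
            pairs.append((i, j, int(dist < threshold)))  # 1 = positive pair
    return pairs

def contrastive_loss(z1, z2, label, margin=1.0):
    """Standard contrastive loss on two embeddings of a Siamese encoder:
    pull positive pairs together, push negatives beyond the margin."""
    d = np.linalg.norm(z1 - z2)
    return label * d**2 + (1 - label) * max(0.0, margin - d)**2
```

The labeled pairs would then drive training of a shared encoder, so that instances belonging to the same 'Algorithm Performance Persona' cluster in the embedding space and the best algorithm can be read off from an instance's nearest neighbors.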
Related papers
- Algorithm Selection for Recommender Systems via Meta-Learning on Algorithm Characteristics [0.11510009152620666]
We propose a per-user meta-learning approach for recommender system selection. We use both user meta-features and automatically extracted algorithm features from source code. Our results show that augmenting a meta-learner with algorithm features improves its average NDCG@10 performance by 8.83%.
arXiv Detail & Related papers (2025-08-06T13:06:24Z) - How Should We Meta-Learn Reinforcement Learning Algorithms? [74.37180723338591]
We carry out an empirical comparison of the different approaches when applied to a range of meta-learned algorithms. In addition to meta-train and meta-test performance, we also investigate factors including interpretability, sample cost and train time. We propose several guidelines for meta-learning new RL algorithms which will help ensure that future learned algorithms are as performant as possible.
arXiv Detail & Related papers (2025-07-23T16:31:38Z) - Meta-learning Representations for Learning from Multiple Annotators [40.886894995806955]
We propose a meta-learning method for learning from multiple noisy annotators. The proposed method embeds each example in tasks into a latent space by using a neural network. We show the effectiveness of our method with real-world datasets with synthetic noise and real-world crowdsourcing datasets.
arXiv Detail & Related papers (2025-06-12T00:58:37Z) - Meta-Learning from Learning Curves for Budget-Limited Algorithm Selection [11.409496019407067]
In a budget-limited scenario, it is crucial to carefully select an algorithm candidate and allocate a budget for training it.
We propose a novel framework in which an agent must select the most promising algorithm during the learning process, without waiting until it is fully trained.
arXiv Detail & Related papers (2024-10-10T08:09:58Z) - Sample Complexity of Algorithm Selection Using Neural Networks and Its Applications to Branch-and-Cut [1.4624458429745086]
We build upon recent work in this line of research by considering the setup where, instead of selecting a single algorithm that has the best performance, we allow the possibility of selecting an algorithm based on the instance to be solved.
In particular, given a representative sample of instances, we learn a neural network that maps an instance of the problem to the most appropriate algorithm for that instance.
In other words, the neural network will take as input a mixed-integer optimization instance and output a decision that will result in a small branch-and-cut tree for that instance.
arXiv Detail & Related papers (2024-02-04T03:03:27Z) - Automatic learning algorithm selection for classification via
convolutional neural networks [0.0]
The goal of this study is to learn the inherent structure of the data without identifying meta-features.
Experiments with simulated datasets show that the proposed approach achieves nearly perfect performance in identifying linear and nonlinear patterns.
arXiv Detail & Related papers (2023-05-16T01:57:01Z) - The Information Geometry of Unsupervised Reinforcement Learning [133.20816939521941]
Unsupervised skill discovery is a class of algorithms that learn a set of policies without access to a reward function.
We show that unsupervised skill discovery algorithms do not learn skills that are optimal for every possible reward function.
arXiv Detail & Related papers (2021-10-06T13:08:36Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z) - Algorithm Selection on a Meta Level [58.720142291102135]
We introduce the problem of meta algorithm selection, which essentially asks for the best way to combine a given set of algorithm selectors.
We present a general methodological framework for meta algorithm selection as well as several concrete learning methods as instantiations of this framework.
arXiv Detail & Related papers (2021-07-20T11:23:21Z) - Meta Learning Black-Box Population-Based Optimizers [0.0]
We propose the use of meta-learning to infer population-based black-box optimizers.
We show that the meta-loss function encourages a learned algorithm to alter its search behavior so that it can easily fit into a new context.
arXiv Detail & Related papers (2021-03-05T08:13:25Z) - Towards Meta-Algorithm Selection [78.13985819417974]
Instance-specific algorithm selection (AS) deals with the automatic selection of an algorithm from a fixed set of candidates.
We show that meta-algorithm-selection can indeed prove beneficial in some cases.
arXiv Detail & Related papers (2020-11-17T17:27:33Z) - Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z) - Run2Survive: A Decision-theoretic Approach to Algorithm Selection based
on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.