Towards Meta-learned Algorithm Selection using Implicit Fidelity Information
- URL: http://arxiv.org/abs/2206.03130v1
- Date: Tue, 7 Jun 2022 09:14:24 GMT
- Title: Towards Meta-learned Algorithm Selection using Implicit Fidelity Information
- Authors: Aditya Mohan, Tim Ruhkopf, Marius Lindauer
- Abstract summary: IMFAS produces informative landmarks, easily enriched by arbitrary meta-features at a low computational cost.
We show it is able to beat Successive Halving with at most half the fidelity sequence during test time.
- Score: 13.750624267664156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatically selecting the best-performing algorithm for a given dataset, or
ranking multiple candidates by their expected performance, supports users in
developing new machine learning applications. Most approaches for this problem
rely on dataset meta-features and landmarking performances to capture the
salient topology of the datasets and those topologies that the algorithms
attend to. Landmarking usually exploits cheap algorithms not necessarily in the
pool of candidate algorithms to get inexpensive approximations of the topology.
While somewhat indicative, handcrafted dataset meta-features and landmarks are
likely insufficient descriptors, strongly depending on the alignment of the
geometries the landmarks and candidates search for. We propose IMFAS, a method
to exploit multi-fidelity landmarking information directly from the candidate
algorithms in the form of non-parametrically non-myopic meta-learned learning
curves via LSTM networks in a few-shot setting during testing. Using this
mechanism, IMFAS jointly learns the topology of the datasets and the
inductive biases of algorithms without expensively training them to
convergence. IMFAS produces informative landmarks, easily enriched by arbitrary
meta-features at a low computational cost, capable of producing the desired
ranking using cheaper fidelities. We additionally show that it beats Successive
Halving with at most half the fidelity sequence during test time.
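To make the mechanism concrete, here is a minimal sketch (PyTorch) of the core idea as described in the abstract: an LSTM consumes the partial multi-fidelity learning curves of all candidate algorithms, enriched with dataset meta-features, and emits scores from which a ranking is read off. The class name, tensor shapes, and toy inputs are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of LSTM-based ranking from partial learning curves.
import torch
import torch.nn as nn

class CurveRanker(nn.Module):
    def __init__(self, n_algos: int, n_meta_features: int, hidden: int = 64):
        super().__init__()
        # One LSTM step per fidelity; the input at each step is the vector of
        # validation scores of all candidates at that fidelity.
        self.lstm = nn.LSTM(input_size=n_algos, hidden_size=hidden, batch_first=True)
        # Dataset meta-features can cheaply enrich the curve encoding.
        self.head = nn.Linear(hidden + n_meta_features, n_algos)

    def forward(self, curves: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        # curves: (batch, n_fidelities_seen, n_algos); meta: (batch, n_meta_features)
        _, (h, _) = self.lstm(curves)
        scores = self.head(torch.cat([h[-1], meta], dim=-1))
        return scores  # higher score = higher predicted final rank

# Toy usage: 8 candidates, 3 of 10 fidelities observed, 4 meta-features.
model = CurveRanker(n_algos=8, n_meta_features=4)
curves = torch.rand(1, 3, 8)   # partial learning curves (cheap fidelities only)
meta = torch.rand(1, 4)
ranking = model(curves, meta).argsort(descending=True)
print(ranking)  # predicted ordering of the 8 candidates
```

In a full meta-learned setup, such a model would be trained across many datasets with a ranking loss and then applied few-shot at test time, consuming only the cheap early fidelities.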
Related papers
- Meta-Learning from Learning Curves for Budget-Limited Algorithm Selection [11.409496019407067]
In a budget-limited scenario, it is crucial to carefully select an algorithm candidate and allocate a budget for training it.
We propose a novel framework in which an agent must select the most promising algorithm during the learning process, without waiting until it is fully trained.
arXiv Detail & Related papers (2024-10-10T08:09:58Z)
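A hedged sketch of the budget-limited setting described in the entry above: an agent interleaves partial training of several candidates and commits to one before any is fully trained. The greedy leader-following policy and the synthetic partial_train stand-in are illustrative assumptions, not the paper's framework.

```python
# Toy budget-limited algorithm selection from partial learning curves.
import random

def partial_train(algo_id: int, step: int) -> float:
    # Stand-in for one unit of real training; returns a validation score.
    return 1.0 - (1.0 + algo_id) / (2.0 + step) + random.gauss(0, 0.01)

def budget_limited_select(n_algos: int = 5, budget: int = 20) -> int:
    curves = {a: [partial_train(a, 0)] for a in range(n_algos)}  # one free step each
    for _ in range(budget - n_algos):
        # Greedy agent: spend the next budget unit on the current leader.
        leader = max(curves, key=lambda a: curves[a][-1])
        curves[leader].append(partial_train(leader, len(curves[leader])))
    return max(curves, key=lambda a: curves[a][-1])

print("selected algorithm:", budget_limited_select())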
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data and then assigns a minimal number of available labeled data points to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
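The pipeline in the entry above (train a SOM on unlabeled data, then attach the few available labels to best matching units) can be illustrated with a deliberately tiny NumPy implementation; the grid size, training schedule, and toy data are assumptions, not the paper's configuration.

```python
# Minimal SOM with BMU-based label propagation.
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=5, iters=2000, lr=0.5, sigma=1.5):
    W = rng.random((grid * grid, X.shape[1]))                  # unit weights
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)])
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(1))                 # best matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(1)              # distance on the grid
        h = np.exp(-d2 / (2 * sigma ** 2)) * lr * (1 - t / iters)
        W += h[:, None] * (x - W)                              # pull units toward sample
    return W

X_unlabeled = rng.normal(size=(500, 2))
X_labeled = np.array([[-1.5, -1.5], [1.5, 1.5]])   # minimal labeled set
y_labeled = np.array([0, 1])

W = train_som(X_unlabeled)
bmu_of = lambda x: np.argmin(((W - x) ** 2).sum(1))
bmu_labels = {bmu_of(x): y for x, y in zip(X_labeled, y_labeled)}

def predict(x):
    # Label a point with the label of the closest *labeled* BMU.
    nearest = min(bmu_labels, key=lambda u: ((W[u] - x) ** 2).sum())
    return bmu_labels[nearest]

print(predict(np.array([1.0, 1.2])))  # likely class 1
```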
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
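For reference, the over-sampling primitive that AutoSMOTE automates is SMOTE-style interpolation between a minority sample and one of its minority-class neighbors. The fixed-k NumPy sketch below shows only that primitive; the hierarchical RL that jointly optimizes the sampling decisions is not reproduced here.

```python
# SMOTE-style synthetic minority over-sampling primitive.
import numpy as np

rng = np.random.default_rng(0)

def smote_like(X_min: np.ndarray, n_new: int, k: int = 5) -> np.ndarray:
    """Generate n_new synthetic minority samples by neighbor interpolation."""
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # k nearest minority neighbors of sample i (excluding itself).
        d = ((X_min - X_min[i]) ** 2).sum(1)
        nbrs = np.argsort(d)[1:k + 1]
        j = rng.choice(nbrs)
        lam = rng.random()                         # interpolation coefficient
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

X_minority = rng.normal(size=(20, 3))
X_aug = np.vstack([X_minority, smote_like(X_minority, n_new=40)])
print(X_aug.shape)  # (60, 3): minority class over-sampled 3x
```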
- One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive Least-Squares [8.443742714362521]
We develop an algorithm for one-pass learning which seeks to perfectly fit every new datapoint while changing the parameters in a direction that causes the least change to the predictions on previous datapoints.
Our algorithm uses memory efficiently by exploiting the structure of the streaming data via incremental principal component analysis (IPCA).
Our experiments show the effectiveness of the proposed method compared to the baselines.
arXiv Detail & Related papers (2022-07-28T02:01:31Z)
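The recursive least-squares (RLS) building block behind such one-pass updates fits in a few lines: each new datapoint updates the weights and the inverse-covariance estimate exactly once, with no replay of past data. This is a textbook RLS sketch, not the paper's algorithm, which additionally bridges to orthogonal gradient descent and uses IPCA for memory.

```python
# One-pass recursive least squares (RLS).
import numpy as np

rng = np.random.default_rng(0)

class RLS:
    def __init__(self, dim: int, alpha: float = 1e6):
        self.w = np.zeros(dim)
        self.P = alpha * np.eye(dim)    # inverse covariance estimate

    def update(self, x: np.ndarray, y: float) -> None:
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)         # gain vector
        self.w += k * (y - self.w @ x)  # correct prediction error on new point
        self.P -= np.outer(k, Px)       # rank-1 downdate of the inverse covariance

# Stream data once; never store or replay past datapoints.
true_w = np.array([2.0, -1.0, 0.5])
model = RLS(dim=3)
for _ in range(200):
    x = rng.normal(size=3)
    model.update(x, true_w @ x + rng.normal(scale=0.01))
print(model.w)  # close to [2.0, -1.0, 0.5]
```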
- Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and Personalized Federated Learning [56.17603785248675]
Model-agnostic meta-learning (MAML) has become a popular research area.
Existing MAML algorithms rely on the 'episode' idea by sampling a few tasks and data points to update the meta-model at each iteration.
This paper proposes memory-based algorithms for MAML that converge with vanishing error.
arXiv Detail & Related papers (2021-06-09T08:47:58Z)
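The 'episode' idea can be made concrete with a first-order MAML sketch: every iteration samples a few tasks, adapts a copy of the meta-model with an inner gradient step per task, and updates the meta-parameters from the post-adaptation query losses. The toy sine-regression tasks and the first-order approximation are assumptions for brevity, not the memory-based algorithms the paper proposes.

```python
# First-order MAML with episodic task sampling (PyTorch).
import copy, math
import torch
import torch.nn as nn

meta_model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
meta_opt = torch.optim.Adam(meta_model.parameters(), lr=1e-3)
inner_lr = 0.01

def sample_task():
    # A toy task family: regress y = a * sin(x + b) for random (a, b).
    a, b = 2 * torch.rand(1).item(), math.pi * torch.rand(1).item()
    def draw(n):
        x = 2 * math.pi * torch.rand(n, 1)
        return x, a * torch.sin(x + b)
    return draw

for episode in range(500):
    meta_opt.zero_grad()
    for _ in range(4):                      # a few tasks per episode
        draw = sample_task()
        xs, ys = draw(10)                   # support set
        fast = copy.deepcopy(meta_model)    # task-specific copy
        inner_loss = nn.functional.mse_loss(fast(xs), ys)
        grads = torch.autograd.grad(inner_loss, list(fast.parameters()))
        with torch.no_grad():               # one inner adaptation step
            for p, g in zip(fast.parameters(), grads):
                p -= inner_lr * g
        xq, yq = draw(10)                   # query set from the same task
        outer_loss = nn.functional.mse_loss(fast(xq), yq)
        # First-order MAML: query-loss gradients w.r.t. the adapted copy
        # are accumulated directly into the meta-parameters.
        for p, g in zip(meta_model.parameters(),
                        torch.autograd.grad(outer_loss, list(fast.parameters()))):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()

print("meta-trained; adapt with one inner step on a new task's support set")
```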
- Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
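The "computation then aggregation" (CTA) pattern mentioned in the entry above is easy to see in a toy FedAvg round: each client runs several local gradient steps on its own data, then the server replaces the global model with a dataset-size-weighted average of the local models. The linear-regression clients below are illustrative assumptions.

```python
# Toy FedAvg: local computation, then weighted aggregation.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])

def make_client(n):
    X = rng.normal(size=(n, 2))
    return X, X @ true_w + rng.normal(scale=0.1, size=n)

clients = [make_client(n) for n in (30, 80, 50)]    # clients of different sizes
w = np.zeros(2)                                     # global model

for round_ in range(20):
    local_models, sizes = [], []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):                          # local computation (SGD steps)
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        local_models.append(w_local)
        sizes.append(len(y))
    # Aggregation: dataset-size weighted average of local models.
    w = np.average(local_models, axis=0, weights=sizes)

print(w)  # close to [1.0, -2.0]
```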