Sensitivity Analysis of RF+clust for Leave-one-problem-out Performance Prediction
- URL: http://arxiv.org/abs/2305.19375v1
- Date: Tue, 30 May 2023 19:31:31 GMT
- Title: Sensitivity Analysis of RF+clust for Leave-one-problem-out Performance Prediction
- Authors: Ana Nikolikj, Michal Pluháček, Carola Doerr, Peter Korošec, and Tome Eftimov
- Abstract summary: Leave-one-problem-out (LOPO) performance prediction requires machine learning (ML) models to extrapolate algorithms' performance from a set of training problems to a previously unseen problem.
Recent work suggested enriching standard random forest (RF) performance regression models with a weighted average of algorithms' performance on training problems that are considered similar to a test problem.
Here, we extend the RF+clust approach by adjusting the distance-based weights with the importance of the features for performance regression.
- Score: 0.7046417074932257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Leave-one-problem-out (LOPO) performance prediction requires machine learning
(ML) models to extrapolate algorithms' performance from a set of training
problems to a previously unseen problem. LOPO is a very challenging task even
for state-of-the-art approaches. Models that work well in the easier
leave-one-instance-out scenario often fail to generalize well to the LOPO
setting. To address the LOPO problem, recent work suggested enriching standard
random forest (RF) performance regression models with a weighted average of
algorithms' performance on training problems that are considered similar to a
test problem. More precisely, in this RF+clust approach, the weights are chosen
proportionally to the distances of the problems in some feature space. In this
work, we extend the RF+clust approach by adjusting the distance-based
weights with the importance of the features for performance regression. That
is, instead of considering cosine distance in the feature space, we consider a
weighted distance measure, with weights depending on the relevance of the
feature for the regression model. Our empirical evaluation of the modified
RF+clust approach on the CEC 2014 benchmark suite confirms its advantages over
the naive distance measure. However, we also observe room for improvement, in
particular with respect to more expressive feature portfolios.
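For illustration, the core modification can be sketched in a few lines: scale each feature by its relevance for the regression model before computing the cosine distance. This is a minimal sketch under assumptions, not the authors' implementation; the random data, the elementwise application of the weights, and the use of scikit-learn's feature_importances_ as the relevance measure are all illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def weighted_cosine_distance(u, v, w):
    """Cosine distance after scaling each feature by its weight w
    (here: RF feature importances). With equal weights this reduces
    to the naive cosine distance used by plain RF+clust."""
    uw, vw = u * w, v * w
    sim = uw @ vw / (np.linalg.norm(uw) * np.linalg.norm(vw))
    return 1.0 - sim

# Hypothetical data: landscape features of the training problems and of
# the left-out test problem, plus one algorithm's performance values.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(23, 10))
y_train = rng.normal(size=23)
x_test = rng.normal(size=10)

rf = RandomForestRegressor(random_state=0).fit(X_train, y_train)
importances = rf.feature_importances_   # feature relevance for the regression

dists = np.array([weighted_cosine_distance(x, x_test, importances)
                  for x in X_train])
```

The resulting distances then play the same role as in plain RF+clust: they determine which training problems count as similar and how strongly their performance values are weighted.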
Related papers
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z) - RF+clust for Leave-One-Problem-Out Performance Prediction [0.9281671380673306]
- RF+clust for Leave-One-Problem-Out Performance Prediction [0.9281671380673306]
We study leave-one-problem-out (LOPO) performance prediction.
We analyze whether standard random forest (RF) model predictions can be improved by calibrating them with a weighted average of performance values.
arXiv Detail & Related papers (2023-01-23T16:14:59Z) - Adaptive LASSO estimation for functional hidden dynamic geostatistical
- Adaptive LASSO estimation for functional hidden dynamic geostatistical model [69.10717733870575]
We propose a novel model selection algorithm based on a penalised maximum likelihood estimator (PMLE) for functional hidden dynamic geostatistical models (f-HDGM).
The algorithm is based on iterative optimisation and uses an adaptive least absolute shrinkage and selection operator (LASSO) penalty function, wherein the weights are obtained from the unpenalised f-HDGM maximum-likelihood estimators.
arXiv Detail & Related papers (2022-08-10T19:17:45Z) - Adaptive Self-supervision Algorithms for Physics-informed Neural
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
arXiv Detail & Related papers (2022-07-08T18:17:06Z) - RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
- RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching for an input that maximizes a black-box objective function, given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
arXiv Detail & Related papers (2021-10-27T05:37:12Z) - Reducing the Amortization Gap in Variational Autoencoders: A Bayesian
- Reducing the Amortization Gap in Variational Autoencoders: A Bayesian Random Function Approach [38.45568741734893]
Inference in our GP model is done by a single feed-forward pass through the network, significantly faster than semi-amortized methods.
We show that our approach attains higher test data likelihood than state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-02-05T13:01:12Z) - Robust priors for regularized regression [12.945710636153537]
Penalized regression approaches like ridge regression shrink weights toward zero, but zero weights are usually not a sensible prior.
Inspired by simple and robust decisions humans use, we constructed non-zero priors for penalized regression models.
Models with robust priors had excellent worst-case performance.
arXiv Detail & Related papers (2020-10-06T10:43:14Z) - Learning a Unified Sample Weighting Network for Object Detection [113.98404690619982]
- Learning a Unified Sample Weighting Network for Object Detection [113.98404690619982]
Region sampling or weighting is critical to the success of modern region-based object detectors.
We argue that sample weighting should be data-dependent and task-dependent.
We propose a unified sample weighting network to predict a sample's task weights.
arXiv Detail & Related papers (2020-06-11T16:19:16Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)