Efficient Hyperparameter Tuning for Large Scale Kernel Ridge Regression
- URL: http://arxiv.org/abs/2201.06314v1
- Date: Mon, 17 Jan 2022 09:57:32 GMT
- Title: Efficient Hyperparameter Tuning for Large Scale Kernel Ridge Regression
- Authors: Giacomo Meanti, Luigi Carratino, Ernesto De Vito, Lorenzo Rosasco
- Abstract summary: We propose a complexity regularization criterion based on a data dependent penalty, and discuss its efficient optimization.
Our analysis shows the benefit of the proposed approach, which we incorporate into a library for large scale kernel methods to derive adaptively tuned solutions.
- Score: 19.401624974011746
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Kernel methods provide a principled approach to nonparametric learning. While
their basic implementations scale poorly to large problems, recent advances
showed that approximate solvers can efficiently handle massive datasets. A
shortcoming of these solutions is that hyperparameter tuning is not taken care
of, and left for the user to perform. Hyperparameters are crucial in practice
and the lack of automated tuning greatly hinders efficiency and usability. In
this paper, we work to fill this gap, focusing on kernel ridge regression
based on the Nyström approximation. After reviewing and contrasting a number
of hyperparameter tuning strategies, we propose a complexity regularization
criterion based on a data dependent penalty, and discuss its efficient
optimization. Then, we proceed to a careful and extensive empirical evaluation
highlighting strengths and weaknesses of the different tuning strategies. Our
analysis shows the benefit of the proposed approach, which we incorporate
into a library for large scale kernel methods to derive adaptively tuned
solutions.
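For intuition, below is a minimal, self-contained sketch of Nyström kernel ridge regression with its two hyperparameters (kernel width and regularization) selected by a complexity-regularized criterion. This is not the authors' implementation and does not use their library: the Mallows-Cp-style penalty based on the effective degrees of freedom and an assumed noise variance is a generic stand-in for the paper's data-dependent penalty, and the function names (`nystroem_krr`, `cp_score`) and the grid search are illustrative assumptions.

```python
# Sketch only: generic Nystroem KRR tuned by a Cp-style penalized criterion.
import numpy as np

def gaussian_kernel(A, B, sigma):
    # Squared Euclidean distances, then Gaussian (RBF) kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystroem_krr(X, y, centers, sigma, lam):
    """Nystroem KRR: solve (Knm^T Knm + n*lam*Kmm) alpha = Knm^T y."""
    n, m = X.shape[0], centers.shape[0]
    Knm = gaussian_kernel(X, centers, sigma)        # (n, m)
    Kmm = gaussian_kernel(centers, centers, sigma)  # (m, m)
    A = Knm.T @ Knm + n * lam * Kmm + 1e-10 * np.eye(m)
    alpha = np.linalg.solve(A, Knm.T @ y)
    # Effective degrees of freedom: trace of the hat matrix Knm A^{-1} Knm^T.
    dof = np.trace(np.linalg.solve(A, Knm.T @ Knm))
    return alpha, dof

def cp_score(X, y, centers, sigma, lam, noise_var):
    """Penalized empirical risk: training MSE + 2 * noise_var * dof / n."""
    alpha, dof = nystroem_krr(X, y, centers, sigma, lam)
    preds = gaussian_kernel(X, centers, sigma) @ alpha
    return np.mean((y - preds) ** 2) + 2 * noise_var * dof / X.shape[0]

# Toy usage: pick (sigma, lam) on synthetic 1-D data by grid search.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
centers = X[rng.choice(500, size=50, replace=False)]  # Nystroem landmarks
noise_var = 0.01  # assumed known here; in practice it must be estimated
grid = [(s, l) for s in (0.3, 1.0, 3.0) for l in (1e-6, 1e-4, 1e-2)]
sigma, lam = min(grid, key=lambda p: cp_score(X, y, centers, *p, noise_var))
print("selected sigma, lam:", sigma, lam)
```

The abstract's emphasis on "efficient optimization" suggests the paper optimizes its criterion directly rather than by the exhaustive grid search used in this toy example.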
Related papers
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z) - Tune As You Scale: Hyperparameter Optimization For Compute Efficient
Training [0.0]
We propose a practical method for robustly tuning large models.
CarBS performs local search around the performance-cost frontier.
Among our results, we effectively solve the entire ProcGen benchmark just by tuning a simple baseline.
arXiv Detail & Related papers (2023-06-13T18:22:24Z) - META-STORM: Generalized Fully-Adaptive Variance Reduced SGD for
Unbounded Functions [23.746620619512573]
Recent work overcomes the need to compute gradients over "mega-batches".
The proposed fully-adaptive variants are competitive on deep learning tasks.
arXiv Detail & Related papers (2022-09-29T15:12:54Z) - Provably tuning the ElasticNet across instances [53.0518090093538]
We consider the problem of tuning the regularization parameters of Ridge regression, LASSO, and the ElasticNet across multiple problem instances.
Our results are the first general learning-theoretic guarantees for this important class of problems.
arXiv Detail & Related papers (2022-07-20T21:22:40Z) - STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate stochastic non-convex optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters.
arXiv Detail & Related papers (2021-11-01T15:43:36Z) - Efficient Hyperparameter Tuning with Dynamic Accuracy Derivative-Free
Optimization [0.27074235008521236]
We apply a recent dynamic accuracy derivative-free optimization method to hyperparameter tuning.
This method allows inexact evaluations of the learning problem while retaining convergence guarantees.
We demonstrate its robustness and efficiency compared to a fixed accuracy approach.
arXiv Detail & Related papers (2020-11-06T00:59:51Z) - Optimizing generalization on the train set: a novel gradient-based
framework to train parameters and hyperparameters simultaneously [0.0]
Generalization is a central problem in Machine Learning.
We present a novel approach based on a new measure of risk that allows us to develop novel fully automatic procedures for generalization.
arXiv Detail & Related papers (2020-06-11T18:04:36Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z) - Implicit differentiation of Lasso-type models for hyperparameter
optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
arXiv Detail & Related papers (2020-02-20T18:43:42Z) - Online Parameter Estimation for Safety-Critical Systems with Gaussian
Processes [6.122161391301866]
We present a Bayesian optimization framework based on Gaussian processes (GPs) for online parameter estimation.
It uses an efficient search strategy over a response surface in the parameter space to find the global optimum with minimal function evaluations.
We demonstrate our technique on an actuated planar pendulum and safety-critical quadrotor in simulation with changing parameters.
arXiv Detail & Related papers (2020-02-18T20:38:00Z) - Support recovery and sup-norm convergence rates for sparse pivotal
estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all information) and is not responsible for any consequences of its use.