Solving Kernel Ridge Regression with Gradient-Based Optimization Methods
- URL: http://arxiv.org/abs/2306.16838v5
- Date: Mon, 26 Feb 2024 10:59:48 GMT
- Title: Solving Kernel Ridge Regression with Gradient-Based Optimization Methods
- Authors: Oskar Allerbo
- Abstract summary: Kernel ridge regression, KRR, is a generalization of linear ridge regression that is non-linear in the data, but linear in the parameters.
We show theoretically and empirically how the $\ell_1$ and $\ell_\infty$ penalties, and the corresponding gradient-based optimization algorithms, produce sparse and robust kernel regression solutions, respectively.
- Score: 1.5229257192293204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Kernel ridge regression, KRR, is a generalization of linear ridge regression
that is non-linear in the data, but linear in the parameters. Here, we
introduce an equivalent formulation of the objective function of KRR, opening
up both for using penalties other than the ridge penalty and for studying
kernel ridge regression from the perspective of gradient descent. Using a
continuous-time perspective, we derive a closed-form solution for solving
kernel regression with gradient descent, something we refer to as kernel
gradient flow, KGF, and theoretically bound the differences between KRR and
KGF, where, for the latter, regularization is obtained through early stopping.
We also generalize KRR by replacing the ridge penalty with the $\ell_1$ and
$\ell_\infty$ penalties, respectively, and use the fact that, analogously to the
similarities between KGF and KRR, $\ell_1$ regularization and forward stagewise
regression (also known as coordinate descent), and $\ell_\infty$ regularization
and sign gradient descent, follow similar solution paths. We can thus alleviate
the need for computationally heavy algorithms based on proximal gradient
descent. We show theoretically and empirically how the $\ell_1$ and
$\ell_\infty$ penalties, and the corresponding gradient-based optimization
algorithms, produce sparse and robust kernel regression solutions,
respectively.
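To make the KGF-KRR correspondence concrete, here is a minimal sketch (not the paper's code) comparing kernel ridge regression with a closed-form kernel gradient flow fit. It assumes a Gaussian kernel, fitted values $\hat{y}(t) = (I - e^{-tK})y$ for gradient flow on the squared loss in function space, and the correspondence $t \approx 1/\lambda$; the paper's exact formulation and bounds may differ.

```python
import numpy as np
from scipy.linalg import expm

# Toy one-dimensional regression data (illustrative only).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 40))
y = np.sin(x) + 0.3 * rng.normal(size=x.size)

def gaussian_kernel(a, b, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / bandwidth) ** 2)

K = gaussian_kernel(x, x)
n = len(x)

# Kernel ridge regression: alpha = (K + lambda*I)^{-1} y, fitted values K @ alpha.
lam = 0.1
yhat_krr = K @ np.linalg.solve(K + lam * np.eye(n), y)

# Kernel gradient flow (assumed closed form): gradient flow on the squared
# loss in function space gives fitted values (I - exp(-t*K)) y, with the
# stopping time t playing the role of 1/lambda.
t = 1.0 / lam
yhat_kgf = (np.eye(n) - expm(-t * K)) @ y

# The two fits should be close, but not identical.
print("max |KRR - KGF| fit difference:", np.max(np.abs(yhat_krr - yhat_kgf)))
```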
Related papers
- Convergence Analysis of Adaptive Gradient Methods under Refined Smoothness and Noise Assumptions [18.47705532817026]
Adaptive gradient methods are arguably the most successful optimization algorithms for neural network training.
We show that adaptive gradient methods can potentially shave a dimension-dependent factor that reflects the problem's $\ell_\infty/\ell_2$ geometry.
arXiv Detail & Related papers (2024-06-07T02:55:57Z) - Solving Kernel Ridge Regression with Gradient Descent for a Non-Constant
Kernel [1.5229257192293204]
Kernel ridge regression, KRR, is a generalization of linear ridge regression that is non-linear in the data, but linear in the parameters.
Using the iterative approach opens up for changing the kernel during training, something that is investigated in this paper.
We propose an update scheme for the bandwidth of translation-invariant kernels, where we let the bandwidth decrease to zero during training.
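As a rough illustration of the decreasing-bandwidth idea, here is a hypothetical sketch: kernel gradient descent on the training data while the Gaussian bandwidth decays geometrically toward zero. The schedule, step size, and kernel choice are assumptions for illustration, not the update scheme from that paper.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def kgd_decreasing_bandwidth(x, y, n_iter=200, sigma_start=2.0, sigma_end=0.2):
    """Kernel gradient descent with a geometrically decaying bandwidth.

    Within each iteration the fitted values follow
    f <- f + step * K(sigma) @ (y - f); sigma shrinks every iteration.
    """
    n = len(y)
    step = 1.0 / n          # stable, since lambda_max(K) <= n when K_ii = 1
    alpha = np.zeros_like(y)
    decay = (sigma_end / sigma_start) ** (1.0 / n_iter)
    sigma = sigma_start
    for _ in range(n_iter):
        K = gaussian_kernel(x, x, sigma)
        alpha += step * (y - K @ alpha)   # update in coefficient space
        sigma *= decay
    return alpha, sigma
```

Predictions at new inputs would then use the final, small bandwidth, e.g. `gaussian_kernel(x_new, x, sigma) @ alpha`.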
arXiv Detail & Related papers (2023-11-03T07:43:53Z) - Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner, and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
arXiv Detail & Related papers (2023-10-31T16:15:13Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Gradient Descent Converges Linearly for Logistic Regression on Separable
Data [17.60502131429094]
We show that running gradient descent with a variable learning rate guarantees loss $f(x) \leq 1.1 \cdot f(x^*) + \epsilon$ for the logistic regression objective.
We also apply our ideas to sparse logistic regression, where they lead to an exponential improvement of the sparsity-error tradeoff.
arXiv Detail & Related papers (2023-06-26T02:15:26Z) - High-probability Bounds for Non-Convex Stochastic Optimization with
Heavy Tails [55.561406656549686]
We consider non-convex stochastic optimization using first-order algorithms for which the gradient estimates may have heavy tails.
We show that a combination of gradient clipping, momentum, and normalized gradient descent yields convergence to critical points with high probability with the best-known iteration complexity for smooth losses.
arXiv Detail & Related papers (2021-06-28T00:17:01Z) - A Variance Controlled Stochastic Method with Biased Estimation for
Faster Non-convex Optimization [0.0]
We propose a new technique, variance controlled stochastic gradient (VCSG), to improve the performance of stochastic variance reduced gradient (SVRG).
A parameter $\lambda$ is introduced in VCSG to avoid over-reducing the variance by SVRG.
The method requires $\mathcal{O}(\min\{1/\epsilon^{3/2},\, n^{1/4}/\epsilon\})$ gradient evaluations, which improves the leading gradient complexity.
arXiv Detail & Related papers (2021-02-19T12:22:56Z) - Online nonparametric regression with Sobolev kernels [99.12817345416846]
We derive the regret upper bounds on the classes of Sobolev spaces $W_p^\beta(\mathcal{X})$, $p \geq 2$, $\beta > \frac{d}{p}$.
The upper bounds are supported by the minimax regret analysis, which reveals that in the cases $\beta > \frac{d}{2}$ or $p = \infty$ these rates are (essentially) optimal.
arXiv Detail & Related papers (2021-02-06T15:05:14Z) - A Bregman Method for Structure Learning on Sparse Directed Acyclic
Graphs [84.7328507118758]
We develop a Bregman proximal gradient method for structure learning.
We measure the impact of curvature against a highly nonlinear iteration.
We test our method on various synthetic and real data sets.
arXiv Detail & Related papers (2020-11-05T11:37:44Z) - Early stopping and polynomial smoothing in regression with reproducing
kernels [2.132096006921048]
We study the problem of early stopping for iterative learning algorithms in a reproducing kernel Hilbert space (RKHS).
We present a data-driven rule to perform early stopping without a validation set that is based on the so-called minimum discrepancy principle.
The proposed rule is proved to be minimax-optimal over different types of kernel spaces.
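For intuition, here is a minimal sketch of a discrepancy-style stopping rule (an assumption-laden illustration, not the data-driven rule from that paper): stop kernel gradient descent once the mean squared training residual first drops below an estimate of the noise variance.

```python
import numpy as np

def early_stop_by_discrepancy(K, y, noise_var, max_iter=1000):
    """Kernel gradient descent stopped by a minimum-discrepancy-style rule.

    Iterates until the mean squared residual first falls below noise_var
    (assumed known here; in practice such quantities are estimated from data).
    """
    n = len(y)
    step = 1.0 / n                    # stable, since lambda_max(K) <= n when K_ii = 1
    alpha = np.zeros_like(y)
    for it in range(max_iter):
        residual = y - K @ alpha
        if np.mean(residual ** 2) <= noise_var:
            return alpha, it          # early stop: discrepancy criterion met
        alpha += step * residual
    return alpha, max_iter
```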
arXiv Detail & Related papers (2020-07-14T05:27:18Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad (OAdagrad) algorithm for nonconvex-nonconcave min-max problems.
Our experiments on GAN training show that the advantage of adaptive gradient algorithms over non-adaptive ones can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.