Fast Kernel Methods for Generic Lipschitz Losses via $p$-Sparsified
Sketches
- URL: http://arxiv.org/abs/2206.03827v7
- Date: Mon, 6 Nov 2023 12:41:16 GMT
- Title: Fast Kernel Methods for Generic Lipschitz Losses via $p$-Sparsified
Sketches
- Authors: Tamim El Ahmad, Pierre Laforgue, Florence d'Alché-Buc
- Abstract summary: Kernel methods are learning algorithms that enjoy solid theoretical foundations while suffering from important computational limitations.
We show that sparsified Gaussian (and Rademacher) sketches still produce theoretically-valid approximations.
We derive excess risk bounds for both single and multiple output kernel problems, with generic Lipschitz losses.
- Score: 3.3379026542599934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Kernel methods are learning algorithms that enjoy solid theoretical
foundations while suffering from important computational limitations.
Sketching, which consists of searching for solutions within a subspace of reduced
dimension, is a well-studied approach to alleviating these computational burdens.
However, statistically accurate sketches, such as the Gaussian sketch, usually
contain few null entries, so applying them to kernel methods and their
non-sparse Gram matrices remains slow in practice. In this paper, we show
that sparsified Gaussian (and Rademacher) sketches still produce
theoretically-valid approximations while allowing for important time and space
savings thanks to an efficient \emph{decomposition trick}. To support our
method, we derive excess risk bounds for both single- and multiple-output kernel
problems with generic Lipschitz losses, thereby providing new guarantees for a
wide range of applications, from robust regression to multiple quantile
regression. Our theoretical results are complemented with experiments showing
the empirical superiority of our approach over state-of-the-art sketching methods.
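As a rough illustration of the sketching idea described above (not the authors' code; the kernel, sizes, and function names below are assumptions), the NumPy snippet draws a p-sparsified Gaussian sketch and multiplies it against a Gram matrix. Because many columns of the sketch are entirely null, only the kernel rows indexed by the non-null columns ever need to be computed, which is where the time and space savings come from.

```python
import numpy as np

def p_sparsified_gaussian_sketch(s, n, p, rng):
    # Entries are 0 with probability 1 - p and N(0, 1/(s*p)) otherwise,
    # so that E[S.T @ S] = I_n (an isometry in expectation).
    mask = rng.random((s, n)) < p
    return (rng.standard_normal((s, n)) / np.sqrt(s * p)) * mask

def gaussian_kernel(A, B, gamma=1.0):
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
n, s, p = 2000, 100, 0.005
X = rng.standard_normal((n, 5))

S = p_sparsified_gaussian_sketch(s, n, p, rng)
nz = np.flatnonzero(np.abs(S).sum(axis=0))     # columns of S that are not all zero
# Only the Gram rows indexed by `nz` are needed to form S @ K,
# so the full n x n kernel matrix is never materialised.
K_rows = gaussian_kernel(X[nz], X)             # |nz| x n block of the Gram matrix
SK = S[:, nz] @ K_rows                         # s x n sketched Gram matrix
print(SK.shape, f"kernel rows computed: {len(nz)} / {n}")
```

A p-sparsified Rademacher sketch would follow the same pattern, replacing the Gaussian draw with entries of value ±1/sqrt(s*p) under the same mask.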
Related papers
- Inertial Quadratic Majorization Minimization with Application to Kernel Regularized Learning [1.0282274843007797]
We introduce the Quadratic Majorization Minimization with Extrapolation (QMME) framework and establish its sequential convergence properties. To demonstrate practical advantages, we apply QMME to large-scale kernel regularized learning problems.
arXiv Detail & Related papers (2025-07-06T05:17:28Z)
- Variance Reduction Methods Do Not Need to Compute Full Gradients: Improved Efficiency through Shuffling [44.31966204357333]
We develop memory-efficient algorithms for large-scale machine learning problems.
We use two key techniques to make our approach memory-efficient and avoid full gradient computations.
arXiv Detail & Related papers (2025-02-20T15:37:45Z)
- Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that the methods under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z)
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner, and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
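For intuition only, the toy loop below performs randomized coordinate descent in the dual of kernel ridge regression (a standard stand-in for the GP posterior-mean computation); it is not the paper's stochastic dual descent algorithm, and the data, kernel, and names are made up.

```python
import numpy as np

def dual_coordinate_descent(K, y, lam, sweeps=200, seed=0):
    """Randomised coordinate descent on the dual objective
    0.5 * a @ (K + lam*I) @ a - a @ y of kernel ridge regression.
    Each coordinate update is an exact one-dimensional minimisation."""
    rng = np.random.default_rng(seed)
    n = len(y)
    a = np.zeros(n)
    Ka = np.zeros(n)                      # running value of K @ a
    for _ in range(sweeps):
        for i in rng.permutation(n):
            delta = (y[i] - Ka[i] - lam * a[i]) / (K[i, i] + lam)
            a[i] += delta
            Ka += delta * K[:, i]         # O(n) incremental update of K @ a
    return a

# toy check against the closed-form solution (K + lam*I)^{-1} y
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
a = dual_coordinate_descent(K, y, lam=0.1)
a_star = np.linalg.solve(K + 0.1 * np.eye(200), y)
print(np.abs(a - a_star).max())           # small after enough sweeps
```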
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on kernel learning and Bayesian spike-and-slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- Infeasible Deterministic, Stochastic, and Variance-Reduction Algorithms for Optimization under Orthogonality Constraints [9.301728976515255]
This article provides new practical and theoretical developments for the landing algorithm.
First, the method is extended to the Stiefel manifold.
We also consider variance reduction algorithms when the cost function is an average of many functions.
arXiv Detail & Related papers (2023-03-29T07:36:54Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
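To make the one-sample mechanics concrete, here is a hedged sketch of a Frank-Wolfe (conditional gradient) loop over an l1-ball driven by a SAG-style gradient table refreshed from a single sample per iteration; it is only an illustration, not the paper's method or its guarantees, and the objective, constraint set, and step size are assumptions.

```python
import numpy as np

def one_sample_cgm(A, b, radius=1.0, iters=500, seed=0):
    """Frank-Wolfe over an l1-ball for f(x) = (1/n) * sum_i 0.5*(a_i @ x - b_i)^2,
    with a SAG-style gradient table refreshed from one sample per iteration."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grad_table = np.zeros((n, d))          # stored per-sample gradients
    grad_avg = np.zeros(d)                 # running mean of the table
    for t in range(1, iters + 1):
        i = rng.integers(n)                # one sample per iteration
        g_i = A[i] * (A[i] @ x - b[i])     # fresh gradient of the i-th term
        grad_avg += (g_i - grad_table[i]) / n
        grad_table[i] = g_i
        # linear minimisation oracle for the l1-ball of the given radius
        j = np.argmax(np.abs(grad_avg))
        vertex = np.zeros(d)
        vertex[j] = -radius * np.sign(grad_avg[j])
        gamma = 2.0 / (t + 2)              # classical Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * vertex
    return x

# toy usage on a sparse regression instance
rng = np.random.default_rng(3)
A = rng.standard_normal((500, 20))
x_true = np.zeros(20)
x_true[:3] = [0.5, -0.3, 0.2]
b = A @ x_true + 0.01 * rng.standard_normal(500)
print(one_sample_cgm(A, b, radius=1.0)[:5])
```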
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method [35.62926659320816]
We introduce Skyformer, which replaces the softmax structure with a Gaussian kernel to stabilize model training and adapts the Nyström method to accelerate the computation.
Experiments on the Long Range Arena benchmark show that the proposed method achieves comparable or even better performance than full self-attention.
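The snippet below sketches the general idea only: Gaussian-kernel attention scores approximated with a Nyström factorization, so the n x n score matrix is never formed. The actual Skyformer construction differs in its details; the shapes, landmark choice, and names here are assumptions.

```python
import numpy as np

def gaussian_scores(A, B, sigma=1.0):
    # Gaussian-kernel "attention scores" between two sets of token vectors.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def nystrom_kernel_attention(X, V, m=32, sigma=1.0, seed=0):
    """Approximate K @ V, with K[i, j] = exp(-||x_i - x_j||^2 / (2*sigma^2)),
    via m landmark tokens: K ~= C @ pinv(W) @ C.T, which needs only
    O(n*m) kernel evaluations instead of O(n^2)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    landmarks = X[rng.choice(n, size=m, replace=False)]
    C = gaussian_scores(X, landmarks, sigma)           # n x m
    W = gaussian_scores(landmarks, landmarks, sigma)   # m x m
    return C @ (np.linalg.pinv(W) @ (C.T @ V))         # ~ K @ V, shape n x d

n, d = 512, 16
rng = np.random.default_rng(2)
X, V = rng.standard_normal((n, d)), rng.standard_normal((n, d))
out = nystrom_kernel_attention(X, V, m=32, sigma=4.0)
print(out.shape)   # (512, 16)
```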
arXiv Detail & Related papers (2021-10-29T18:28:49Z)
- Progressive Batching for Efficient Non-linear Least Squares [31.082253632197023]
Most improvements of the basic Gauss-Newton method tackle convergence guarantees or leverage the sparsity of the underlying problem structure for computational speedup.
Our work borrows ideas from both machine learning and statistics, and we present an approach for non-linear least squares that guarantees convergence while significantly reducing the required amount of computation.
arXiv Detail & Related papers (2020-10-21T13:00:04Z)
- Large-Scale Methods for Distributionally Robust Optimization [53.98643772533416]
We prove that our algorithms require a number of gradient evaluations independent of the training set size and the number of parameters.
Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9--36 times more efficient than full-batch methods.
arXiv Detail & Related papers (2020-10-12T17:41:44Z)
- On Distributed Non-convex Optimization: Projected Subgradient Method For Weakly Convex Problems in Networks [13.385373310554327]
The Moreau subgradient method converges linearly for weakly convex machine learning problems that satisfy a sharpness condition.
A distributed implementation of the subgradient method with a theoretical guarantee is proposed.
arXiv Detail & Related papers (2020-04-28T01:01:49Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
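As a hedged illustration of one of these embeddings, the toy sketch-and-precondition loop below uses a Gaussian projection to build a preconditioner for plain gradient descent on a least-squares problem; it is not the paper's optimal algorithm, and the sizes, step size, and names are assumptions.

```python
import numpy as np

def sketch_and_precondition_lstsq(A, b, sketch_rows=None, iters=100, seed=0):
    """Solve min_x ||A x - b||_2 by gradient descent preconditioned with the
    R factor of a Gaussian sketch S @ A: A @ inv(R) is well-conditioned with
    high probability, so a fixed step size converges quickly regardless of
    the conditioning of A."""
    n, d = A.shape
    s = sketch_rows or 10 * d
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((s, n)) / np.sqrt(s)      # Gaussian projection
    _, R = np.linalg.qr(S @ A)                        # d x d preconditioner
    z = np.zeros(d)                                   # iterate in preconditioned coordinates
    for _ in range(iters):
        residual = A @ np.linalg.solve(R, z) - b
        z -= 0.5 * np.linalg.solve(R.T, A.T @ residual)   # preconditioned gradient step
    return np.linalg.solve(R, z)

# toy usage: agrees with the direct solver on a random instance
rng = np.random.default_rng(4)
A = rng.standard_normal((5000, 50))
b = rng.standard_normal(5000)
x = sketch_and_precondition_lstsq(A, b)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.abs(x - x_star).max())
```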
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.