fastkqr: A Fast Algorithm for Kernel Quantile Regression
- URL: http://arxiv.org/abs/2408.05393v1
- Date: Sat, 10 Aug 2024 00:18:56 GMT
- Title: fastkqr: A Fast Algorithm for Kernel Quantile Regression
- Authors: Qian Tang, Yuwen Gu, Boxiang Wang
- Abstract summary: We introduce fastkqr, which significantly advances the computation of quantile regression in reproducing kernel Hilbert spaces.
The core of fastkqr is a finite smoothing algorithm that produces exact regression quantiles, rather than approximations.
In addition, we extend fastkqr to accommodate a flexible kernel quantile regression with a data-driven crossing penalty.
- Score: 6.850636409964172
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Quantile regression is a powerful tool for robust and heterogeneous learning that has seen applications in a diverse range of applied areas. However, its broader application is often hindered by the substantial computational demands arising from the non-smooth quantile loss function. In this paper, we introduce a novel algorithm named fastkqr, which significantly advances the computation of quantile regression in reproducing kernel Hilbert spaces. The core of fastkqr is a finite smoothing algorithm that produces exact regression quantiles, rather than approximations. To further accelerate the algorithm, we equip fastkqr with a novel spectral technique that carefully reutilizes matrix computations. In addition, we extend fastkqr to accommodate a flexible kernel quantile regression with a data-driven crossing penalty, addressing the interpretability challenges of crossing quantile curves at multiple levels. We have implemented fastkqr in a publicly available R package. Extensive simulations and real applications show that fastkqr matches the accuracy of state-of-the-art algorithms but can operate up to an order of magnitude faster.
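To make the smoothing idea concrete, the sketch below fits kernel quantile regression by gradient descent on a Huber-smoothed check loss. It is only an illustration of why smoothing the non-smooth quantile loss helps, not the fastkqr algorithm itself (whose finite smoothing recovers exact quantiles); the RBF kernel, smoothing width, and step size are arbitrary choices.

```python
# Minimal sketch: kernel quantile regression with a Huber-smoothed check loss.
# NOT the fastkqr finite smoothing algorithm; illustrative hyperparameters only.
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def smoothed_check_grad(r, tau, h):
    """Derivative of a Huber-smoothed check loss; h is the smoothing width."""
    return np.where(r >= h, tau, np.where(r <= -h, tau - 1.0, r / (2 * h) + tau - 0.5))

def kqr_fit(X, y, tau=0.5, lam=1e-2, gamma=1.0, h=0.1, lr=0.1, iters=2000):
    """Minimize (1/n) sum rho_tau(y_i - f(x_i)) + (lam/2) alpha' K alpha."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    alpha, b = np.zeros(n), 0.0
    for _ in range(iters):
        r = y - (K @ alpha + b)               # residuals
        psi = smoothed_check_grad(r, tau, h)  # d loss / d residual
        alpha -= lr * (K @ (-psi / n + lam * alpha))
        b -= lr * (-psi.mean())
    return alpha, b

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * rng.standard_normal(200)
alpha, b = kqr_fit(X, y, tau=0.9)  # estimate the 0.9 conditional quantile curve
```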
Related papers
- A Pathwise Coordinate Descent Algorithm for LASSO Penalized Quantile Regression [0.6445605125467572]
We develop a fast, pathwise coordinate descent algorithm to compute exact penalized quantile regression estimates for high-dimensional data.
Our algorithm runs substantially faster than existing alternatives based on approximate coordinate descent and linear programming.
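A hedged sketch of the pathwise idea, assuming a smoothed check loss stands in for the exact formulation: each coordinate takes a proximal (soft-thresholded) gradient step, and solutions are warm-started along a decreasing lambda path. The paper's algorithm computes exact estimates and differs in its details.

```python
# Pathwise proximal coordinate descent for LASSO-penalized quantile regression
# (sketch under a Huber-smoothed check loss; not the paper's exact algorithm).
import numpy as np

def check_grad(r, tau, h=0.05):
    """Derivative of a Huber-smoothed check loss."""
    return np.where(r >= h, tau, np.where(r <= -h, tau - 1.0, r / (2 * h) + tau - 0.5))

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_quantile_lasso_path(X, y, tau, lambdas, h=0.05, sweeps=100):
    n, p = X.shape
    Lj = (X ** 2).sum(axis=0) / (2 * h * n)  # per-coordinate curvature bounds
    beta, r, path = np.zeros(p), y.copy(), []
    for lam in lambdas:                      # warm start from the previous lambda
        for _ in range(sweeps):
            for j in range(p):
                g = -(X[:, j] * check_grad(r, tau, h)).sum() / n
                new = soft(beta[j] - g / Lj[j], lam / Lj[j])
                r -= X[:, j] * (new - beta[j])   # keep residuals in sync
                beta[j] = new
        path.append(beta.copy())
    return np.array(path)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, 0] - X[:, 1] + rng.standard_normal(100)
path = cd_quantile_lasso_path(X, y, tau=0.5, lambdas=[0.5, 0.2, 0.1, 0.05])
```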
arXiv Detail & Related papers (2025-02-17T22:57:41Z) - Optimal multicore quantum computing with few interconnects [0.0]
We study the complexity behavior of a paradigmatic universal family of random circuits distributed into several cores.
We find that the optimal complexity is reached with few interconnects.
We also analyze how the complexity behaves when processors are scaled up by adding cores of the same size.
arXiv Detail & Related papers (2025-01-17T15:23:54Z) - Adaptive Sampled Softmax with Inverted Multi-Index: Methods, Theory and Applications [79.53938312089308]
The MIDX-Sampler is a novel adaptive sampling strategy based on an inverted multi-index approach.
Our method is backed by rigorous theoretical analysis, addressing key concerns such as sampling bias, gradient bias, convergence rates, and generalization error bounds.
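For intuition, here is a minimal sampled-softmax sketch with importance-corrected logits. The MIDX-Sampler's contribution is constructing the proposal q adaptively with an inverted multi-index; that machinery is not reproduced here, and q below is just a fixed, user-supplied distribution.

```python
# Sampled softmax with a proposal distribution q (importance-corrected logits).
# The adaptive inverted multi-index proposal of MIDX is NOT implemented here.
import numpy as np

def sampled_softmax_loss(user_vec, item_embs, pos_id, q, m=50, seed=0):
    """Approximate full-softmax cross-entropy with m negatives drawn from q."""
    rng = np.random.default_rng(seed)
    neg_ids = rng.choice(len(q), size=m, replace=True, p=q)
    ids = np.concatenate(([pos_id], neg_ids))
    # subtracting log q reduces the bias introduced by non-uniform sampling
    logits = item_embs[ids] @ user_vec - np.log(q[ids])
    logits -= logits.max()  # numerical stability
    return -(logits[0] - np.log(np.exp(logits).sum()))
```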
arXiv Detail & Related papers (2025-01-15T04:09:21Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
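As a rough illustration, one adaptive cubic regularization (ARC) step minimizes the model m(s) = g's + s'Hs/2 + sigma*||s||^3/3; the sketch below does so approximately by gradient descent, and the Hessian passed in may itself be inexact (e.g., subsampled), in the spirit of the inexactness analyzed here. The acceptance test and sigma updates of a full ARC loop are omitted.

```python
# One (approximate) ARC subproblem solve; g and H may be inexact estimates.
import numpy as np

def arc_step(g, H, sigma, iters=200, lr=0.01):
    """Gradient descent on m(s) = g's + s'Hs/2 + sigma*||s||^3/3."""
    s = np.zeros_like(g)
    for _ in range(iters):
        # gradient of the cubic term is sigma * ||s|| * s
        grad_m = g + H @ s + sigma * np.linalg.norm(s) * s
        s -= lr * grad_m
    return s
```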
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model for jointly solving multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding explicit computation of the covariance matrices that dominate the cost of existing methods.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - Quantum speedup of leverage score sampling and its application [0.0]
In this paper, we propose a quantum algorithm to accelerate the computation of leverage scores.
As an application, we propose a new quantum algorithm for ridge regression problems with vector solution outputs.
arXiv Detail & Related papers (2023-01-15T14:40:18Z) - Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
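A minimal sketch of the random-feature idea: replace an expensive kernel with an explicit map phi so that K(x, z) ~ phi(x)'phi(z), turning kernel ridge regression into cheap linear regression. The paper constructs random features for the NNGP kernel; classic random Fourier features for the RBF kernel stand in here.

```python
# Random Fourier features for the RBF kernel (a stand-in for the paper's NNGP features).
import numpy as np

def rff(X, D=512, gamma=1.0, seed=0):
    """phi(x) such that E[phi(x)'phi(z)] = exp(-gamma * ||x - z||^2)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def rff_ridge_fit(X, y, lam=1e-3, D=512, gamma=1.0):
    Phi = rff(X, D, gamma)
    # D x D linear solve instead of the n x n kernel solve
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ y)
```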
arXiv Detail & Related papers (2022-10-21T15:56:13Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration, yet it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
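A hedged sketch of the one-sample flavor: a Frank-Wolfe (conditional gradient) loop whose search direction comes from a stochastic average gradient table, refreshed one component per iteration. The constraint set (an l1 ball), step size, and estimator details are illustrative choices, not the paper's exact method.

```python
# One-sample conditional gradient with a SAG-style averaged gradient (sketch).
import numpy as np

def sag_cgm(grad_i, n, dim, radius=1.0, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    table, avg, x = np.zeros((n, dim)), np.zeros(dim), np.zeros(dim)
    for t in range(1, iters + 1):
        i = rng.integers(n)
        g = grad_i(x, i)                 # one fresh component gradient
        avg += (g - table[i]) / n        # update the running average
        table[i] = g
        v = np.zeros(dim)                # LMO over the l1 ball of given radius
        j = np.argmax(np.abs(avg))
        v[j] = -radius * np.sign(avg[j])
        x += (2.0 / (t + 2)) * (v - x)   # classic Frank-Wolfe step size
    return x

# example: least-squares components f_i(x) = 0.5 * (a_i'x - b_i)^2
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 20)), rng.normal(size=100)
x_hat = sag_cgm(lambda x, i: A[i] * (A[i] @ x - b[i]), n=100, dim=20)
```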
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Exponential Error Convergence in Data Classification with Optimized Random Features: Acceleration by Quantum Machine Learning [8.98526174345299]
Quantum machine learning (QML), i.e., machine learning run on a quantum computer, can exponentially speed up the sampling of optimized random features.
We here construct a QML algorithm for a classification task accelerated by the optimized random features.
We prove that the QML algorithm for optimized random features, combined with stochastic gradient descent (SGD), can achieve state-of-the-art exponential convergence speed.
arXiv Detail & Related papers (2021-06-16T18:00:00Z) - Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical experiments, we apply the proposed method to artificial and real datasets using Monte Carlo simulation.
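A minimal sketch of the manifold-regularization idea under partial labels: known labels are fit directly while a graph Laplacian penalty keeps predictions smooth over all points, so unlabeled points are pulled along by their neighbors. The paper's low-rank formulation and its treatment of uncertain labels are more elaborate.

```python
# Laplacian-regularized regression with partially observed labels (sketch).
import numpy as np

def laplacian_regression(W, y, labeled_mask, mu=1.0, eps=1e-8):
    """W: symmetric similarity (e.g., kNN) matrix; labeled_mask: bool per point."""
    n = len(y)
    L = np.diag(W.sum(axis=1)) - W              # graph Laplacian
    J = np.diag(labeled_mask.astype(float))     # selects labeled points
    # minimize ||J(f - y)||^2 + mu * f'Lf  =>  (J + mu*L) f = J y
    rhs = J @ np.where(labeled_mask, y, 0.0)    # zero out unobserved labels
    return np.linalg.solve(J + mu * L + eps * np.eye(n), rhs)
```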
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Quantum-Inspired Classical Algorithm for Principal Component Regression [1.9105479266011323]
We develop an algorithm for principal component regression that runs in time polylogarithmic in the number of data points.
This exponential speedup enables potential applications to much larger data sets.
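For reference, the dense classical computation that the quantum-inspired algorithm approximates by sampling: principal component regression via a truncated SVD, costing roughly O(n d^2) rather than polylogarithmic time.

```python
# Plain principal component regression via truncated SVD (classical baseline).
import numpy as np

def pcr_fit(X, y, k):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:k].T                          # scores on the top-k components
    gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return Vt[:k].T @ gamma                   # coefficients in the original space

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))
y = X @ rng.normal(size=30) + 0.1 * rng.normal(size=500)
beta_hat = pcr_fit(X, y, k=5)
```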
arXiv Detail & Related papers (2020-10-16T20:50:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.