Nearly Linear Row Sampling Algorithm for Quantile Regression
- URL: http://arxiv.org/abs/2006.08397v1
- Date: Mon, 15 Jun 2020 13:40:07 GMT
- Title: Nearly Linear Row Sampling Algorithm for Quantile Regression
- Authors: Yi Li, Ruosong Wang, Lin Yang, Hanrui Zhang
- Abstract summary: We give a row sampling algorithm for the quantile loss function with sample complexity nearly linear in the dimensionality of the data.
Based upon our row sampling algorithm, we give the fastest known algorithm for quantile regression and a graph sparsification algorithm for balanced directed graphs.
- Score: 54.75919082407094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We give a row sampling algorithm for the quantile loss function with sample
complexity nearly linear in the dimensionality of the data, improving upon the
previous best algorithm whose sampling complexity has at least cubic dependence
on the dimensionality. Based upon our row sampling algorithm, we give the
fastest known algorithm for quantile regression and a graph sparsification
algorithm for balanced directed graphs. Our main technical contribution is to
show that Lewis weights sampling, which has been used in row sampling
algorithms for $\ell_p$ norms, can also be applied in row sampling algorithms
for a variety of loss functions. We complement our theoretical results with
experiments demonstrating the practicality of our approach.
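The core idea in the abstract is to sample rows with probabilities proportional to $\ell_1$ Lewis weights and reweight the kept rows so the subsampled objective approximates the full quantile-regression objective. Below is a minimal numerical sketch of that pipeline, not the paper's actual algorithm: the Lewis weights are computed with the standard fixed-point iteration $w_i \leftarrow \sqrt{a_i^\top (A^\top W^{-1} A)^{-1} a_i}$, and all function names (`l1_lewis_weights`, `quantile_loss`, `sample_rows`) are hypothetical helpers introduced here for illustration.

```python
import numpy as np

def l1_lewis_weights(A, iters=30):
    """Approximate the l1 Lewis weights of A by the fixed-point
    iteration w_i <- sqrt(a_i^T (A^T W^{-1} A)^{-1} a_i).
    At the fixed point the weights sum to d = rank(A)."""
    n, d = A.shape
    w = np.ones(n)
    for _ in range(iters):
        M = A.T @ (A / w[:, None])            # A^T W^{-1} A
        Minv = np.linalg.inv(M)
        scores = np.einsum('ij,jk,ik->i', A, Minv, A)
        w = np.sqrt(np.maximum(scores, 1e-15))
    return w

def quantile_loss(r, tau):
    """Pinball loss: tau*r for r >= 0, (tau-1)*r for r < 0."""
    return np.where(r >= 0, tau * r, (tau - 1) * r)

def sample_rows(A, b, m, rng=None):
    """Sample m rows with probability proportional to the l1 Lewis
    weights; the returned scale 1/(m*p_i) makes the sampled loss an
    unbiased estimate of the full quantile-regression objective."""
    rng = np.random.default_rng(rng)
    w = l1_lewis_weights(A)
    p = w / w.sum()
    idx = rng.choice(len(A), size=m, p=p)
    return A[idx], b[idx], 1.0 / (m * p[idx])
```

One would then minimize `sum(scale * quantile_loss(b_s - A_s @ x, tau))` over the sampled rows, e.g. with a linear-programming solver; the paper's contribution is that a nearly-linear (in the dimensionality) number of such samples suffices.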
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed oracle avoids the need for computationally expensive spectral methods, using a simple gradient step that is robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Bregman-divergence-based Arimoto-Blahut algorithm [53.64687146666141]
We generalize the Arimoto-Blahut algorithm to a general function defined over a Bregman-divergence system.
We propose a convex-optimization-free algorithm that can be applied to classical and quantum rate-distortion theory.
arXiv Detail & Related papers (2024-08-10T06:16:24Z) - Efficient Numerical Algorithm for Large-Scale Damped Natural Gradient Descent [7.368877979221163]
We propose a new algorithm for efficiently solving the damped Fisher matrix in large-scale scenarios where the number of parameters significantly exceeds the number of available samples.
Our algorithm is based on Cholesky decomposition and is generally applicable. Benchmark results show that the algorithm is significantly faster than existing methods.
arXiv Detail & Related papers (2023-10-26T16:46:13Z) - Solving Linear Inverse Problems Provably via Posterior Sampling with Latent Diffusion Models [98.95988351420334]
We present the first framework to solve linear inverse problems leveraging pre-trained latent diffusion models.
We theoretically analyze our algorithm showing provable sample recovery in a linear model setting.
arXiv Detail & Related papers (2023-07-02T17:21:30Z) - Learning Rate Free Sampling in Constrained Domains [21.853333421463603]
We introduce a suite of new particle-based algorithms for sampling in constrained domains which are entirely learning rate free.
We demonstrate the performance of our algorithms on a range of numerical examples, including sampling from targets on the simplex.
arXiv Detail & Related papers (2023-05-24T09:31:18Z) - Quantum Algorithm for Path-Edge Sampling [0.9990687944474739]
We present a quantum algorithm for sampling an edge on a path between two nodes s and t in an undirected graph given as an adjacency matrix.
We use this path sampling algorithm as a subroutine for st-path finding and st-cut-set finding algorithms in some specific cases.
arXiv Detail & Related papers (2023-03-06T17:45:12Z) - Optimizing Objective Functions from Trained ReLU Neural Networks via Sampling [0.0]
This paper introduces scalable, sampling-based algorithms that optimize trained neural networks with ReLU activations.
We first propose an iterative algorithm that takes advantage of the piecewise linear structure of ReLU neural networks.
We then extend this approach by searching around the neighborhood of the LP solution computed at each iteration.
arXiv Detail & Related papers (2022-05-27T18:35:48Z) - Classical simulation of boson sampling based on graph structure [2.5496329090462626]
We present classical sampling algorithms for single-photon and Gaussian input states that take advantage of a graph structure of a linear-optical circuit.
We show that when the circuit depth is less than quadratic in the lattice spacing, efficient simulation is possible with an exponentially small error.
We implement a likelihood test against a recent Gaussian boson sampling experiment and show that the treewidth-based algorithm with a limited treewidth renders a larger likelihood than the experimental data.
arXiv Detail & Related papers (2021-10-04T17:02:35Z) - Non-Adaptive Adaptive Sampling on Turnstile Streams [57.619901304728366]
We give the first relative-error algorithms for column subset selection, subspace approximation, projective clustering, and volume on turnstile streams that use space sublinear in $n$.
Our adaptive sampling procedure has a number of applications to various data summarization problems that either improve state-of-the-art or have only been previously studied in the more relaxed row-arrival model.
arXiv Detail & Related papers (2020-04-23T05:00:21Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.