Sketch-and-Solve: Optimized Overdetermined Least-Squares Using Randomized Numerical Linear Algebra
- URL: http://arxiv.org/abs/2409.14309v1
- Date: Sun, 22 Sep 2024 04:29:51 GMT
- Title: Sketch-and-Solve: Optimized Overdetermined Least-Squares Using Randomized Numerical Linear Algebra
- Authors: Alex Lavaee
- Abstract summary: This paper focuses on applying sketch-and-solve algorithms to efficiently solve the overdetermined least squares problem.
We introduce the Sketch-and-Apply (SAA-SAS) algorithm, which leverages randomized numerical linear algebra techniques to compute approximate solutions efficiently.
Our results highlight the potential of sketch-and-solve techniques in efficiently handling large-scale numerical linear algebra problems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sketch-and-solve is a powerful paradigm for tackling large-scale computational problems by reducing their dimensionality using sketching matrices. This paper focuses on applying sketch-and-solve algorithms to efficiently solve the overdetermined least squares problem, which is fundamental in various domains such as machine learning, signal processing, and numerical optimization. We provide a comprehensive overview of the sketch-and-solve paradigm and analyze different sketching operators, including dense and sparse variants. We introduce the Sketch-and-Apply (SAA-SAS) algorithm, which leverages randomized numerical linear algebra techniques to compute approximate solutions efficiently. Through extensive experiments on large-scale least squares problems, we demonstrate that our proposed approach significantly outperforms the traditional Least-Squares QR (LSQR) algorithm in terms of runtime while maintaining comparable accuracy. Our results highlight the potential of sketch-and-solve techniques in efficiently handling large-scale numerical linear algebra problems.
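Below is a minimal, self-contained illustration of the sketch-and-solve idea for overdetermined least squares using a dense Gaussian sketching matrix. The problem sizes, the sketch dimension k, and the NumPy implementation are illustrative assumptions, not the paper's SAA-SAS code.

```python
# Sketch-and-solve for the overdetermined least-squares problem min_x ||Ax - b||_2:
# compress the n x d system with a k x n Gaussian sketch S (d <= k << n),
# then solve the much smaller sketched problem min_x ||(SA)x - Sb||_2 directly.
import numpy as np

rng = np.random.default_rng(0)

n, d = 10_000, 50                 # tall problem: n >> d (illustrative sizes)
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

k = 500                           # sketch dimension, d <= k << n
S = rng.standard_normal((k, n)) / np.sqrt(k)   # dense Gaussian sketch

x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)   # sketched solve
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)            # full solve, for comparison

print("relative error vs. exact solution:",
      np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```

With a dense sketch the dominant cost is forming S @ A; the sparse sketching operators surveyed in the paper (e.g., CountSketch-style embeddings) are designed to make that multiplication cost roughly proportional to the number of nonzeros of A.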
Related papers
- Sketching for Convex and Nonconvex Regularized Least Squares with Sharp Guarantees [23.614074988517864]
In this paper, we propose a fast approximation for least-squares problems regularized by convex or nonconvex regularization functions.
Our results are among the first to demonstrate minimax rates for estimation by solving sketched convex or nonconvex approximation problems.
arXiv Detail & Related papers (2023-11-03T09:35:01Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem (a minimal CountSketch example appears after this list).
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
arXiv Detail & Related papers (2023-06-11T07:28:35Z) - Learning Linear Models Using Distributed Iterative Hessian Sketching [4.567810220723372]
We consider the problem of learning the Markov parameters of a linear system from observed data.
We show that a randomized and distributed Newton algorithm based on Hessian-sketching can produce $\epsilon$-optimal solutions.
arXiv Detail & Related papers (2021-12-08T04:07:23Z) - Fast Convex Quadratic Optimization Solvers with Adaptive Sketching-based Preconditioners [37.03247707259297]
We consider least-squares problems with quadratic regularization and propose novel sketching-based iterative methods with an adaptive sketch size.
The sketch size can be as small as the effective dimension of the data matrix to guarantee linear convergence.
A major difficulty in choosing the sketch size in terms of the effective dimension lies in the fact that the latter is usually unknown in practice.
arXiv Detail & Related papers (2021-04-29T04:36:41Z) - Learning-Augmented Sketches for Hessians [54.97773807211337]
We show how to design learned sketches for the Hessian in the context of second order methods.
We show empirically that learned sketches, compared with their "non-learned" counterparts, improve the approximation accuracy for important problems.
arXiv Detail & Related papers (2021-02-24T14:50:59Z) - Precise expressions for random projections: Low-rank approximation and randomized Newton [63.68433510953756]
Matrix sketching has emerged as a powerful technique for performing dimensionality reduction very efficiently.
We develop techniques that provide provably accurate expressions for the expected value of random projection matrices obtained via sketching.
These expressions can be used to characterize the performance of dimensionality reduction in a variety of common machine learning tasks.
arXiv Detail & Related papers (2020-06-18T16:23:00Z) - Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of randomized first-order methods encompasses several of the fastest known solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
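As a companion to the dense Gaussian sketch above, the following is a minimal sketch of a classical CountSketch-style sparse embedding, the kind of operator whose nonzero positions the "Learning the Positions in CountSketch" paper above learns rather than drawing at random. The sizes and variable names are illustrative assumptions, not that paper's implementation.

```python
# Classical CountSketch-style sparse sketch: each of the n input rows is mapped
# to one uniformly random row of the k x n sketch with a random +/-1 sign, so
# applying the sketch costs O(nnz(A)) rather than the O(n*d*k) of a dense sketch.
import numpy as np

rng = np.random.default_rng(1)

n, d, k = 10_000, 50, 500               # illustrative sizes
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

rows = rng.integers(0, k, size=n)        # random bucket (sketch row) for each data row
signs = rng.choice([-1.0, 1.0], size=n)  # random sign flip for each data row

SA = np.zeros((k, d))
Sb = np.zeros(k)
np.add.at(SA, rows, signs[:, None] * A)  # accumulate signed rows of A into buckets
np.add.at(Sb, rows, signs * b)

x_sketch, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
print("sketched solution shape:", x_sketch.shape)
```

The learning-based variant described above replaces the random `rows` assignment with positions optimized on training data; the random construction shown here is the classical baseline it starts from.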