An Improved Frequent Directions Algorithm for Low-Rank Approximation via
Block Krylov Iteration
- URL: http://arxiv.org/abs/2109.11703v1
- Date: Fri, 24 Sep 2021 01:36:42 GMT
- Title: An Improved Frequent Directions Algorithm for Low-Rank Approximation via
Block Krylov Iteration
- Authors: Chenhao Wang, Qianxin Yi, Xiuwu Liao, Yao Wang
- Abstract summary: This paper presents a fast and accurate Frequent Directions algorithm named r-BKIFD.
The proposed r-BKIFD enjoys an error bound comparable to that of the original Frequent Directions, and the approximation error can be made arbitrarily small when the number of iterations is chosen appropriately.
- Score: 11.62834880315581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Frequent Directions, a deterministic matrix sketching technique, has
been proposed for tackling low-rank approximation problems. The method is
highly accurate and practical, but incurs substantial computational cost on
large-scale data. Several recent works on randomized versions of Frequent
Directions greatly improve computational efficiency, but unfortunately
sacrifice some precision. To remedy this issue, this paper aims to find a more
accurate projection subspace that further improves the efficiency and
effectiveness of existing Frequent Directions techniques. Specifically, by
exploiting the power of Block Krylov Iteration and the random projection
technique, this paper presents a fast and accurate Frequent Directions
algorithm named r-BKIFD. A rigorous theoretical analysis shows that the
proposed r-BKIFD enjoys an error bound comparable to that of the original
Frequent Directions, and that the approximation error can be made arbitrarily
small when the number of iterations is chosen appropriately. Extensive
experiments on both synthetic and real data further demonstrate the
superiority of r-BKIFD over several popular Frequent Directions algorithms in
terms of both computational efficiency and accuracy.
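For intuition, below is a minimal NumPy sketch of the two building blocks the abstract names: classical Frequent Directions (the streaming sketch) and randomized Block Krylov Iteration (the subspace finder). It is a simplified illustration under stated assumptions, not the authors' r-BKIFD; the precise way r-BKIFD combines these pieces, and all parameter choices, are given in the paper.

```python
import numpy as np

def frequent_directions(A, ell):
    """Classical Frequent Directions (Liberty, 2013): stream the rows of A
    into a 2*ell-row sketch B; whenever B fills up, shrink its singular
    values so at least half of the rows become zero."""
    n, d = A.shape
    assert d >= 2 * ell, "this simple sketch assumes d >= 2*ell"
    B = np.zeros((2 * ell, d))
    next_free = 0
    for row in A:
        if next_free == 2 * ell:                     # sketch is full: shrink
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s**2 - s[ell - 1]**2, 0.0))
            B = s[:, None] * Vt                      # bottom rows are now zero
            next_free = ell
        B[next_free] = row
        next_free += 1
    return B                                         # B^T B approximates A^T A

def block_krylov_basis(A, k, q, seed=0):
    """Randomized Block Krylov Iteration (Musco & Musco, 2015): orthonormal
    basis for span([A P, (A A^T) A P, ..., (A A^T)^q A P]), P Gaussian."""
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], k))
    blocks = [Y]
    for _ in range(q):
        Y = A @ (A.T @ Y)
        blocks.append(Y)
    Q, _ = np.linalg.qr(np.hstack(blocks))
    return Q

def krylov_rank_k(A, k, q=3):
    """Rank-k approximation of A: project onto the Krylov basis, then
    truncate with a small SVD."""
    Q = block_krylov_basis(A, k, q)
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]
```

Per the abstract, the accuracy gain of r-BKIFD comes from replacing the plain random projection used by earlier randomized Frequent Directions variants with a more accurate Krylov-type projection subspace.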
Related papers
- Derivative-Free Optimization via Finite Difference Approximation: An Experimental Study [1.3886390523644807]
Derivative-free optimization (DFO) is vital in solving complex optimization problems where only noisy function evaluations are available through an oracle.
Two classical iterative approaches are the Kiefer-Wolfowitz (KW) and simultaneous perturbation stochastic approximation (SPSA) algorithms.
This paper conducts a comprehensive experimental comparison among these approaches.
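As a concrete illustration of one of the two classical schemes, here is a minimal SPSA sketch with Spall's standard gain sequences; the constants are illustrative and the paper's experimental settings are not reproduced.

```python
import numpy as np

def spsa_minimize(f, x0, iters=500, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """Minimal SPSA: estimate the whole gradient from only two (noisy)
    function evaluations per step via a random Rademacher perturbation."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(1, iters + 1):
        ak, ck = a / k**alpha, c / k**gamma          # standard decaying gains
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        g_hat = (f(x + ck * delta) - f(x - ck * delta)) / (2 * ck) * delta
        x -= ak * g_hat
    return x

# Usage: minimize a noisy quadratic using function evaluations only.
noise = np.random.default_rng(1)
print(spsa_minimize(lambda x: np.sum(x**2) + 0.01 * noise.standard_normal(),
                    x0=np.ones(10)))
```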
arXiv Detail & Related papers (2024-10-31T18:07:44Z)
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND (Best-of-$N$ Distillation) and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z)
- Machine Learning Training Optimization using the Barycentric Correction Procedure [0.0]
This study proposes combining machine learning algorithms with an efficient methodology known as the barycentric correction procedure (BCP).
This combination was found to provide significant time savings on synthetic and real data, without losing accuracy, as the number of instances and dimensions increases.
arXiv Detail & Related papers (2024-03-01T13:56:36Z)
- Efficient Numerical Algorithm for Large-Scale Damped Natural Gradient Descent [7.368877979221163]
We propose a new algorithm for efficiently solving linear systems involving the damped Fisher matrix in large-scale scenarios where the number of parameters significantly exceeds the number of available samples.
Our algorithm is based on Cholesky decomposition and is generally applicable. Benchmark results show that the algorithm is significantly faster than existing methods.
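The summary does not spell out the formulation, but a standard way to combine Cholesky factorization with the p >> n regime is the Woodbury identity, which reduces the p x p damped system to an n x n one. A hypothetical sketch under that assumption (the paper's exact method may differ):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def damped_natural_gradient(J, g, lam):
    """Solve (J^T J / n + lam*I) x = g for p-dimensional x when p >> n,
    where J is the n x p per-sample Jacobian, so J^T J / n plays the role
    of the damped Fisher matrix. Woodbury reduces the p x p solve to a
    Cholesky factorization of a small n x n SPD matrix."""
    n, p = J.shape
    S = n * lam * np.eye(n) + J @ J.T                # small n x n system
    c_and_low = cho_factor(S)
    # (lam*I + J^T J / n)^{-1} g = (g - J^T S^{-1} J g) / lam
    return (g - J.T @ cho_solve(c_and_low, J @ g)) / lam
```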
arXiv Detail & Related papers (2023-10-26T16:46:13Z)
- Large-scale Optimization of Partial AUC in a Range of False Positive Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on a recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of ranked range loss, a problem that likewise lacks efficient solvers.
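For reference, the objective being optimized: the empirical partial AUC restricted to an FPR band counts correctly ranked (positive, negative) pairs whose negatives fall inside the band. A minimal sketch of the metric itself, not of the paper's smoothing-based optimizer:

```python
import numpy as np

def partial_auc(scores_pos, scores_neg, fpr_lo=0.0, fpr_hi=0.3):
    """Normalized empirical partial AUC over the FPR range [fpr_lo, fpr_hi]:
    negatives whose ranks fall in the band are the 'hard' negatives, and we
    count the fraction of (positive, hard-negative) pairs ranked correctly.
    Assumes a non-empty band and ignores score ties."""
    scores_pos = np.asarray(scores_pos)
    neg_desc = np.sort(scores_neg)[::-1]
    n_neg = len(neg_desc)
    band = neg_desc[int(np.floor(fpr_lo * n_neg)):int(np.ceil(fpr_hi * n_neg))]
    return (scores_pos[:, None] > band[None, :]).mean()
```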
arXiv Detail & Related papers (2022-03-03T03:46:18Z)
- Effective Streaming Low-tubal-rank Tensor Approximation via Frequent Directions [9.43704219585568]
This paper extends a popular matrix sketching technique, namely Frequent Directions, for constructing an efficient and accurate low-tubal-rank tensor approximation.
Specifically, the new algorithm allows the tensor data to be observed slice by slice, but only needs to maintain and incrementally update a much smaller sketch.
The rigorous theoretical analysis shows that the approximation error of the new algorithm can be arbitrarily small when the sketch size grows linearly.
arXiv Detail & Related papers (2021-08-23T12:53:44Z)
- Learned Block Iterative Shrinkage Thresholding Algorithm for Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
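For context, one plain (non-learned) block-ISTA iteration looks as follows; the paper unrolls such iterations into network layers and learns the regularization parameters, which this sketch leaves fixed.

```python
import numpy as np

def block_ista(A, y, lam, n_blocks, iters=100):
    """Plain block-ISTA: a gradient step on ||Ax - y||^2 / 2 followed by a
    block soft-threshold that shrinks whole blocks of x toward zero,
    promoting block sparsity. lam is fixed here; the learned variant in the
    paper picks it layer by layer."""
    x = np.zeros(A.shape[1])
    eta = 1.0 / np.linalg.norm(A, 2) ** 2            # step 1/L, L = sigma_max^2
    for _ in range(iters):
        z = x - eta * A.T @ (A @ x - y)              # gradient step
        for b in np.array_split(np.arange(len(z)), n_blocks):
            nrm = np.linalg.norm(z[b])
            if nrm > 0:
                z[b] *= max(0.0, 1.0 - eta * lam / nrm)  # block soft-threshold
        x = z
    return x
```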
arXiv Detail & Related papers (2020-12-07T09:27:16Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis.
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
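A minimal version of the thresholding idea (the paper's exact scheme and constants may differ): compute the dense top principal direction, keep only its k largest-magnitude loadings, and renormalize.

```python
import numpy as np

def thresholded_spca(X, k):
    """Thresholding sketch for SPCA: dense top right-singular vector of the
    centered data, truncated to its k largest-magnitude loadings."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    v = Vt[0]
    keep = np.argsort(np.abs(v))[-k:]                # k largest loadings
    v_sparse = np.zeros_like(v)
    v_sparse[keep] = v[keep]
    return v_sparse / np.linalg.norm(v_sparse)
```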
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
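In its simplest sketch-and-solve form (the paper studies more refined, effective-dimension-adaptive variants), a Gaussian embedding compresses the rows of the problem before the small ridge system is solved exactly; an SRHT would replace the Gaussian matrix S below.

```python
import numpy as np

def sketched_ridge(A, b, lam, m, seed=0):
    """Sketch-and-solve for min_x ||Ax - b||^2 + lam*||x||^2: compress the
    n rows of (A, b) to m << n rows with a Gaussian embedding, then solve
    the small d x d normal equations exactly."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)  # Gaussian embedding
    SA, Sb = S @ A, S @ b
    d = A.shape[1]
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
```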
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.