Difference vs. Quotient: A Novel Algorithm for Dominant Eigenvalue Problem
- URL: http://arxiv.org/abs/2501.15131v1
- Date: Sat, 25 Jan 2025 08:37:17 GMT
- Title: Difference vs. Quotient: A Novel Algorithm for Dominant Eigenvalue Problem
- Authors: Xiaozhi Liu, Yong Xia
- Abstract summary: This paper introduces a novel perspective by reformulating the eigenvalue problem using an unconstrained Difference formulation.
Within this family, we propose the Split-Merge algorithm, which achieves maximal acceleration without spectral prior knowledge.
- Score: 6.938695246282628
- License:
- Abstract: The computation of the dominant eigenvector of symmetric positive semidefinite matrices is a cornerstone operation in numerous machine learning applications. Traditional approaches predominantly rely on the constrained Quotient formulation, which underpins most existing methods. However, these methods often suffer from challenges related to computational efficiency and dependence on spectral prior knowledge. This paper introduces a novel perspective by reformulating the eigenvalue problem using an unconstrained Difference formulation. This new approach sheds light on classical methods, revealing that the power method can be interpreted as a specific instance of Difference of Convex Algorithms. Building on this insight, we develop a generalized family of Difference-Type methods, which encompasses the power method as a special case. Within this family, we propose the Split-Merge algorithm, which achieves maximal acceleration without spectral prior knowledge and operates solely through matrix-vector products, making it both efficient and easy to implement. Extensive empirical evaluations on both synthetic and real-world datasets highlight that the Split-Merge algorithm achieves over a $\boldsymbol{10\times}$ speedup compared to the basic power method, offering significant advancements in efficiency and practicality for large-scale machine learning problems.
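For context, below is a minimal NumPy sketch of the classical power method, the baseline against which the paper reports its over-$10\times$ speedup; like Split-Merge, it operates solely through matrix-vector products. The function name, tolerance, and stopping rule are illustrative assumptions, and this is not the paper's Split-Merge algorithm.

```python
# Minimal power-method sketch (the classical baseline the paper accelerates).
# Assumes A is symmetric positive semidefinite; not the Split-Merge algorithm.
import numpy as np

def power_method(A, num_iters=1000, tol=1e-10, seed=None):
    """Approximate the dominant eigenpair of a symmetric PSD matrix A
    using only matrix-vector products."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = x @ A @ x                      # Rayleigh quotient estimate
    for _ in range(num_iters):
        y = A @ x                        # one matrix-vector product per step
        x_new = y / np.linalg.norm(y)
        lam_new = x_new @ A @ x_new
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            return lam_new, x_new        # eigenvalue estimate has stabilized
        lam, x = lam_new, x_new
    return lam, x

# Usage: build a random PSD matrix and recover its dominant eigenpair.
B = np.random.default_rng(0).standard_normal((200, 50))
A = B @ B.T / 50                         # symmetric PSD by construction
lam, x = power_method(A, seed=0)
```

Each iteration costs one matrix-vector product, which is why such methods scale to the large problems the abstract targets; the paper's contribution is accelerating this iteration without spectral prior knowledge.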
Related papers
- Nonlinear Operator Learning Using Energy Minimization and MLPs [0.5898893619901381]
We develop and evaluate a method for learning solution operators to nonlinear problems governed by partial differential equations.
The approach is based on a finite element discretization and aims at representing the solution operator by an iteration that takes latent variables as input.
arXiv Detail & Related papers (2024-12-05T20:19:16Z)
- The Stochastic Conjugate Subgradient Algorithm For Kernel Support Vector Machines [1.738375118265695]
This paper proposes an innovative method specifically designed for kernel support vector machines (SVMs).
It not only achieves faster per-iteration progress but also exhibits enhanced convergence when compared to conventional SFO techniques.
Our experimental results demonstrate that the proposed algorithm not only maintains but potentially exceeds the scalability of SFO methods.
arXiv Detail & Related papers (2024-07-30T17:03:19Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the explicit computation of covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- One-step differentiation of iterative algorithms [7.9495796547433395]
We study one-step differentiation, also known as Jacobian-free backpropagation, a method as easy as automatic differentiation.
We provide a complete theoretical approximation analysis with specific examples along with its consequences in bilevel optimization.
arXiv Detail & Related papers (2023-05-23T07:32:37Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
Using this framework, our analysis shows that these algorithms match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- A Discrete Variational Derivation of Accelerated Methods in Optimization [68.8204255655161]
We introduce variational integrators, which allow us to derive different methods for optimization.
We derive two families of optimization methods in one-to-one correspondence.
The preservation of symplecticity of autonomous systems occurs here solely on the fibers.
arXiv Detail & Related papers (2021-06-04T20:21:53Z)
- Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtaining sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis (SPCA).
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem; a minimal thresholding sketch appears after this list.
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
- Statistically Guided Divide-and-Conquer for Sparse Factorization of Large Matrix [2.345015036605934]
We formulate the statistical problem as a sparse factor regression and tackle it with a divide-and-conquer approach.
In the first stage of division, we consider both latent and observed parallel approaches for simplifying the task into a set of co-sparse unit-rank estimation (CURE) problems.
In the second stage of division, we innovate a stagewise learning technique, consisting of a sequence of simple incremental paths, to efficiently trace out the whole solution paths of CURE.
arXiv Detail & Related papers (2020-03-17T19:12:21Z)
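As referenced in the SPCA entry above, here is a minimal sketch of eigenvector thresholding for sparse PCA. The helper name and the simple keep-top-k rule are our own illustration of the general thresholding idea, not the exact algorithm analyzed in that paper.

```python
# Hedged sketch of eigenvector thresholding for sparse PCA: compute the
# dominant eigenvector, keep only its k largest-magnitude entries, renormalize.
import numpy as np

def thresholded_spca(A, k):
    """Return a k-sparse approximation of the dominant eigenvector of a
    symmetric matrix A."""
    eigvals, eigvecs = np.linalg.eigh(A)     # eigenvalues in ascending order
    v = eigvecs[:, -1]                       # dominant eigenvector
    support = np.argsort(np.abs(v))[-k:]     # indices of k largest entries
    v_sparse = np.zeros_like(v)
    v_sparse[support] = v[support]
    return v_sparse / np.linalg.norm(v_sparse)

# Usage: a sparse direction capturing variance of a sample covariance matrix.
X = np.random.default_rng(1).standard_normal((500, 40))
cov = X.T @ X / 500
v = thresholded_spca(cov, k=5)
print(float(v @ cov @ v))                    # variance explained by sparse v
```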
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.